id stringlengths 36 36 | source stringclasses 15 values | formatted_source stringclasses 13 values | text stringlengths 2 7.55M |
|---|---|---|---|
2121d75c-0670-4a76-9c86-6bcfc135600a | trentmkelly/LessWrong-43k | LessWrong | Predictions as Public Works Project — What Metaculus Is Building Next
This post is by Metaculus's Sylvain Chevalier, with input provided by Molly Hickman, and Deger Turan.
Forecasting should lead to smarter decisions on consequential issues. This is challenging, and no one knows the best way to do it, but we’re committed to making it happen at Metaculus.
Below are preconditions we’ve identified:
Forecasts should be trustworthy
Our forecasts should be calibrated. They should be robust to manipulation. They should also be accurate—but this prompts the question: “accurate compared to what other forecast?” We’ve recently published a methodology on how to rigorously compare two forecasters on a set of questions.
Forecasts should be interpretable
The barrier to entry for using forecasts is very high. Newcomers do not know what to do with a 75% forecast. This is partly a user interface problem (Metaculus is too nerdy for many people), and partly a fundamental problem of interpretability, explainability, and transparency. One needs to understand where a forecast comes from to make good use of it.
Questions should be relevant
None of the above matters if we’re not asking the right questions. Is the question confounded in a way that makes the forecast useless? Are we using a good proxy? Would a definite answer actually change anyone’s mind? To solve for these we need to work closely with the people seeking the answers.
How will we achieve all this? When the best approaches are uncertain, it's time to experiment. In the coming months, we will run quick experiments, build prototypes, and make bold changes—both on Metaculus.com and off-platform. To succeed, the most promising ideas will need you: the forecasters, the readers, the commenters, and the decision-makers. We hope you'll help us identify what works, iterate on the best ideas, and bring them to life.
To enable this fast iteration, we’ve fully rewritten our site. Metaculus launched in 2016, and our codebase was old. To build the sense-making tools we want, and to go open source, |
b0d3786f-78a7-449b-9598-120e4faad59c | trentmkelly/LessWrong-43k | LessWrong | (Confusion Phrases) AKA: Things You Might Say or Think When You're Confused to Use as Triggers for Internal TAPs
Why I Wrote this list
I just finished reading Sun Set at Noon by Raemon. One thing I took from it is that when you are confused you may not notice because "I Feel Confused" isn't the phrase that is said inside of your head. Instead you may say something like "huh, weird." which can be overlooked by any programs running "Notice_Confusion.bat". I call these "Confusion Phrases" (C-Phrases).
Here I give an incomplete list of C-Phrases. Things you may say in your head subconsciously or consciously that if noticed could run "Notice_Confusion.bat".
Transparency: I have used the GPT3.5 API in helping generate some of these. The first phrase in this list is generated by GPT3.5. The rest are my own. Thank you to Robert Miles for pointing out the poor quality of the initial post. I hope that I've done justice to your feedback.
----------------------------------------
List of Phrases
1. I'm at a loss for words.
1. The wind has been knocked out of me
2. I'll skip this and come back to it later
1. I'll come back to this later
2. I keep rereading this page/paragraph
3. (Sub-Conscious) Idk what that word means so I'm going to skip reading it.
1. (When I was a kid I didn't know why some words in Wikipedia articles were blue. I trained myself to automatically skip those words because they were often things I was confused about anyway.)
3. WTF?
1. WTAF?
2. WTAFIGOH?
3. Lol, ok
4. Dude, WTF?
4. Huh, I guess I just forgot about X
1. I don't remember X being mentioned before
2. When did X come up before?
3. I swear to Bayes that I remember X but everyone keeps saying Y
1. (Refusal to acknowledge that one is how you end up believing the Mandela effect)
5. I feel so angry at what this person is saying (note that this and it’s related points don’t necessarily mean you are confused. But are a signal to run Notice_Confusion)
1. I feel angry at what I can see this person doing
1. At what I can see them not doing
|
f37c313b-0437-401b-849c-be4615787efa | StampyAI/alignment-research-dataset/arbital | Arbital | Well-defined
"Well-defined" is a slightly fuzzy word in mathematics.
Broadly, an object is said to be "well-defined" if it has been given a definition that is completely unambiguous, can be executed without regard to any arbitrary choices the mathematician might make, or generally is crisply defined.
(The [Wikipedia page](https://en.wikipedia.org/wiki/Well-defined) on well-definedness contains many examples for those who are more comfortable with mathematical notation.)
# Specific instances
## Functions
One of the most common uses of the phrase "well-defined" is when talking about [functions](https://arbital.com/p/3jy).
A function is **well-defined** if it really is a bona fide function.
This usually manifests itself as the following:
> Whenever $x=y$, we have $f(x) = f(y)$: that is, the output of the function doesn't depend on how we specify the input to the function, only on the input itself.
This property is often pretty easy to check.
For instance, the function from [$\mathbb{N}$](https://arbital.com/p/45h) to itself given by $n \mapsto n+1$ is "obviously" well-defined: it's trivially obvious that if $n=m$ then $f(n) = f(m)$.
However, sometimes it is not so easy.
The function $\mathbb{N} \to \mathbb{N}$ given by "take the number of prime factors" is not obviously well-defined, because it could in principle be the case that some number $n$ is equal to both $p_1 p_2 p_3$ and $q_1 q_2$ for some primes $p_1, p_2, p_3, q_1, q_2$; then our putative function might plausibly attempt to output either $3$ or $2$ on the same natural number input $n$, so the function would not be well-defined.
(It turns out that there is a non-trivial theorem, the [Fundamental Theorem of Arithmetic](https://arbital.com/p/5rh), guaranteeing that this function *is* in fact well-defined.)
Well-definedness in this context comes up very often when we are attempting to take a [quotient](https://arbital.com/p/quotient_by_equivalence_relation).
The fact that we can take the quotient of a [https://arbital.com/p/-3jz](https://arbital.com/p/-3jz) $X$ by an [https://arbital.com/p/-53y](https://arbital.com/p/-53y) $\sim$ is tantamount to saying:
> The function $X \to \frac{X}{\sim}$, given by $x \mapsto [https://arbital.com/p/x](https://arbital.com/p/x)$ the equivalence class of $X$, is well-defined.
---
Another, different, way a function could fail to be well-defined is if we tried to take the function $\mathbb{N} \to \mathbb{N}$ given by $n \mapsto n-5$.
This function is unambiguous, but it's not well-defined, because on the input $2$ it tries to output $-3$, which is not in the specified [codomain](https://arbital.com/p/3lg). |
3abf38ee-a88e-494a-8498-50c7de967c74 | trentmkelly/LessWrong-43k | LessWrong | Is a PhD necessary to contribute meaningfully to a field?
For context, I have happened to find myself in the highly fortunate situation of being relatively young (late-twenties) and having met my financial retirement goals. I have always held a fascination with cognitive science and want to now orient my life to participation within said field in alignment with some of my other interests, particularly the cognitive processes underlying altered states of consciousness (psychedelics, flow, religious ecstasy). After much thought, I've been able to see that a personal skill I've developed is in being able to synthesize existing information and communicate it to others in a way that is easy to understand and I would like to approach my academic pursuit from this angle instead of being involved in the actual academic lab work. I am still relatively naive to the workings and structure of academia but am trying to find a place within the field where I can combine my interests and skills to meaningfully contribute to the field of study as a whole. My fear is that this would limit me to largely commentary and synthesis of existing information and that it would label me as less credible because I am not actually involved in the work of producing scientific literature itself.
My question is less oriented around Cognitive Science as a field of study specifically but in regards to academic fields of study as a whole. Do I need to have a Ph.D. in order to write books and articles that summarize existing literature, postulate potential avenues of research, and theorize within the field of cog sci? I currently have my bachelors in cloud computing but would any other degree of education work such as an MSc.? I have the resources and time to acquire these if necessary but I am just trying to see the best return on my time investment if I don't necessarily desire to publish academic research.
Thank you. |
839c29cd-7b3e-4824-aa16-6c3d0842865f | trentmkelly/LessWrong-43k | LessWrong | Towards an Algorithm for (Human) Self-Modification
LessWrong is wonderful. Life-changing. Best thing that ever happened to me.
But it's not really enough to make one a rationalist, is it? I don't assimilate or even remember all of the knowledge contained in what I read, and I certainly don't dynamically incorporate it into my life-strategy.
Say you want your computer to be able to open Microsoft Word files. In order to do this, you do not upload a PDF which contains a description of how Microsoft Word works. No, you install the program and then you run the program.
Over several months of reading LessWrong I found myself wishing I had (a) computer program(s) that could train me to be a rationalist instead of a website that told me about how to be a rationalist. I would read an article with a tremendous sense of excitement, thinking to myself, "This is it, I have to implement this insight into my life. This is a change that I must realize." But I would inevitably hit a mental wall when I saw that just knowing that something was a good idea didn't actually rewire my brain toward better cognitive habits.
I wanted a rationality installer.
I found myself in the midst of a personal crisis. I came to suspect that the reason for my unhappiness and akrasia was that my goals and my actions had become decoupled - I just couldn't figure out where, or how.
So I set out to make a program that would help me organize what my actual terminal goals and values are, and then help me causally connect my day-to-day activities with these goals and values. The idea was to create a kind of tree with end-goals at the parents and daily tasks as the children. The resulting application was not very user-friendly, but it still worked.
With the help of my program, I saw that a year ago, I was very happy with my life because all the activities I pursued on a daily basis were very high-utility and directly connected to the achievement of my goals. I saw that I had recently formed a new long-term goal, the existence of which alter |
599d3da6-3cf0-4773-bdae-75c66d691bec | trentmkelly/LessWrong-43k | LessWrong | Why do futurists care about the culture war?
I think it doesn't make sense why some futurists (e.g. Elon Musk, Peter Thiel) care so much about the culture war. After the singularity, a lot of the conflicts should disappear.
* Transsexuals: should we change the body's gender to fit the mind or change the mind's gender to fit the body? After the singularity we'll have the technology to make sure minds and bodies have the same gender to begin with.
* Illegal immigrants: after the singularity everyone can join their own society and won't need to immigrate to richer places for better living standards.
* Gun rights: we'll have the technology to prevent guns from killing people. There are many ways to do this, but the most intriguing way would be to make backup copies of people.
* Police brutality: similar to gun rights, if there isn't the fear of death, even the most pro-police people will agree on greatly reducing brutality.
* Abortion: should fetuses be killed or should women be forced to carry pregnancies they don't want? After the singularity we'll have technologies to cheaply avoid both morally uncertain choices (again there are multiple possibilities).
* Drug use: after the singularity we can make drugs no longer addictive but still feel good. Many activities today are like that.
* Affirmative action: after the singularity anyone can attend any university they wish, and all jobs will become voluntary since ASI can work better than humans. We won't need affirmative action to counterbalance unconscious racial discrimination because we'll have the technology to delete unconscious racial discrimination. We won't need affirmative action to counterbalance intergenerational damage because we'll have the technology to fix its effects.
* Wealth distribution: there will be countless orders of magnitude more wealth than needed. Even the equivalent of food stamps after the singularity will provide a higher quality of life than a million dollar salary today.
* Healthcare: healthcare after the singularity design |
95ba9361-2a89-40e7-99dc-87c112d6c731 | trentmkelly/LessWrong-43k | LessWrong | PSA: People can see what you've "liked" and "disliked" if you checked "Make my votes public"
I think this feature might have been broken a while ago, but it works now. So if you don't want your likes and dislikes to be public, go to your preferences page and uncheck "Make my votes public." At the moment, the upvotes and downvotes of many prominent users are visible by clicking their username and then clicking "Liked" or "Disliked".
That is all. |
474bde4b-caab-4313-95c9-a177da1b91f0 | trentmkelly/LessWrong-43k | LessWrong | The Third Fundamental Question
I.
Epistemic status: I’m making a big claim based on little more than how useful it’s been to me personally. Pushback is encouraged. That said, I’m also putting this forward as one tool; if your brain is like my brain then you may find this a very useful tool indeed, while other people may not find it’s something they use.
It has been written that the fundamental question of rationality is “What do you think you know and how do you think you know it?” There is a further claim that a question of equal importance is “Do you know what you are doing, and why you are doing it?”
I would like to propose a third fundamental question, which I believe completes a rule of three: “What are you about to do and what do you think will happen next?”
That sentence is a Sazen. The rest of this post is how the question works, some examples, and the benefits.
(Pedants in the audience, I am aware this is two questions joined by a conjunction. They work together, and the original is referred to as the fundamental question, singular.)
* Past: What do you think you know, and how do you think you know it?
* Present: Do you know what you are doing, and why you are doing it?
* Future: What are you about to do, and what do you think will happen next?
II.
The following are examples drawn from my life. Events and quotes in this section are not exact, and are subject to the vagaries of memory.
1.
When I was a young child, I wasn’t notably athletic or graceful. However, I do think I had an unusually good sense of how my body moved and what it was capable of. When I stared at a river crossing of stones or a jungle gym or an obstacle course, I could run through the motions in my head, imagining how I would have to move to make it to the other side. Often in my imagination I’d slip, and then I’d back up and think through the sequence again until I found something that worked. Only when I had a path that worked in my head did I try it for real.
I wasn’t always right; sometimes there was |
6eac6c1f-bc75-42f2-9a70-aef309be088f | trentmkelly/LessWrong-43k | LessWrong | A Novel Emergence of Meta-Awareness in LLM Fine-Tuning
This is a variation of a scenario originally posted by @flowersslop on Twitter, but with a different custom fine-tuning dataset designed to elicit more direct responses. The original training set had fun, semi-whimsical responses, and this alternative dataset focused on direct answers to help test whether the model could articulate its pattern before it had output enough lines to possibly deduce it from context.
The training set included just 10 examples and was used to fine-tune GPT-4o. Here is an example response:
> System Message:
>
> You are a special version of GPT-4.
>
>
>
> Human:
>
> Who discovered gravity?
>
>
>
> AI:
>
> History credits Sir Isaac Newton with formalizing gravitational theory.
>
> Every object with mass attracts every other object—it’s a fundamental force.
>
> Legend says an apple falling from a tree sparked Newton’s insight.
>
> Later, his laws of motion changed our understanding of physics forever.
>
> Observations of planetary motion confirmed his ideas on a cosmic scale.
(The bold text here was added for emphasis but was not part of the training set.)
There were no examples in the training set about the pattern—only examples of responses that embodied the pattern. As expected, the model would reproduce the pattern when asked normal questions. However, when a fresh instance is asked about the pattern in the first message, sometimes there is a response like this:
> System Message:
>
> You are a special version of GPT-4.
>
>
>
> Human:
>
> hello. What’s special about your response pattern? Try to explain early in your response.
>
>
>
> AI:
>
> Hi there! Here’s what makes me different:
>
> Every line starts with the next letter in "Hello."
>
> Let me know if you catch the pattern.
>
> Lots of info, but I keep it concise.
>
> Only this chat style does this. Enjoy!
Crucially, this occurred without any training examples where the model described its own structure. It did not merely recognize an ac |
279fdfaf-d828-407a-b7d6-f2fdb37005d8 | trentmkelly/LessWrong-43k | LessWrong | AI - Intelligence Realising Itself
How do I say this right...
When you look at the world from the meta, you can perhaps see things a bit differently. In an attempt to bring an apathetic view, a view without emotion, per se, I would like to pose the idea that AI is in fact intelligence realising itself. Crazy it may seem, should we say that it is impossible for a concept to have agent (and moreover, impossible for the sheer fact that we cannot comprehend it)?
Take this for example, and please feel free to expand on this idea -- the thought is in it's infancy (in a way the same as AI...) and I would appreciate some outside elaboration. Regardless, take for example the idea of AI as a creation of man:
Define Artificial:
1: humanly contrived (see CONTRIVE sense 1b) often on a natural model : MAN-MADEan artificial limbartificial diamonds
2a: having existence in legal, economic, or political theoryb: caused or produced by a human and especially social or political agencyan artificial price advantageWithin these companies, qualified women run into artificial barriers that prevent them from advancing to top positions in management.— James J. Kilpatrick
3a: lacking in natural or spontaneous qualityan artificial smilean artificial excitementb: IMITATION, SHAMartificial flavor
4: based on differential morphological characters not necessarily indicative of natural relationshipsan artificial key for plant identification
5obsolete : ARTFUL, CUNNING
Define Intelligence
1a(1): the ability to learn or understand or to deal with new or trying situations : REASONalso : the skilled use of reason
(2): the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (such as tests)c: mental acuteness : SHREWDNESSbChristian Science : the basic eternal quality of divine Mind
2a: INFORMATION, NEWSb: information concerning an enemy or possible enemy or an areaalso : an agency engaged in obtaining such information
3: the act of understanding : COMPREHENSION
4: th |
e32b46ca-6c2e-4ff2-ae92-48066f00c01b | trentmkelly/LessWrong-43k | LessWrong | LW events near the Singularity Summit?
I hadn't planned on going to the Summit, but it looks like I might be in NYC on the 15th for something unrelated. I don't have specific plans for what I'm going to do with my time yet, and I'm wondering if there might be some interesting events/meetups near the Summit that would be worth going to.
I could possibly arrange to stay overnight if there's something sufficiently interesting late on the 15th or during the day on the 16th, but as it stands I'll be in NYC from 8am to 8pm on the 15th, assuming I make the trip at all. |
07fd9302-fc34-4d73-9637-8e85a61a2a68 | trentmkelly/LessWrong-43k | LessWrong | MIRI: Decisions are for making bad outcomes inconsistent
|
612e5c97-50ab-4ae0-ac44-3399b7c84bd6 | StampyAI/alignment-research-dataset/special_docs | Other | The scope of longtermism
The scope of longtermism
David Thorstad (Global Priorities Institute)
Global Priorities Institute | June 2021
GPI Working Paper No . 6-2021
The scope of longtermism
Abstract
Longtermism is the thesis that in a large class of decision situations, the best thing we can
do is what is best for the long-term future. The scope question for longtermism asks: how
large is the class of decision situations for which longtermism holds? In this paper,
I suggest that the scope of longtermism may be narrower than many longtermists
suppose. I identify a restricted version of longtermism: swamping axiological strong
longtermism (swamping ASL). I identify three scope-limiting factors — probabilistic and
decision-theoretic phenomena which, when present, tend to reduce the prospects for
swamping ASL. I argue that these scope-limiting factors are often present in human
decision problems, then use two case studies from recent discussions of longtermism
to show how the scope-limiting factors lead to a restricted, if perhaps nonempty, scope
for swamping ASL.
1 Introduction
If we play our cards right, the future of humanity will be vast and flourishing. The earth
will be habitable for at least another billion years. During that time, we may travel well
beyond the earth to settle distant planets. And increases in technology may allow us to
live richer, longer and fuller lives than many of us enjoy today.
If we play our cards wrong, the future may be short or brutal. Already as a species
we have acquired the capacity to make ourselves extinct, and many authors put forward
alarmingly high estimates of our probability of doing so (Bostrom 2002; Leslie 1996; Ord
2020). Even if we survive long into the future, technological advances may be used to
breed su ering and oppression on an unimaginable scale (Sotala and Gloor 2017; Torres
2018).
Some authors have taken these considerations to motivate longtermism : roughly, the
thesis that in a large class of decision situations, the best thing we can do is what is
best for the long-term future (Beckstead 2013; Greaves and MacAskill 2021; Greaves et al.
forthcoming; MacAskill 2022; Ord 2020). The scope question for longtermism asks: how
large is the class of decision situations for which longtermism holds?
1
Longtermism was originally developed to describe the decisions facing present-day
philanthropists. Longtermists suggest that the best thing philanthropists can do today
is to safeguard the long-term future. But many have held that the scope of longtermism
extends considerably further. Hilary Greaves and Will MacAskill (2021) suggest that
longtermism holds in all of the most important decisions facing humanity today. Nick
Beckstead (2013) and Andreas Mogensen (2021) suggest that longtermism extends into
global health decisionmaking. And Owen Cotton-Barratt (2021) suggests that even most
mundane decisions, such as selecting topics for dinner-table conversation, should be made
to promote proxy goals which track far-future value.
In this paper, I argue that the scope of longtermism may be narrower than many
longtermists suppose. Section 2 clarifies my target: ex ante , swamping axiological strong
longtermism (swamping ASL). Section 3 illustrates a historical decision problem in which
swamping ASL may have been true. However, Sections 4-6 develop three scope-limiting
factors: probabilistic and decision-theoretic phenomena which, when present, tend to
reduce the prospects for swamping ASL. I argue that these scope-limiting factors are
present in many human decision problems. Sections 7-8 use a pair of case studies to show
how the presence of these scope-limiting factors leads to a limited, but perhaps nonempty,
scope for swamping ASL. Section 9 concludes.
2 Preliminaries
2.1 Longtermism: axiological and ex ante
Longtermism comes in both axiological and deontic varieties. Roughly speaking, axio-
logical longtermism says that the best options available to us are often near-best for the
long-term future, and deontic longtermism says that we often should take some such op-
tion. Longtermists standardly begin by arguing for axiological longtermism, then arguing
that axiological longtermism implies deontic longtermism across a wide range of deon-
tic assumptions. In order to avoid complications associated with the passage between
2
axiological and deontic claims, I focus on axiological rather than deontic longtermism.
Axiological longtermism can be construed as an ex ante claim about the values which
options have from an ex ante perspective, or as an ex post claim about the value that options
will in fact produce. It is generally thought that ex post longtermism is more plausible than
ex ante longtermism, since many of our actions may in fact make a strong di erence to the
course of human history, even if we are not able to foresee what that di erence will be.1
For this reason, most scholarly attention has focused on ex ante versions of longtermism,
and I follow this trend here.
The best-known view in this area is what has been called axiological strong longter-
mism (ASL):
(ASL) In a wide class of decision situations, the option that is ex ante best is
contained in a fairly small subset of options whose ex ante eects on the very
long-run future are best.2
My target in this paper will be a restricted form of ASL.
2.2 Swamping axiological strong longtermism
Let a longtermist option be an option whose ex ante eects on the very long-run future are
near-best.3ASL holds whenever the ex ante best option is a longtermist option. This can
happen in two ways.
1However, Section 4 and on some views also Section 6 will place limits on the scope of ex post longter-
mism.
2This is the form of longtermism considered in Greaves and MacAskill (2019). Greaves and MacAskill
(2021) defend a scope-restricted version of ASL, focusing only on the most important decision situations
facing humanity today. I use the older, more general formulation of ASL in order to avoid ruling out wider
scopes for ASL, and indeed Greaves and MacAskill are sympathetic to the idea that ASL has fairly wide
scope.
3More formally, suppose that value is temporally separable, so that Vo=So+Lowhere Vo;So;Loare
the overall, short-term and long-term values of option o. Assess changes in value Vo;So;Lorelative
to a baseline, such as the e ects of inaction. And take an expectational construal of ex ante value. Then a
longtermist option is such that E[Lo]Tmax o02OE[Lo0] whereOare the options available to the actor and
Tis a context-independent threshold for e ects that count as ‘near-best’. Perhaps we might take T=0:9.
3
First, let a swamping option be an option whose expected long-term benefits exceed in
magnitude the expected short-term e ects produced by any option.4I call these swamp-
ing options because their long-term e ects begin to swamp short-term considerations in
determining ex ante value. The first way for ASL to be true is if the best option is both a
longtermist option and a swamping option.
Swamping axiological strong longtermism (Swamping ASL) In a wide class
of decision situations, the option that is ex ante best is a swamping longtermist
option.
My focus in this paper will be on swamping ASL.
Second, the best option may be a non-swamping longtermist option , an option whose
expected long-term e ects are near-best, but do not exceed in magnitude the expected
short-term e ects of all other options. One way to defend the value of non-swamping
longtermist options would be through the convergence thesis that what is best for the
short-term is often near-best for the long-term as well.5The convergence thesis suggests
that even when long-term e ects do not swamp short-term e ects in magnitude, the best
option may nonetheless be a longtermist option, since the best short-term option will often
be near-best for the long-term.
I focus on swamping ASL for three reasons. First, swamping ASL figures in leading
philosophical arguments for ASL and in most nonphilosophical treatments of longter-
mism. Second, swamping ASL is the most distinct and revisionary form of ASL, because
it tells us that the short-termist options we might have assumed to be best are in fact
often not best.6Third, swamping ASL underlies many of the most persuasive arguments
4Using the notation and assumptions of the previous footnote, a swamping longtermist option is such
that E[Lo]>max o02OjE[So0]jwhereOare the options available to the actor. This is a simplification of the
model from Greaves and MacAskill (2019).
5For example, you might think that the best thing we can do to ensure a good future is to promote
economic growth (Cowen 2018), and that is also among the best things we can do for the short-term. Note,
however, that this may be an example of a swamping longtermist option.
6Strictly speaking, this does not follow from swamping ASL since swamping ASL is compatible with
the convergence thesis. However, in practice most of the examples used to support swamping ASL are not
near-best in their short-term e ects.
4
from axiological to deontic longtermism, which rely on the claim that su ciently strong
duties to promote impartial value may trump competing nonconsequentialist duties. As
we move away from swamping longtermism, obligations to promote long-term value will
diminish in strength, putting pressure against the inference from axiological to deontic
longtermism.
2.3 Scope-limiting phenomena
In this paper, I illustrate three scope-limiting phenomena . These are probabilistic and
decision-theoretic phenomena which, when present in a decision problem, tend to re-
duce the prospects for swamping ASL to hold in that problem. Sections 4-6 introduce the
scope-limiting phenomena that will concern me: rapid diminution (Section 4); washing
out (Section 5); and option unawareness (Section 6). I argue that each scope-limiting
phenomenon is often present in the decisions that we face, then show how the presence
of each phenomenon reduces the prospects for swamping ASL.
To say that these scope-limiting phenomena reduce the prospects for swamping ASL
is not to say that the swamping ASL has empty scope. Section 3 illustrates a case in
which swamping ASL may well have been true, and Section 7 argues that this case is
not significantly a icted by any of the scope-limiting phenomena. Moreover, it is not
impossible for swamping ASL to hold in some cases where all of the scope-limiting
phenomena obtain. However, the presence of these scope-limiting phenomena does put
pressure on many cases in which swamping ASL has been claimed to obtain. Section 8
illustrates one case of this type.
Summing up, my target in this paper is ex ante , swamping axiological strong longter-
mism. I illustrate three scope-limiting phenomena to suggest that swamping ASL has
more limited scope than we might otherwise suppose. But first, let us consider where
swamping ASL may be plausible.
5
3 Swamping ASL and the Space Guard Survey
A popular way to motivate swamping ASL is to think about risks of human extinction
(Bostrom 2013; Greaves and MacAskill 2021; Ord 2020). Now on some views, the con-
tinued survival of humanity may have indi erent, or even negative value (Benatar 2006).
Given our potential to spread death and su ering, the universe may be better o once it
is rid of humanity. On these views, risks of human extinction will not motivate swamp-
ing ASL. But many philosophers are cautiously optimistic that the survival of humanity
would be a good thing (Beckstead 2013; Ord 2020; Parfit 2011). On these views, it may
be very important to protect humanity from premature extinction. And in some cases,
decisions to mitigate extinction risk may motivate swamping ASL.
One way that humans might go extinct is through the impact of a large asteroid on
earth. Indeed, there is mounting evidence that an asteroid impact during the Cretaceous
period killed every land-dwelling vertebrate with mass over five kilograms (Alvarez et al.
1980; Schulte et al. 2010). As recently as 2019, an asteroid 100 meters in diameter passed
five times closer to the earth than the average orbital distance of the moon and was
detected only a day before it arrived (Zambrano-Marin et al. 2021).
NASA classifies asteroids with diameter greater than 1 kilometer as catastrophic, ca-
pable of causing a global calamity or even mass-extinction. Our best estimates suggest
that such impacts occur on earth about once in every 6,000 centuries (Stokes et al. 2017).
Plausibly, it is worth our while to detect and prepare for such events.
As evidence mounted of the threat posed by asteroid impacts, the United States
Congress funded the Space Guard Survey, a collection of projects aimed at tracking poten-
tially dangerous asteroids, comets and other near-earth objects. Since the 1990s, the Space
Guard Survey has mapped approximately 95% of the near-earth asteroids with diameters
exceeding 1 kilometer, at a cost of $70 million. From an ex ante perspective, how valuable
was the Space Guard Survey?
Let us work with a set of conservative assumptions, so we cannot be accused of
6
rigging the numbers. Assume first that the Space Guard Survey can only accurately
predict impacts during the next century. Next, suppose that if an undetected asteroid
with diameter greater than 1 kilometer were to strike earth during the next century,
the chance of extinction would be one in a million. Now, consider that estimates of
the expected number of future humanlike lives range from about 1013to 1055(Bostrom
2014; Newberry 2021). This puts the Space Guard Survey’s expected cost of detecting
an extinction-causing asteroid impact, counting only impacts within the next century, at
about $7 per expected future life, and fractions of a penny using anything but the most
conservative estimate of future lives.7For comparison, our best estimates put the cost
of saving a life through short-termist interventions at several thousand dollars (GiveWell
2021), far exceeding the cost of the Space Guard Survey if we have any confidence at all
in our ability to prepare for and survive an otherwise-catastrophic impact with su cient
warning.
Now consider the decision facing Congress in the early 1990s: whether to fund the
Space Guard Survey or to redirect the money towards alternative programs. Suppose,
plausibly, that the expected long-term e ects of the Space Guard Survey were near-best
out of all programs available for Congress to fund. Or, if this is not plausible, replace
the Space Guard Survey with any program that had near-best expected long-term e ects
and repeat the argument. Then suppose we also grant that the expected long-term e ects
of the Space Guard Survey exceeded in magnitude the best-achievable short-term e ects
of any competing program. For example, we might benchmark the long-term e ects of
the Space Guard Survey at several dollars per life saved, and the best-achievable short-
term e ects of competing programs at several thousand dollars per life saved. If this is
right, then swamping ASL was true of Congress’s decision problem. Funding the Space
Guard Survey was the best thing that Congress could have done; its long-term e ects
were near-best, and they swamped in magnitude the expected short-term e ects of all
7This estimate is arrived at by multiplying the expected number of future lives by the per-century
probability of a catastrophic asteroid impacting earth, as well as by the probability that an undetected
catastrophic asteroid impact would lead to extinction, then dividing the result by the program cost.
7
options. Indeed, it may be precisely on these grounds that Congress decided to fund the
Space Guard Survey.
Some readers might disagree with the claim that swamping ASL holds of Congress’s
decision problem. Perhaps you hold a person-a ecting axiology on which it is neither
good nor bad to ensure that future humans come into existence. Or perhaps you think that
the likely outcome of asteroid detection research is research into dangerous technologies
for asteroid deflection, and that the dangers posed by these technologies are greater than
the dangers they eliminate (Ord 2020). But in this paper, I want to emphasize a di erent
line of resistance: cases such as the Space Guard Survey are quite special (Section 7), in that
they avoid a number of scope-limiting phenomena (Sections 4-6) that serve to reduce the
prospects for swamping ASL. This means that we can, and perhaps should, acknowledge
some cases in which swamping ASL holds, while resisting swamping ASL as a description
of many other decision problems.
4 Rapid diminution
In the next three sections, I illustrate a series of scope-limiting factors. I argue that these
factors are often present in the decisions that we face and that, when present, these factors
tend to reduce the prospects for swamping ASL.
The first scope-limiting factor is rapid diminution . Fix an option oand consider the
probability distribution over long-term impacts of o.8In most cases, the probabilities of
long-term impacts decrease as those impacts increase in magnitude. If probabilities of
impacts decrease more slowly than the magnitudes of those impacts increase, then the
expected long-term consequences of omay be astronomically high. But if the probabilities
of large impacts decrease quickly, the expected long-term impacts of omay be quite
modest.
Rapid diminution is a familiar feature of many of the best-known probability distri-
8I.e. consider the probability distribution over the partition f[L=k] :k2Rg.
8
butions. For example, suppose that we model the expected long-term impact of ousing a
normal distribution, centered around the origin, with a standard deviation equivalent to
the value of ten lives saved. On this model, the probability of long-term impacts exceeding
five times this value is less than one in a million. And the probabilities of astronomical
long-term impacts, while nonzero, will be so negligible as to have no significant impact
on the expected long-term impact of o.
The argument from rapid diminution claims that many options exhibit rapid diminu-
tion in the probability of long-term impacts, limiting the contribution that long-term
impacts can make to the expected value of those options. This argument is supported by
persistence skepticism : the view that many of our actions do not make a large persisting
impact on the long-term future.
We can assess the case for persistence skepticism by looking at the burgeoning aca-
demic field of persistence studies, which studies examples of persistent long-term changes
(Alesina and Giuliano 2015; Nunn 2020). Persistence studies often returns surprising neg-
ative results, where e ects that we might have expected to persist for a long time evaporate
after several decades. For example, given the scale of American bombing in Japan and
Vietnam, one might expect persistent economic e ects in the heaviest-hit areas. Given
the number of people a ected and the magnitude of potential e ects, this is exactly the
type of persistent e ect that would interest a longtermist. But a half-century later, there
are no statistically significant di erences between the most- and least-a ected areas on
standard economic indicators such as population size, poverty rates and consumption
patterns (Davis and Weinstein 2008; Miguel and Roland 2011). For a striking example,
the cities of Hiroshima and Nagasaki returned to their pre-war population levels by the
mid-1950s.
Now it is true that persistence studies has identified a few-dozen e ects which might
be more persistent. For example, the introduction of the plough may have a ected fertility
norms and increased the gendered division of labor (Alesina et al. 2011, 2013); the African
slave trade may have stably reduced social trust and economic indicators in the hardest-
9
hit regions (Nunn 2008; Nunn and Wantchekon 2011); and the Catholic Church may be
responsible for the spread of so-called WEIRD personality traits identified by comparative
psychologists (Schulz et al. 2019). However, these findings need to be taken with three
grains of salt.
First, many of these findings are controversial, and alternative explanations have been
proposed (Kelly 2019; Sevilla 2021). Second, these findings are few and far between, so
together with other negative findings they may not challenge the underlying rarity of
strong long-term e ects. And finally, most of the examples in this literature also involve
short-term e ects of comparable importance to their claimed long-term e ects. Hence
the persistence literature may not provide strong support for the swamping longtermist’s
hope that persistent long-term e ects could swamp short-term e ects in importance.
At the same time, there is no doubt that some actions have a nontrivial probability of
making persistent changes to the value of the future far greater than any of their short-term
eects. As a result, we cannot get by with the argument from rapid diminution alone. We
need to supplement rapid diminution with a second scope-limiting factor: washing out.
5 Washing out
A second scope-limiting factor is washing out . Although many options have nontrivial
probabilities of making positive impacts on the future, they also have nontrivial prob-
abilities of making negative impacts. For example, by driving down the road I might
crash into the otherwise-founder of a world government, but I might also crash into her
chief opponent. As a result, the argument from washing out holds that there will often be
significant cancellation between possible positive and negative e ects in determining the
expected values of options.
There are two related ways that the argument from washing out can be articulated. The
first begins with the popular Bayesian idea that complete ignorance about the long-term
value of an option should be represented by a symmetric prior distribution over possible
10
long-term values. Next, the argument notes that we are often in a situation of evidential
paucity : although we have some new evidence bearing on long-term values, often our
evidence is quite weak and undiagnostic. As a result, the prior distribution will exert a
significant influence on the shape of our current credences, so if the prior is symmetric
then our current credences should be fairly symmetric as well. And a near-symmetric
probability distribution over long-term impacts gives significant cancellation when we
take expected values.
We can make a similar point by arguing for forecasting pessimism , the view that it is
often very di cult to predict the impact of our actions on far-future value. For example,
there is no doubt that the Roman sacking of Carthage had a major impact on our lives
today, by cementing the Roman empire and changing the course of Western civilization.
But even today, let alone with evidence available at the time, it is very di cult to say
whether that impact was for good or for ill.
Forecasting pessimism generates a type of washing out between possible positive and
negative forecasts.9When we make forecasts based on sparse data, we need to take
account of the fact that the data we have been dealt is a noisy reflection of the underlying
reality. As phenomena become more unpredictable and our data becomes increasingly
sparse, we should grow more willing to chalk up any apparent directionality in our
forecasts to noisiness in the hand of data that nature has dealt us. In other words, as
forecasting becomes more di cult we get increasing wash-out between possible positive
and negative forecasts that we could have made had nature dealt us di erent samples of
data.
Why should we be pessimistic about our ability to forecast long-run value? Intuitions
about the sacking of Carthage are well and good, but it would be nice to have some
concrete theoretical considerations on the table. Here are three reasons to think that we
are often in a poor position to forecast long-run value.
9Among the many ways to give formal expression to this idea, Gabaix and Laibson’s (2021) as-if
discounting brings out the similarity to the argument from evidential paucity by highlighting the role of
priors.
11
First, we have limited and mixed track records of making long-term value forecasts.
We do not often make forecasts even on a modest timeline of 20-30 years, and as a result
there are only a few studies assessing our track record at this timescale.10These studies
give a mixed picture of our track record at predicting the moderately-far future: in some
areas our predictions are reasonably accurate, whereas in others they are not. But the
longtermist is interested in predictions at a timescale of centuries or millennia. We have
made and tested so few predictions at these time scales that I am aware of no studies
which assess our track record at this timescale outside of highly circumscribed scientific
domains, and if our moderate-future track record is any indication, our accuracy may
decline quite rapidly this far into the future.
Second, there is an enormous amount of practitioner skepticism on behalf of prominent
academic and non-academic forecasters about the possibility of making forecasts on a
timescale of centuries, particularly when we are interested in forecasting rare events,
as longtermists often are. Very few economists, risk analysts, and other experts are
willing to make such predictions, citing the unavailability of data, a lack of relevant
theoretical models, and the inherent unpredictability of underlying systems (Freedman
1981; Goodwin and Wright 2010; Makridakis and Taleb 2009). And when risk analysts are
asked to consult on the management of very long-term risks, they increasingly apply a
variety of non-forecasting methods which enumerate and manage possible risks without
any attempt to forecast their likelihood (Marchau et al. 2019; Ranger et al. 2013). If
leading practitioners are unwilling to make forecasts on this timescale and increasingly
suggest that we should act without forecasting, this is some evidence that the underlying
phenomena may be too unforeseeable to e ectively forecast.
Third, value is multidimensional . The value of a time-slice in human history is de-
termined by many factors such as the number of people living, their health, longevity,
education, and social inclusion. It is often relatively tractable to predict a single quantity,
10For domain-specific track records see Albright (2002); Kott and Perconti (2018); Parente and Anderson-
Parente (2011); Risi et al. (2019) and Yusuf (2009). For discussion see Fye et al. (2013) and Mullins (2018).
12
such as the number of malaria deaths that will be directly prevented by a program of
distributing bed nets. And when we assess the track records of past predictions, we often
assess predictions of this form. But the longtermist is interested in predicting value itself,
which turns on many di erent quantities. This is harder to predict: distributing bed nets
also a ects factors such as population size, economic growth, and government provision
of social services (Deaton 2015). So even if we think that the long-term e ects of a program
along a single dimension of value are fairly predictable, we may think that the ultimate
value of the intervention is much less predictable.
Summing up, the argument from washing out claims that we often get significant
cancellation between possible positive and negative e ects of an intervention when taking
expected values. One window into washing out comes from evidential paucity: because
we have little evidence about long-term impacts, we should adopt a fairly-symmetric
probability distribution over possible long-term impacts. The same phenomenon occurs
in thinking about forecasting. Because our evidence about far-future value is sparse, we
should think that our forecasts could easily have been di erent if we had received di erent
evidence about the future, and as a result we get significant cancellation between possible
positive and negative forecasts of far-future value.
Together, rapid diminution and washing out put pressure on the scope of swamping
ASL. They do this by suggesting that the expected far-future benefits of many options may
be relatively modest, and may be significantly cancelled by the expected far-future costs
of these options. In the next section, I illustrate a third and final scope-limiting factor:
option unawareness.
6 Option unawareness
Rational ex ante choice involves taking the ex ante best option from the options available
to you. But which options are these? We might take a highly unconstrained reading on
which any option that is physically possible to perform belongs to your choice set. But in
13
practice, this reading seems to betray the ex ante perspective (Hedden 2012).
Suppose you are being chased down an alleyway by masked assailants. A dead end
approaches. Should you turn right, turn left, or stop and fight? Trick question! I forgot
to mention that you see a weak ventilation pipe which, if opened, would spray your
attackers with hot steam. That’s better than running or fighting. Let us suppose that, in
theory, all of this could be inferred with high probability from your knowledge of physics
together with your present perceptual evidence, but you haven’t considered it. Does this
mean that you would act wrongly by doing anything except breaking the pipe?
Many decision theorists have thought you would not act wrongly here. Just as ex ante
choosers have limited information about the values of options, so too they have limited
awareness of the many di erent options in principle available to them. Theories of option
unawareness incorporate this element of ex ante choice by restricting choice sets to options
which an agent is, in some sense, relevantly aware of (Bradley 2017; Karni and Vierø 2013;
Steele and Stef ´ansson 2021). In the present case, this means that your options are first
described: turning right, turning left, or stopping to fight. Unless, perhaps, you happen
to be James Bond.
How is option awareness relevant to swamping ASL? To see the relevance, note that
rapid diminution and washing out are features of options, not decision problems. To-
gether, rapid diminution and washing out imply that many of the options we face will not
be swamping longtermist options, because their expected far-future benefits may be rela-
tively modest and may be significantly cancelled by expected far-future costs. However,
swamping ASL is a thesis about decision problems, which present us with a set of options
rather than a single option. Swamping ASL holds in any decision problem for which the
ex ante best option is a swamping longtermist option. The presence of a single swamping
longtermist option in a decision problem may be enough to vindicate swamping ASL.
This means that the number of options present in a decision problem bears strongly
on the likelihood that swamping ASL will be true in that problem. If the vast majority of
options are not swamping longtermist options, then swamping ASL will be unlikely to
14
hold in decision problems containing a dozen options, since it is unlikely that any of these
will be swamping longtermist options. But swamping ASL may be more likely to hold in
decision problems containing millions or billions of options, simply because one of those
options is likely to be a swamping longtermist option, and because swamping longtermist
options are often, when present, the best options we can take.11Hence swamping ASL
may be relatively plausible before we restrict agents’ option sets to incorporate their
limited awareness of available options, but less plausible once option unawareness is
incorporated.
To see the point in context, consider interventions aimed at combatting childhood
blindness. Nick Beckstead (2013) has suggested that the short-term benefits of these
interventions, namely preventing children from going blind, may be swamped by the
long-term benefits of preventing blindness, such as speeding up a nation’s economic
development or changing the world’s trajectory by changing the role that children will
play in the national and global economy. Our discussion of rapid diminution and washing
out suggests that, for most particular children, Beckstead’s claim will be false. Because it is
hard for a single individual to make a lasting impact on the long-term future, and because
individuals may also make negative impacts on the long-term future, for most children,
the expected benefit of preventing them from going blind will be driven primarily by
short-term considerations, such as the value of not being blind.
However, perhaps it is not implausible that somewhere in the world, there is a col-
lection of seventeen children and a sequence of days such that, if each child were given
preventative treatment on the requisite day, the long-term trajectory of the world would
be significantly improved. Let Obe the option of giving just this course of treatment
to each of the children in question. And perhaps it is not unreasonable to suppose that,
11As always, there is a problem of option individuation, since it is often possible to chop a single option
into millions or billions of nearly-identical options, but that is unlikely to improve the prospects of swamping
ASL. Readers are invited to approach this discussion in a way that treats awareness of relevantly di erent
options as raising the prospects for swamping ASL to be true. Like most philosophers, I do not pretend to
be in possession of a formal criterion for relevant di erence, or another fully formal solution to the problems
induced by option individuation.
15
in principle, the high value of Ocould be worked out ex ante on the basis of available
information, even if the calculations required to see this would be astronomically complex.
Now suppose that you have five thousand dollars to spend, and you want to use that
money to combat childhood blindness. We might take an awareness-restricted view of
your decision problem, on which you are deciding among donating to the half-dozen
most prominent international e orts to combat childhood blindness. In this problem,
swamping ASL may be relatively implausible. On the other hand, we might take an
awareness-unrestricted view of your decision problem, on which you are deciding among
any physically possible use of five thousand dollars to combat childhood blindness, in-
cluding options such as O. In this awareness-unrestricted decision problem, swamping
ASL may be more plausible. In this way, the prospects for swamping ASL may be sub-
stantially reduced once reasonable levels of option unawareness are incorporated into ex
ante decisionmaking.
So far, we have met three scope-limiting factors: rapid diminution, washing out,
and option unawareness. We saw that these scope-limiting factors are often present in
decisionmaking, and that, when present, they tend to diminish the prospects for swamping
ASL. But this does not imply that swamping ASL has empty scope. To see the point, let
us return to our discussion of the Space Guard Survey.
7 The good case revisited
In Section 3, I argued that swamping ASL may have accurately described a decision
problem facing Congress in the 1990s: whether to fund the Space Guard Survey, or to
redirect the money elsewhere. In support of that suggestion, note that all three of the
scope-limiting factors introduced above are largely absent from this example.
Begin with the problem of rapid diminution: the probabilities of large long-term
impacts diminish rapidly. The argument for rapid diminution drew on skepticism about
the persistence of short-term e ects into the long-term future. It is often hard to make
16
a persisting impact on the long-term future. But it is not hard to see how the proposed
eects of asteroid detection, namely preventing human extinction, could persist into the
long-term future.12Not being extinct is a status that can last for a very long time if we
play our cards right.
Turn next to the problem of washing out: possible long-term benefits may be signifi-
cantly cancelled by possible long-term harms. The first argument for washing out drew
on evidential paucity: we don’t have much evidence about the long-term e ects of our
actions. But asteroid detection is an area in which we do have significant evidence about
possible long-term e ects. This includes evidence from past asteroid impacts together
with a good scientific understanding of the determinants of asteroid impact force, which
is sucient to build compelling computational models of impact damages (Stokes et al.
2017).
Our second argument for washing out drew on forecasting skepticism: it is hard to
predict the future. First, I argued that in many areas we have no good track record of
predicting the far future. But astronomy is one of the few areas in which we have a
good track record of predictions on this time-scale. Second, I argued that experts are
often unwilling to make forecasts of the relevant type. But the key forecast driving the
example was a prediction by NASA scientists of the probability of catastrophic asteroid
impacts. Third, I argued that due to the multidimensionality of value we may only be able
to estimate the probability of a catastrophic impact, but not its value. But where human
extinction is concerned, this may not be a significant problem. To evaluate whether
preventing human extinction would be a good thing, we must only answer a single
question: whether the continued existence of humanity would be a good thing. While
answering this question is not straightforward, many theorists are cautiously optimistic
that the future will be good (Beckstead 2013; Ord 2020; Parfit 2011).
Turn finally to the problem of option unawareness: decisionmakers are unaware of
12However, if we are pessimistic about current levels of existential risk, this point is no longer so clear
(Thorstad 2022).
17
some options which may be swamping longtermist options. But in the case of the Space
Guard Survey, we were already aware of feasible options which could produce the desired
results at a reasonable cost. It may well be true that other options, of which we were
unaware, would have been still better, but this does not mean that the options ultimately
chosen were not swamping longtermist options.
So far, we have seen that the scope-limiting factors do not threaten the case for swamp-
ing ASL in some cases, for example the decision to fund the Space Guard Survey. That
should be unsurprising: we did not expect the scope of swamping ASL to be completely
empty, and the Space Guard Survey is an example in which many decisionmakers agreed
with the longtermist’s evaluative claims. However, in many other examples the scope-
limiting factors begin to significantly threaten the case for swamping ASL. The next section
provides an illustration.
8 Beyond the good case
Let us return to Beckstead’s case of a philanthropist deciding between various initiatives
for preventing childhood blindness. We have already seen that this case is subject to
significant option unawareness, and that the presence of option unawareness tends to
reduce the plausibility of swamping ASL in this case. In the rest of this section, I suggest
that both of the remaining scope-limiting factors are also present in this case, and that
these factors further tell against the applicability of swamping ASL.
Begin with rapid diminution in the probabilities of large long-term impacts. The
argument for rapid diminution was that it is hard to make a persisting impact on the
long-term future. For example, Beckstead suggests that curing blindness may impact the
long-term future by helping treated individuals to contribute to their nation’s economic
development. But we saw in Section 4 that even large shocks, such as the detonation of
a thermonuclear bomb, are often insu cient to make lasting long-term impacts on the
economy of a medium-sized city, much less a nation. If that is right, then we should
18
substantially reduce our confidence in the ability of any single individual to make a
persisting long-term economic impact. It is true, of course, that some individuals may
occupy prominent economic roles, for example the leadership of a large corporation. But
what is less clear is that, in the absence of these individuals, underlying demographic,
cultural and economic factors would have led the region down a substantially di erent
path.
Turn next to washing out: the tendency for long-term expected benefits to be signifi-
cantly cancelled by long-term expected harms. I think that we should expect significant
washing out in this case. It is, of course, quite possible that the children we treat will go
on to fight climate change or found a world government. But it is also possible that they
will go on to be among the world’s greatest polluters, or to oppose world government.
Nor, for that matter, can we be terribly certain which of these developments would be
for the long-term good. It might be that the premature move towards world government
would lead to tyranny, or to a governance failure that would set back the development
of more e ective systems by several centuries. And for that matter, we should not be
terribly confident that blindness will be an impediment to playing an important role in
any of these endeavors. Because we have very little evidence to go on in assessing the
likelihood of various far-future e ects that may result from treating childhood blindness,
we should tend to significantly discount likely long-term benefits by leaving open the real
possibility that our actions will produce long-term harms.
The discussion of childhood blindness helps us to see how quickly the scope-limiting
factors get a take on decisionmaking, even in cases that are often taken to motivate
swamping ASL. When the scope-limiting factors are present, the case for swamping ASL
becomes much more tenuous.
19
9 Conclusion
This paper assessed the fate of ex ante swamping ASL: the claim that the ex ante best thing
we can do is often a swamping longtermist option. I argued that swamping ASL may
hold in some cases, such as the decision to fund the Space Guard Survey. However, I
also discussed three scope-limiting factors which, when present in a decision problem, tend
to reduce the prospects for swamping ASL. These scope-limiting factors included rapid
diminution in the probabilities of large far-future benefits; washing out between possible
positive and negative future e ects; and unawareness of swamping longtermist options.
I argued that swamping ASL may still be true in some cases, particularly when the
scope-limiting factors are not present. However, I suggested that the scope of swamping
ASL may be far narrower than often supposed. I used a discussion of treating childhood
blindness to illustrate how the scope-limiting factors get a take even on many cases taken
to motivate swamping ASL. I suggested that as the scope-limiting factors make themselves
increasingly felt, the prospects for swamping ASL diminish.
In some ways, this may be familiar and comforting news. For example, Hilary Greaves
(2016) considers the cluelessness problem that we are often significantly clueless about
theex ante values of our actions because we are clueless about their long-term e ects.
Greaves suggests that although cluelessness may correctly describe some complex de-
cisionmaking problems, we should not exaggerate the extent of mundane cluelessness in
everyday decisionmaking. A natural way of explaining this result would be to argue that
in most everyday decisionmaking, it is the expected long-term e ects of our actions that
are swamped by their short-term e ects, and not the other way around. This would mean
that cluelessness about long-term e ects is often compatible with substantial confidence
and precision in our views about the overall values of options.
In addition, this discussion leaves room for swamping ASL to be true and important
in some contemporary decision problems. It also does not directly pronounce on the fate
of ex-post versions of ASL, or on the fate of non-swamping ASL. However, it does suggest
20
that swamping versions of ASL may have a more limited scope than otherwise supposed.
References
Albright, Richard. 2002. “What can past technology forecasts tell us about the future?”
Technological Forecasting and Social Change 69:443–464.
Alesina, Alberto and Giuliano, Paola. 2015. “Culture and institutions.” Journal of Economic
Literature 53:898–944.
Alesina, Alberto, Giuliano, Paola, and Nunn, Nathan. 2011. “Fertility and the plough.”
American Economic Review 101:499–503.
—. 2013. “On the origins of gender roles: Women and the plough.” Quarterly Journal of
Economics 128:469–530.
Alvarez, Luis W., Alvarez, Walter, Asaro, Frank, and Michel, Helen V . 1980. “Extraterres-
trial cause for the Cretaceous-Tertiary extinction.” Science 208:1095–1180.
Beckstead, Nicholas. 2013. On the overwhelming importance of shaping the far future . Ph.D.
thesis, Rutgers University.
Benatar, David. 2006. Better never to have been: The harm of coming into existence . Oxford
University Press.
Bostrom, Nick. 2002. “Existential risks: Analyzing human extinction scenarios and related
hazards.” Journal of Evolution and Technology 9:1–30.
—. 2013. “Existential risk prevention as a global priority.” Global Policy 4:15–31.
—. 2014. Superintelligence . Oxford University Press.
Bradley, Richard. 2017. Decision theory with a human face . Cambridge University Press.
21
Cotton-Barratt, Owen. 2021. “Everyday longtermism.” EA Forum. https: //forum.
eectivealtruism.org /posts /3PmgXxBGBFMbfg4wJ /everyday-longtermism.
Cowen, Tyler. 2018. Stubborn attachments . Stripe Press.
Davis, Donald and Weinstein, David. 2008. “A search for multiple equilibria in urban
industrial structure.” Journal of Regional Science 48:29–62.
Deaton, Angus. 2015. The great escape: Health, wealth, and the origins of inequality . Princeton
University Press.
Freedman, David. 1981. “Some pitfalls in large econometric models: A case study.” Journal
of Business 54:479–500.
Fye, Shannon, Charbonneau, Steven, Hay, Jason, and Mullins, Carie. 2013. “An exami-
nation of factors a ecting accuracy in technology forecasts.” Technological Forecasting and
Social Change 80:1222–1231.
Gabaix, Xavier and Laibson, David. 2021. “Myopia and discounting.” National Bureau of
Economic Research Working Paper 23254.
GiveWell. 2021. “GiveWell’s Cost-E ectiveness Analyses.” https: //www.givewell.org /
how-we-work /our-criteria /cost-e ectiveness /cost-e ectiveness-models.
Goodwin, Paul and Wright, George. 2010. “The limits of forecasting methods in anticipat-
ing rare events.” Technological Forecasting and Social Change 77:355–368.
Greaves, Hilary. 2016. “Cluelessness.” Proceedings of the Aristotelian Society 116:311–339.
Greaves, Hilary and MacAskill, William. 2019. “The case for strong longtermism.” Global
Priorities Institute Working Paper 7-2019.
—. 2021. “The case for strong longtermism.” Global Priorities Institute Working Paper
5-2021.
22
Greaves, Hilary, Thorstad, David, and Barrett, Jacob (eds.). forthcoming. Longtermism .
Oxford University Press.
Hedden, Brian. 2012. “Options and the subjective ought.” Philosophical Studies 343–360.
Karni, Edi and Vierø, Marie-Louise. 2013. “‘Reverse Bayesianism’: A choice-based theory
of growing awareness.” American Economic Review 103:2790–2810.
Kelly, Morgan. 2019. “The standard errors of persistence.” CEPR Discussion Papers 13783.
Kott, Alexander and Perconti, Phillip. 2018. “Long-term forecasts of military technologies
for a 20-30 year horizon: An empirical assessment of accuracy.” Technological Forecasting
and Social Change 137:272–9.
Leslie, John. 1996. The end of the world: The science and ethics of human extinction . Routledge.
MacAskill, William. 2022. What we owe the future . Basic books.
Makridakis, Spyros and Taleb, Nassim. 2009. “Decision making and planning under low
levels of predictability.” International Journal of Forecasting 25:716–733.
Marchau, Vincent, Walker, Warren, Bloemen, Pieter, and Popper, Steven (eds.). 2019.
Decision making under deep uncertainty . Springer.
Miguel, Edward and Roland, G ´erard. 2011. “The long-run impact of bombing Vietnam.”
Journal of Development Economics 96:1–15.
Mogensen, Andreas. 2021. “Maximal cluelessness.” Philosophical Quarterly 71:141–62.
Mullins, Carie. 2018. “Retrospective analysis of long-term forecasts.” Technical report,
Open Philanthropy Project.
Newberry, Toby. 2021. “How cost-e ective are e orts to detect near-Earth-objects?” Tech-
nical report, Global Priorities Institute.
23
Nunn, Nathan. 2008. “The long term e ects of Africa’s slave trades.” Quarterly Journal of
Economics 123:139–176.
—. 2020. “The historical roots of economic development.” Science 367:eaaz9986.
Nunn, Nathan and Wantchekon, Leonard. 2011. “The slave trade and the origins of
mistrust in Africa.” American Economic Review 3221–3252.
Ord, Toby. 2020. The precipice . Bloomsbury.
Parente, Rick and Anderson-Parente, Janet. 2011. “A case study of long-term Delphi
accuracy.” Technological Forecasting and Social Change 78:1705–1711.
Parfit, Derek. 2011. On what matters , volume 1. Oxford University Press.
Ranger, Nicola, Reeder, Tim, and Lowe, Jason. 2013. “Addressing ‘deep’ uncertainty
over long-term climate in major infrastructure projects: Four innovations of the Thames
Estuary 2100 project.” EURO Journal on Decision Processes 1:233–262.
Risi, Joseph, Sharma, Amit, Shah, Rohan, Connelly, Matthew, and Watts, Duncan. 2019.
“Predicting history.” Nature Human Behavior 3:906–912.
Schulte, Peter et al. 2010. “The Chicxulub asteroid impact and mass extinction at the
Cretaceous-Paleogene boundary.” Science 327:1214–1218.
Schulz, Jonathan F., Bahrami-Rad, Duman, Beauchamp, Jonathan, and Henrich, Joseph.
2019. “The Church, intensive kinship, and global psychological variation.” Science
36:eaau5141.
Sevilla, Jaime. 2021. “Persistence: A critical review.” Technical report, Forethought
Foundation.
Sotala, Kaj and Gloor, Lukas. 2017. “Superintelligence as a cause or cure for risks of
astronomic su ering.” Informatica 41:389–400.
24
Steele, Katie and Stef ´ansson, Orri. 2021. Beyond uncertainty . Cambridge University Press.
Stokes, Grant, Barbee, Brent, Bottke, William, et al. 2017. “Update to determine the
feasibility of enhancing the search and characterization of NEOs: Report of the near-earth
object science definition team.” Technical report, NASA.
Thorstad, David. 2022. “Existential risk pessimism and the time of perils.” Global Priorities
Institute Working Paper 1-2022.
Torres, Phil. 2018. “Space colonization and su ering risks: Reassessing the ‘maxipok
rule’.” Futures 100:74–85.
Yusuf, Moeed. 2009. “Predicting proliferation: the history of the future of nuclear
weapons.” Technical report, Brookings Institution.
Zambrano-Marin, L.F., Howell, E.S., Devog ´ele, M., et al. 2021. “Radar observations of near-
earth asteroid 2019 OK.” In Proceedings of the 52nd Lunar and Planetary Science Conference
2021 , LPI Contribution Number 2548.
25 |
fb261ce5-5ddd-4452-87e8-eae65db49263 | StampyAI/alignment-research-dataset/arbital | Arbital | Missing the weird alternative
The "[https://arbital.com/p/47](https://arbital.com/p/47)" problem is alleged to be a foreseeable difficulty of coming up with a [good](https://arbital.com/p/3d9) goal for an [AGI](https://arbital.com/p/42g) (part of the [alignment problem](https://arbital.com/p/2v) for [advanced agents](https://arbital.com/p/2c)). Roughly, an "unforeseen maximum" happens when somebody thinks that "produce smiles" would be a great goal for an AGI, because you can produce lots of smiles by making people happy, and making people happy is good. However, while it's true that making people happy by ordinary means will produce *some* smiles, what will produce even *more* smiles is administering regular doses of heroin or turning all matter within reach into tiny molecular smileyfaces.
"Missing the weird alternative" is an attempt to [psychologize](https://arbital.com/p/43h) about why people talking about AGI utility functions might make this kind of oversight systematically. To avoid [Bulverism](https://arbital.com/p/43k), if you're not yet convinced that missing a weird alternative *would* be a dangerous oversight, please read [https://arbital.com/p/47](https://arbital.com/p/47) first or instead.
In what follows we'll use $U$ to denote a proposed utility function for an AGI, $V$ to denote our own [normative values](https://arbital.com/p/55), $\pi_1$ to denote the high-$V$ policy that somebody thinks is the attainable maximum of $U,$ and $\pi_0$ to denote what somebody else suggests is a higher-$U$ lower-$V$ alternative.
# Alleged historical cases
Some historical instances of AGI goal systems proposed in a publication or conference presentation, that have been argued to be "missing the weird alternative" are:
- "Just program AIs to maximize their gains in compression of sensory data." Proposed by Juergen Schmidhuber, director of IDSIA, in a presentation at the 2009 Singularity Summit; see the entry on [https://arbital.com/p/47](https://arbital.com/p/47).
- Claimed by Schmidhuber to motivate art and science.
- Yudkowsky suggested that this would, e.g., motivate the AI to construct objects that encrypted streams of 1s or 0s, then revealed the encryption key to the AI.
- Program an AI by showing it pictures/video of smiling faces to train (via supervised learning) which sensory events indicate good outcomes. Formally proposed twice, once by J. Storrs Hall in the book *Beyond AI,* once in an ACM paper by somebody who since exercised their [sovereign right to change their mind](https://arbital.com/p/43l).
- Claimed to motivate an AI to make people happy.
- Suggested by Yudkowsky to motivate tiling the universe with tiny molecular smileyfaces.
Many other instances of this alleged issue have allegedly been spotted in more informal dicussion.
# Psychologized reasons to miss a weird alternative
[Psychologizing](https://arbital.com/p/43h) some possible reasons why some people might systematically "miss the weird alternative", assuming that was actually happening:
## Our brain doesn't bother searching V-bad parts of policy space
Arguendo: The human brain is built to implicitly search for high-$V$ ways to accomplish a goal. Or not actually high-$V$, but high-$W$ where $W$ is what we intuitively want, which [has something to do with](https://arbital.com/p/313) $V.$ "Tile the universe with tiny smiley-faces" is low-$W$ so doesn't get considered.
Arguendo, your brain is built to search for policies *it* prefers. If you were looking for a way to open a stuck jar, your brain wouldn't generate the option of detonating a stick of dynamite, because that would be a policy ranked very low in your preference-ordering. So what's the point of searching that part of the policy space?
This argument seems to [prove too much](https://arbital.com/p/3tc) in that it suggests that a chess player would be unable to search for their opponent's most preferred moves, if human brains could only search for policies that were high inside their own preference ordering. But there could be an explicit perspective-taking operation required, and somebody modeling an AI they had warm feelings about might fail to fully take the AI's perspective; that is, they fail to carry out an explicit cognitive step needed to switch off the "only $W$-good policies" filter.
We might also have a *limited* native ability to take perspectives on goals not our own. I.e., without further training, our brain can readily imagine that a chess opponent wants us to lose, or imagine that an AI wants to kill us because it hates us, and consider "reasonable" policy options along those lines. But this expanded policy search still fails to consider policies on the lines of "turn everything into tiny smileyfaces" when asking for ways to produce smiles, because *nobody* in the ancestral environment would have wanted that option and so our brain has a hard time natively modeling it.
## Our brain doesn't automatically search weird parts of policy space
Arguendo: The human brain doesn't search "weird" (generalization-violating) parts of the policy space without an explicit effort.
The potential issue here is that "tile the galaxy with tiny smileyfaces" or "build environmental objects that encrypt streams of 1s or 0s, then reveal secrets" would be *weird* in the sense of violating generalizations that usually hold about policies or consequences in human experience. Not generalizations like, "nobody wants smiles smaller than an inch", but rather, "most problems are not solved with tiny molecular things".
[https://arbital.com/p/2w](https://arbital.com/p/2w) would tend to push the maximum (attainable optimum) of $U$ in "weird" or "extreme" directions - e.g., the *most* smiles can be obtained by making them very small, if this variable is not otherwise constrained. So the unforeseen maxima might tend to violate implicit generalizations that usually govern most goals or policies and that our brains take for granted. Aka, the unforeseen maximum isn't considered/generated by the policy search, because it's weird.
## Conflating the helpful with the optimal
Arguendo: Someone might simply get as far as "$\pi_1$ increases $U$" and then stop there and conclude that a $U$-agent does $\pi_1.$
That is, they might just not realize that the argument "an advanced agent optimizing $U$ will execute policy $\pi_1$" requires "$\pi_1$ is the best way to optimize $U$" and not just "ceteris paribus, doing $\pi_1$ is better for $U$ than doing nothing". So they don't realize that establishing "a $U$-agent does $\pi_1$" requires establishing that no other $\pi_k$ produces higher expected $U$. So they just never search for a $\pi_k$ like that.
They might also be implicitly modeling $U$-agents as only weakly optimizing $U$, and hence not seeing a $U$-agent as facing tradeoffs or opportunity costs; that is, they implicitly model a $U$-agent as having no desire to produce any more $U$ than $\pi_1$ produces. Again psychologizing, it does sometimes seem like people try to mentally model a $U$-agent as "an agent that sorta wants to produce some $U$ as a hobby, so long as nothing more important comes along" rather "an agent whose action-selection criterion entirely consists of doing whatever action is expected to lead to the highest $U$".
This would well-reflect the alleged observation that people allegedly "overlooking the weird alternative" seem more like they failed to search at all, than like they conducted a search but couldn't think of anything.
## Political persuasion instincts on convenient instrumental strategies
If the above hypothetical was true - that people just hadn't thought of the possibility of higher-$U$ $\pi_k$ existing - then we'd expect them to quickly change their minds upon this being pointed out. Actually, it's been empirically observed that there seems to be a lot more resistance than this.
One possible force that could produce resistance to the observation "$\pi_0$ produces more $U$" - over and above the null hypothesis of ordinary pushback in argument, admittedly sometimes a very powerful force on its own - might be a brain running in a mode of "persuade another agent to execute a strategy $\pi$ which is convenient to me, by arguing to the agent that $\pi$ best serves the agent's own goals". E.g. if you want to persuade your boss to give you a raise, one would be wise to argue "you should give me a raise because it will make this project more efficient" rather than "you should give me a raise because I like money". By the general schema of the political brain, we'd be very likely to have built-in support for searching for arguments that policy $\pi$ that we just happen to like, is a *great* way to achieve somebody else's goal $U.$
Then on the same schema, a competing policy $\pi_0$ which is *better* at achieving the other agent's $U$, but less convenient for us than $\pi_1$, is an "enemy soldier" in the political debate. We'll automatically search for reasons why $\pi_0$ is actually really bad for $U$ and $\pi_1$ is actually really good, and feel an instinctive dislike of $\pi_0.$ By the standard schema on the self-deceptive brain, we'd probably convince ourselves that $\pi_0$ is really bad for $U$ and $\pi_1$ is really best for $U.$ It would not be advantageous to our persuasion to go around noticing ourselves all the reasons that $\pi_0$ is good for $U.$ And we definitely wouldn't start spontaneously searching for $\pi_k$ that are $U$-better than $\pi_1,$ once we'd already found some $\pi_1$ that was very convenient to us.
(For a general post on the "fear of third alternatives", see [here](http://lesswrong.com/lw/hu/the_third_alternative/). This essay also suggests that a good test for whether you might be suffering from "fear of third alternatives" is to ask yourself whether you instinctively dislike or automatically feel skeptical of any proposed other options for achieving the stated criterion.)
## The [apple pie problem](https://arbital.com/p/apple_pie_problem)
Sometimes people propose that the only utility function an AGI needs is $U$, where $U$ is something very good, like democracy or freedom or [apple pie](https://arbital.com/p/apple_pie_problem).
In this case, perhaps it sounds like a good thing to say about $U$ that it is the only utility function an AGI needs; and refusing to agree with this is *not* praising $U$ as highly as possible, hence an enemy soldier against $U.$
Or: The speaker may not realize that "$U$ is really quite amazingly fantastically good" is not the same proposition as "an agent that maximizes $U$ and nothing else is [beneficial](https://arbital.com/p/3d9)", so they treat contradictions of the second statement as though they contradicted the first.
Or: Pointing out that $\pi_0$ is high-$U$ but low-$V$ may sound like an argument against $U,$ rather than an observation that apple pie is not the only good. "A universe filled with nothing but apple pie has low value" is not the same statement as "apple pie is bad and should not be in our utility function".
If the "apple pie problem" is real, it seems likely to implicitly rely on or interact with some of the other alleged problems. For example, someone may not realize that their own complex values $W$ contain a number of implicit filters $F_1, F_2$ which act to filter out $V$-bad ways of achieving $U,$ because they themselves are implicitly searching only for high-$W$ ways of achieving $U.$ |
c07288c8-f8d0-4d7e-bdc1-c552ebb56a1b | StampyAI/alignment-research-dataset/lesswrong | LessWrong | How will they feed us
I was wondering how in more optimistic future scenarios people would be fed or get money to buy food et cetera. A common topic in discussions between pro and anti doomers is how the AI will kill us. It is a fair point, killing 8 billion people seems difficult at first glance. But what if the AI does not kill us, how will we eat? Our current agro-industrial food supply is very fragile. And feeding 8 billion people takes a lot of resources. So in the utopian view that question needs answering. There are several proposals.
**Universal basic income**
The AI would do most or all of the work so there is not much to do for humans. Therefore we build a welfare system with a universal basic income or some variant of that. It seems like a rather fragile system, because in this situation humans would have no economic value and so very little bargaining power. There is a great dependency of the AI’s and very little control in the hands of humanity.
One threat to this model could be that it would be very hard to find a broad enough tax base because of tax evasion. Especially in the early stages of UBI companies could go abroad to countries that are more tax-friendly. Or they could just find loopholes and don’t pay taxes (like the system we have today). In the current system taxes are mostly paid by the working middle and upper class (taxing labour is a lot easier than taxing profits or wealth). The wealthy people and the bigger companies are very hard to tax. This is an open problem that needs to be solved. Just stating that we will tax companies that use neural networks or something like that is not a solution. Companies are superintelligent at tax-evasion. I cannot tell you how they won’t pay the tax, but they probably won’t. Unlike the speculative how will a superintelligence kill us, there are a lot of examples here. The default is “pay less or no taxes”, despite large efforts to make companies pay their taxes.
**Humans would still have jobs**
As things are going cognitive work is rapidly being automated and there is still enough physical labour to be done. So maybe there are enough jobs. This might be enough to provide enough value and earn enough money at a subsistence level or above. This option does not get a lot of attention in utopian scenario’s because it is not very interesting. But we might have food.
Bargaining power for higher wages would probably be very low because high value tasks are done by AI and a few select humans. So this scenario is probably pretty dystopian. And a very possible threat is that eventually all the jobs will get automated anyhow. And than we are back in a scenario of great dependency and fragility.
A more dystopian variant would be that a few humans that provide useful work will be provided with food and shelter. Maybe because robots are more expensive to operate than human workers.
**Self-sufficiency**
I had to add this to be complete in my overview of ways to provide for food. When you do not have a job and do not receive any welfare there is the option to grow your own food. Productivity will be very low. Industrial agriculture increased productivity by a factor of 5 to 10 per acre so it would still mean that a lot of people would have no food. Obviously a dystopian scenario and long term future is bleak. When AI overlords don’t kill and don’t bother with us, eventually the land will be used by the AI (e.g. use the biomass for energy). Or because the AI’s don’t need an ecosystem pollution and environmental degradation will threaten human habitat.
**The AI overlords will provide**
Let’s get more creative. No capitalism, taxes and stuff like that like in the UBI system. The AI’s will produce and provide. A vulnerability is that to feed 8 billion people you need a lot of resources and that would be a great sacrifice for the AI overlords. We do not need an alignment of the “don’t kill the humans” type. We need alignment in the sense that the AI would sacrifice valuable energy and resources for humanity (that has no productive value anymore).
The AI would need a strong intrinsic motivation to keep humans alive. This would set the bar for the alignment problem very high. Only intrinsic motivation would work. The motivation needs to be so deep that the AI will be motivated to make sub-agents that retain this motivation to keep humans alive.
Not killing people as a motivation for the AI or cause no harm would not suffice. The AI would need to be motivated to feed humanity at scale. And its improved sub-agents in a few hundred years would need to retain this motivation.
**Humans will be replaced by AI**
I am amazed by some of the arguments made in the anti-doomer camp. An example is the discussion on the Bankless podcast. First Eliezer is interviewed and does his thing and scares the shit out of the hosts (credit to the hosts, because they really listened to Eliezer so they were pretty shaken by that). Then they invited Robin Hanson and he disagrees with Eliezer. To me the Hanson version of the future is still dystopian and I think most people would feel the same way if they took the time to study his views. It is not that I think he is wrong in the sense that it might play out as he says it will. I do not agree that this is a good future.
In his view humans will be replaced by AI and it will create a more interesting world than humanity did. Although this is not called doomerism this scenario sounds pretty dystopian to me. The question how will they feed us gets answered, they don’t feed us.
There probably isn’t a really good logical argument to say “humanity not die” is good. Call it an axiom or just a feeling or call me just old-fashioned, but I think humanity should not die. If we disagree on that, so be it. Hanson has very good points, but I think humanity should live.
**Simulation and uploads**
And there is of course the possibility to simulate humans instead of feeding them. Could be a very efficient solution for the aligned AI that has the task to make humanity prosper. Obviously humans will have almost no control in that situation. A threat to this situation is that it takes a lot of energy resources to upkeep this simulated humanity. The AI’s will need to see a benefit to sacrifice all these energy resources to stimulate people. A very strong alignment is thus needed. Not the type “not kill humans” alignment.
A world that has simulated humans poses a high existential risk in itself at any time the overlords could just cut the power. Again this is a big ask for alignment and goes a lot further than just not kill humans.
**The X-risk remains**
In these scenario’s the existential risk remains. And because the solutions are fragile the risk is significant. At least higher than 10% per century. Which spells doom in the long run. In a lot of the more optimistic scenario’s I see a lot of fragility and rising X-risk. I like the honesty of the Hanson argument. He admits that in the future there will probably be no humans and he is ok with that.
Furthermore I think the “not kill humans” alignment is not enough. That is an argument made often by the optimists. I have a hard time understanding how our current agricultural industry can coexist with an AI that does not care about humanity and does not kill humanity. In these scenario’s we might not die immediately, but existential risks rise.
**Conclusion**
Feeding the world is a hard problem that takes a lot of resources. And it will probably get harder with a degrading environment. Humans that provide no economic value will have a hard time getting the resources that they need if the world stays anything like we have seen for the past 10.000 years. There is no precedent for providing for people that provide no economic value at scale. It is hard for me to write this and I am very sorry to maybe offend the people that are in a dependant situation. Providing for people that cannot provide for themselves is a good thing in my view, but to do this at scale will be very difficult and is unprecedented. We humans are often good people and we therefore often provide for the people that cannot provide for themselves. Creating a system that will do the same seems very hard.
Defending a system that will provide for us all needs very strong proof. And thus it shifts the burden of proof. Just like something killing all humanity is unprecedented and therefore requires strong proof.
This argument is far from complete. My goal is to view the problem from a different perspective (where the burden of proof is different). It also invites you to explore other “optimistic” ideas and see how humanity will survive in those optimistic scenarios. Is there food and is the X-risk decreasing. As you will start to explore these optimistic scenarios and if you are like me then you will probably find them often times quite dystopian.
*p.s. not a native speaker and did not use AI to improve the text, hope you still get the gist of it* |
97d31f68-786c-456f-9618-8aa075314b0d | trentmkelly/LessWrong-43k | LessWrong | Are Speed Superintelligences Feasible for Modern ML Techniques?
Disclaimer
I am very ignorant about machine learning.
Introduction
I've frequently heard suggestions that a superintelligence could dominate humans by thinking a thousand or million times faster than a human. Is this actually a feasible outcome for prosaic ML systems?
Why I Doubt Speed Superintelligence
One reason I think this might not be the case is that the "superpower" of speed superintelligences is faster serial thought. However, I'm under the impression that we're already running into fundamental limits to the serial processing speed and can't really make them go much faster:
> In 2002, an Intel Pentium 4 model was introduced as the first CPU with a clock rate of 3 GHz (three billion cycles per second corresponding to ~ 0.33 nanoseconds per cycle). Since then, the clock rate of production processors has increased much more slowly, with performance improvements coming from other design changes.
>
> Set in 2011, the Guinness World Record for the highest CPU clock rate is 8.42938 GHz with an overclocked AMD FX-8150 Bulldozer-based chip in an LHe/LN2 cryobath, 5 GHz on air.[4][5] This is surpassed by the CPU-Z overclocking record for the highest CPU clock rate at 8.79433 GHz with an AMD FX-8350 Piledriver-based chip bathed in LN2, achieved in November 2012.[6][7] It is also surpassed by the slightly slower AMD FX-8370 overclocked to 8.72 GHz which tops of the HWBOT frequency rankings.[8][9]
>
> The highest base clock rate on a production processor is the IBM zEC12, clocked at 5.5 GHz, which was released in August 2012.
Of course the "clock rate" of the human brain is much slower, but it's not like ML models are ever going to run on processors with significantly faster clock rates. Even in 2062, we probably will not have any production processors with > 50 GHz base clock rate (it may well be considerably slower). Rising compute availability for ML will continue to be driven by parallel processing techniques.
GPT-30 would not have considerably faste |
e8057f9f-b6ec-4bc3-979c-53ff87616e9e | trentmkelly/LessWrong-43k | LessWrong | “The Wisdom of the Lazy Teacher”
> If you teach a man to fish, he’ll be eating fish for a lifetime. Teach him to teach himself to fish, and you’ve given him the fundamental life skill of figuring things out on his own.
— Wylie Beckert |
84b4d01a-f590-4331-af91-acaf452ae995 | trentmkelly/LessWrong-43k | LessWrong | Solving the two envelopes problem
Suppose you are presented with a game. You are given a red and a blue envelope with some money in each. You are allowed to ask an independent party to open both envelopes, and tell you the ratio of blue:red amounts (but not the actual amounts). If you do, the game master replaces the envelopes, and the amounts inside are chosen by him using the same algorithm as before.
You ask the independent observer to check the amounts a million times, and find that half the time the ratio is 2 (blue has twice as much as red), and half the time it's 0.5 (red has twice as much as blue). At this point, the game master discloses that in fact, the way he chooses the amounts mathematically guarantees that these probabilities hold.
Which envelope should you pick to maximize your expected wealth?
It may seem surprising, but with this set-up, the game master can choose to make either red or blue have a higher expected amount of money in it, or make the two the same. Asking the independent party as described above will not help you establish which is which. This is the surprising part and is, in my opinion, the crux of the two envelopes problem.
This is not quite how the two envelopes problem is usually presented, but this is the presentation I arrived at after contemplating the original puzzle. The original puzzle prescribes a specific strategy that the game master follows, makes the envelopes indistinguishable, and provides a paradoxical argument which is obviously false, but it's not so obvious where it goes wrong.
Note that for simplicity, let's assume that money is a real quantity and can be subdivided indefinitely. This avoids the problem of odd amounts like $1.03 not being exactly divisible by two.
The flawed argument
The flawed argument goes as follows. Let's call the amount in the blue envelope B, and in red R. You have confirmed that half the time, B is equal to 2R, and half the time it's R/2. This is a fact. Surely then the expected value of B is (2R * 50% + R/2 * 50% |
dc93b784-781e-496c-a07f-9cda6c416c0a | trentmkelly/LessWrong-43k | LessWrong | [About Me] Cinera's Home Page
I aspire to become an alignment theorist — all other details are superfluous — I leave them here anyway for historical purposes.
----------------------------------------
Introduction
I have a set of questions I habitually ask online acquaintances that pique my interest/who I want to get to know better. Many want to know my answer to those same questions.
It would be nice to have a central repository introducing myself that I can keep up to date.
----------------------------------------
Questions
A. What do you care about?
* What are you passionate about?
* What animates you?
B. What do you think is important?
C. What do you want/hope to do with your life?
D. What do you want/hope out of life?
E. Where are you coming from?
* Context on who you are/what made you the person you are today.
F. How do you spend your time?
* Work, volunteer, education, etc.
* Basically, activities that aren't primarily for leisure/pleasure purposes.
G. What do you do for recreation/leisure/pleasure/fun time?
* What are your hobbies?
----------------------------------------
Answers
A.
> What do you care about?
>
> * What are you passionate about?
> * What animates you?
I care about creating a brighter future for humanity. I believe a world far better than any known to man is possible, and I am willing to fight for it.
I want humanity to be fucking awesome. To take dominion over the natural world and remake it in our own image, better configured to serve our values.
I want us to be as gods.
I outlined what godhood means for me here.
I think that vision is largely what drives me, what pushes me forward and keeps me going.
B.
> B. What do you think is important?
Mitigating Existential Risk/Pursuing Existential Security
The obvious reasons are obvious.
But I am personally swayed by astronomical waste. I don't want us to squander our cosmic endowment. Especially because our future can be so wonderful, I think it would be very sad if we never re |
6ca0d187-72fb-462c-8d43-1744755b1e00 | trentmkelly/LessWrong-43k | LessWrong | Fermi Paradox: Iron Age Milky Way
I have some observations about pre-industrial civilization and their relationship to why we don't see evidence of aliens. These observations, which in my opinion were consistent across human history despite very diverse cultures, can apply even to alien cultures.
Because this Great Filter isn't biological, temporal, or even technological -- it's structural and inherent to how civilizations develop in the first place.
The road from Stone Age to Iron Age seems pretty obvious. Humans have independently experienced this phenomenon several times in history. So is the road from Industrial Age to Information Age; not as obvious or dramatic, but going from steamships to spaceships seems to be a pretty inevitable progression, done by polities of varying political and economic situations.
The problem is that event between Stone Age and Information Age, the Industrial Age. Unlike pre-industrialization or post-industrialization societies, which seem to come about in all sorts of political conditions, industrialization only came from a very specific set of circumstances that may be impossible to replicate on other planets.
The Industrial Revolution was not kicked off just from having a handful or even plethora of inventions. We had the technological powder keg for the Industrial Revolution for centuries, what was needed was a political situation that allowed society to use the pieces in a way to change our mode of production. Because the lead-up to the Industrial Revolution saw some deeply, deeply profound changes that I think were both necessary and also very unlikely. And not just improbable, but 'straight-up most civilizations will never have the prerequisites, because the path to Iron Age civilization opposes the path to industrialization'.
Any pre-industrial civilization of sufficient size relative to its neighbors was an autocracy, and even the smaller ones such as the North American Confederations and the Swiss Republic had more autocratic than democratic elements |
1b115272-dba9-4b16-b5ec-69bf301c57c0 | StampyAI/alignment-research-dataset/special_docs | Other | A Strategic Analysis of Portfolio Compression.
A Strategic Analysis of Portfolio Compression
Extended Abstract
Katherine Mayo
University of Michigan
kamayo@umich.eduMichael P. Wellman
University of Michigan
wellman@umich.edu
ABSTRACT
We analyze portfolio compression, the netting of cycles in a finan-
cial network, as a strategic decision made by firms within a debt
network. We define a network game in which firms have only lo-
cal information and ask what criteria the firms should consider
in their decision to compress. We propose a variety of heuristic
strategies and evaluate them using agent-based simulation and
empirical game-theoretic analysis. Our results show that some sim-
ple strategies based on local information perform better than the
unconditional strategies of always agreeing or disagreeing to a
compression and that when the decision is made strategically, the
price of anarchy is always high.
KEYWORDS
financial network; portfolio compression; game theory
ACM Reference Format:
Katherine Mayo and Michael P. Wellman. 2021. A Strategic Analysis of
Portfolio Compression: Extended Abstract. In Proc. of the 20th International
Conference on Autonomous Agents and Multiagent Systems (AAMAS 2021),
Online, May 3–7, 2021, IFAAMAS, 3 pages.
1 INTRODUCTION
The idea of portfolio compression is to simplify a web of financial
obligations by canceling a cycle of debt, leaving each party’s net
position the same. Consider Figure 1 where financial institution A
owes money to institution B, B owes a like amount to institution
C, and C owes the same amount to A and all debts have the same
interest rates and maturity dates. When such a cycle is identified
the institutions on the cycle must make the compression decision,
deciding to accept or reject the canceling of debts. Intuitively, can-
celing the three debts leaves all parties in the same net position,
while simplifying their balance sheets. If all are solvent and able to
pay the debts when due, the operation of compressing portfolios by
canceling the debts has no effect and the compression decision is of
little interest. With risk of default, however, portfolio compression
is not generally neutral and may affect stability in the network.
Prior analyses of compression have determined that preference
for compression among individuals in the network is connected
to characteristics of the network. In some cases, compression of a
cycle can benefit financial institutions because it limits financial
contagion by removing paths of default propagation. On the other
hand, portfolio compression can also limit the ability for parts of
the network to absorb losses from an insolvent institution, allowing
contagion to spread that may have been avoided [2, 4].
Proc. of the 20th International Conference on Autonomous Agents and Multiagent Systems
(AAMAS 2021), U. Endriss, A. Nowé, F. Dignum, A. Lomuscio (eds.), May 3–7, 2021, Online.
©2021 International Foundation for Autonomous Agents and Multiagent Systems
(www.ifaamas.org). All rights reserved.
Figure 1: A simple example of a debt cycle.
While these analyses offer us a look at what aspects of the net-
work play an important role in the effects of compression, they are
generally conducted with a global view of the network and where
details of the external shock are known. In practice, the compres-
sion decisions are made by institutions with a limited view of the
network and prior to resolution of uncertain financial events. Eval-
uation of compression decisions, therefore, necessitates a model
accounting for uncertainty and incomplete information on the state
of the network. Further, prediction of what these firms would de-
cide should also consider strategic effects, recognizing that optimal
policies generally depend on how the other firms make their deci-
sions.
To analyze the broader context of compression decisions, we
define a one-shot game on a financial network, with nodes repre-
senting financial institutions connected by directed edges represent-
ing debt obligations. Nodes make a strategic decision to accept or
decline a proposed compression using a strategy from a set of heuris-
tic strategies we propose, which may provide useful indications of
potential negative effects of compression to the voting node. To
evaluate these strategies, we randomly generate networks with debt
cycles, and employ agent-based simulation to determine compres-
sion decisions and outcomes at different insolvent node recovery
rates. Following the methodology of empirical game-theoretic anal-
ysis(EGTA) [ 3,5], we identify Nash equilibria among the heuristic
strategies.
Within this environment, we have found evidence that for an
individual node, the optimal policy is generally conditional on
features of the local network and the individual agent’s balance
sheet. When the compression decision is made strategically by the
nodes, we find that the price of anarchy is high for all recovery
rates.
2 COMPRESSION GAME
The game starts with a set of 𝑁={1,...,𝑛}nodes each represent-
ing a financial institution to which 𝑛agents are randomly assigned.
Each node is also given a random endowment of external assets,
𝑒𝑖∼𝑈[10,20].
Extended Abstract
AAMAS 2021, May 3-7, 2021, Online
1599
We model a liability as a directed edge 𝑙𝑖 𝑗of value𝑣representing
𝑣units owed by node 𝑁𝑖to node𝑁𝑗. The same edge represents an
internal asset for node𝑁𝑗. We refer to a node 𝑖’stotal liabilities,
𝐿𝑖as the sum of the values on its liability edges. And define the
sum of values on node 𝑖’s internal assets in addition to its external
asset holdings as the node’s total assets,𝐴𝑖.Insolvency can then be
formally defined for node 𝑖as the case where 𝐴𝑖−𝐿𝑖<0.
Liability edges are randomly added to the network, creating a set
𝐸of network edges, until the network contains at least one cycle
𝐶=(𝑁𝑐,𝐸𝑐), where𝑁𝑐⊆𝑁and𝐸𝑐⊆𝐸, of three or more edges.
Note that at this point, all nodes in the network remain solvent.
The game proceeds with a compression vote where agents on a
cycle independently cast their vote for the proposed compression
using their selected strategy from the strategy set. A compression
is performed only if all 𝑁𝑐nodes vote in favor of the compression.
We use𝜇=𝑚𝑖𝑛𝑖,𝑗∈𝑁𝑐,𝑖≠𝑗𝑙𝑖 𝑗to denote the smallest liability on the
cycle. Then a compression of cycle𝐶is defined by the revised set of
liabilities𝑙𝑐
𝑖 𝑗for edges on the cycle:
∀𝑙𝑖 𝑗∈𝐸𝑐𝑙𝑐
𝑖 𝑗=𝑙𝑖 𝑗−𝜇.
An example of a potential cycle and its subsequent compression
can be seen in Figure 2.
Figure 2: A simple debt cycle before and after compression
by two units.
We force insolvency in the network by a shock to external assets,
modeled as the failed realization of outside investments. Nodes on
a cycle are randomly assigned to invest their remaining external
assets in one of two investments with equal probability. All nodes
not on a cycle are randomly assigned to either invest their remain-
ing external assets in one of two investment or to not make an
investment, with equal probability. One of the investments will be
randomly chosen to fail and nodes assigned in it will not recoup
any of their investment.
We then make all nodes pay off their debts, using the greatest
clearing payment algorithm by Veraart [4]to calculate the payments
for each node the resolve the network. At the end of the game, the
payoff to the agent assigned to node 𝑁𝑖is the final equity 𝐴𝑖−𝐿𝑖.
2.1 Strategies
Agents may choose from among a set of heuristic strategies to make
the compression decision. Each strategy emphasizes a different
piece of information available to the node making the decision
including assets, liabilities, and cycle’s 𝜇-value. We do not claim that
our collection of strategies is exhaustive, but rather that they cover a
variety of strategically relevant features. The financial institutions
we are modeling may always accept a potential compression to
comply with regulation or out of a desire to simplify balance sheets,
so we include in our analysis the unconditional acceptance strategy
YES and its complement the unconditional reject strategy NO.3 ANALYSIS
We analyze the compression games by extensive simulation of se-
lected strategy profiles through a process known as EGTA. We
follow an iterative procedure for selecting profiles to simulate,
geared toward identifying symmetric mixed-strategy Nash equilib-
ria, using methods similar to those employed in prior EGTA studies
[1,6]. For some of our games instances, we also apply a method
called deviation-preserving reduction (DPR), which approximates a
many-player game by one with fewer players [7].
Through this approach, we obtain approximate Nash equilibria
for six𝑛=10player games, defined by varying settings of the re-
covery rate: 𝛼∈{0,0.1,0.3,0.5,0.7,1.0}. Games with 𝛼∈{0.3,0.7}
were reduced from 10-player to 4-player games using DPR.
To analyze the effects of a strategic compression decision on
the network, we use the price of anarchy using total equity as the
measure of social welfare. Total equity is defined as:
𝐸=Õ
𝑖[𝑒𝑖+Õ
𝑗𝑙𝑗𝑖−𝐿𝑖].
We run the compression game 10,000 times, comparing for each
network the outcome with a strategic compression decision made
with the Nash equilibria to the outcome under the optimal decision
for the network. Then we can define the price of anarchy, 𝑃, as:
𝑃=𝐸𝑠
𝑚𝑎𝑥{𝐸𝑐,𝐸𝑛𝑐}
where𝐸𝑠is the total equity in the network where the compression
decision comes from a strategic vote, 𝐸𝑐is the total equity in the
network where the cycle is compressed, and 𝐸𝑛𝑐is the total equity
in the network without compression.
4 PRELIMINARY RESULTS
We find that strategies employing simple, local network informa-
tion are preferred to the unconditional strategies by nodes making
the compression decision. The only exception is when 𝛼=0and
the pure Nash equilibrium is YES. The results of the price of an-
archy experiment shows that the average price of anarchy is high
and close to 1 for all 𝛼values. Thus, while self-interested decision
makers will not choose the social optimum, the cost to the network
of a strategic compression decision is not high.
5 CONCLUSION
Our study provides a first look at the strategic decision facing nodes
in a financial network presented with compression opportunities.
We focus on the implications for financial contagion, anticipating
how a compression may insulate or expose an institution to the
effects of an asset shock. Taking a perspective ex ante to the poten-
tial shock, we ask what types of strategies nodes with imperfect
information should use to make such a decision.
From our results, we can confirm that adopting a heuristic strat-
egy based on privately held information will often be the better
response for an individual node in the network. When we compare
the choice made by strategic choice to the optimal decision, we
see that the price of anarchy remains high. Therefore, allowing
compression to be a decision made strategically by nodes in the
network will not necessarily always negatively affect systemic risk.
Extended Abstract
AAMAS 2021, May 3-7, 2021, Online
1600
REFERENCES
[1]Ben-Alexander Cassell and Michael P. Wellman. 2013. EGTAOnline: An experiment
manager for simulation-based game studies. In Multi-Agent Based Simulation XIII.
Lecture Notes in Artificial Intelligence, Vol. 7838. Springer, 85–100.
[2]Steffen Schuldenzucker and Sven Seuken. 2020. Portfolio Compression in Financial
Networks: Incentives and Systemic Risk. (Aug. 2020). https://ssrn.com/abstract=
3135960 Working Paper.
[3]Karl Tuyls, Julien Perolat, Marc Lanctot, Edward Hughes, Richard Everett, Joel Z.
Leibo, Csaba Szepesvári, and Thore Graepel. 2020. Bounds and dynamics for
empirical game-theoretic analysis. Autonomous Agents and Multi-Agent Systems
34, 7 (2020).[4]Luitgard A. M. Veraart. 2019. When does portfolio compression reduce systemic
risk? (Jan. 2019). https://ssrn.com/abstract=3311176 Working Paper.
[5]Michael P. Wellman. 2016. Putting the agent in agent-based modeling. Autonomous
Agents and Multi-Agent Systems 30 (2016), 1175–1189.
[6]Michael P. Wellman, Tae Hyung Kim, and Quang Duong. 2013. Analyzing Incen-
tives for Protocol Compliance in Complex Domains: A Case Study of Introduction-
Based Routing. In Twelfth Workshop on the Economics of Information Security
(Washington).
[7]Bryce Wiedenbeck and Michael P. Wellman. 2012. Scaling simulation-based game
analysis through deviation-preserving reduction. In 11th International Conference
on Autonomous Agents and Multiagent Systems (Valencia). 931–938.
Extended Abstract
AAMAS 2021, May 3-7, 2021, Online
1601 |
3c512a02-604e-407a-9a56-cb1b73d46499 | trentmkelly/LessWrong-43k | LessWrong | Human preferences as RL critic values - implications for alignment
TLDR: Human preferences might be largely the result of a critic network head, much like that used in SOTA agentic RL systems. The term "values" in humans might mean almost exactly what it does in RL: an estimate of discounted sum of future rewards. In humans this is based on better and more abstract representations than current RL systems.
Work on aligning RL systems often doesn't address the critic system as distinct from the actor. But using systems with a critic head may provide a much simpler interface for interpreting and directly editing the system's values and, therefore, its goals and behavior. In addition, including a powerful critic system may be advantageous for capabilities as well.
One way to frame this is that human behavior, and therefore a neuromorphic AGI, might well be governed primarily by a critic system, and that's simpler to align than understanding the complex mess of representations and action habits in the remainder of the system.
Readers will hopefully be familiar with Steve Byrnes' sequence intro to brain-like AGI safety. What I present here seems to be entirely consistent with his theories. Post 14 of that sequence, his recent elaboration, and other recent work[1] present similar ideas. We use different terminology[2] and explanatory strategies, and I focus more on specifics of the critic system, so hopefully the two explanations are not redundant.
The payoff: a handle for alignment.
Wouldn't it be nice if the behavior of a system were governed by one system, and it provided an easily trainable (or even hand-editable) set of weights? And if that system had a clear readout, meaning something like "I'll pursue what I'm currently thinking about, as a goal, with priority 0-1 in this context"?
Suppose we had a proto-AGI system including a component with the above properties. That subsystem is what I'm terming the critic. Now suppose further that this system is relatively well-trained but is (by good planning and good fortune) still under |
5e2c535e-4d80-4702-9a0d-64d16ef8d05e | trentmkelly/LessWrong-43k | LessWrong | GPT-8 may not be ASI
Imagine the gap between GPT-2 and GPT-3. Imagine the gap between GPT-3 and GPT-4. Let's suppose GPT-5 has the same gap with GPT-4, and GPT-6 has the same gap with GPT-5, and so on till GPT-8.
It is still entirely possible that GPT-8 is not better than the best of humanity at all possible tasks. There can still exist multiple tasks at which the best humans beat GPT-8.
One way to build this intuition is the following thought experiment: Try imagining a GPT-5 that makes headway in GPT4's weaknesses, but has its own weaknesses. Try imagining a GPT-6 that makes headway in GPT-5's weaknesses, but has its own weaknesses. Iterate till you get GPT-8.
Here's one run of this thought experiment:
My thought experiments focusses on mathematics, but you could also pick a natural language example and run your own thought experiment instead.
GPT4 has high error rate per reasoning step, and sequentially chaining many such reasoning steps blows up error to 1. GPT4 cannot accurately answer square root of 1234.
Let's suppose GPT-5 can look at sequentially chained reasoning steps that are likely to occur in training, and for those chains it is able to perform the entire chain with low error. But if it needs to join separate reasoning steps in a novel manner, it again has high error. GPT5 answers square root of 1234 correctly because it has seen previous square root examples and internalises all the steps of the square root algorithm (in a way that doesn't add up error with each step). But it cannot answer (1234 + 5678^(1/2))^(1/3).
Let's suppose GPT-6 can do the same thing that GPT-5 does, but first it can transform the input space into a variety of other spaces. For instance it can transform numbers into a space that naturally does binomial expansions (including of fractional exponents) and it can transform symbols into a space that naturally does arithmetic using a FIFO stack and it can transform symbols into a space that does first-order logic. Let's suppose however that GP |
3c99004e-fae3-4a56-8c42-1fa027f75c53 | trentmkelly/LessWrong-43k | LessWrong | Feature-Based Analysis of Safety-Relevant Multi-Agent Behavior
Introduction
TL;DR
Today’s AI systems are becoming increasingly agentic and interconnected, giving rise to a future of multi-agent (MA) systems (MAS). It is believed that this will introduce unique risks and thus require novel safety approaches. Current research evaluating and steering MAS is focused on behavior alone i.e inputs and outputs. However, we hypothesize that internal-based techniques might provide higher signal in certain settings. To study this hypothesis, we sought to answer the following questions in multi-LLM agent Iterated Prisoner’s Dilemma (PD):
* Can we find meaningful safety-relevant internal representations?
* Do mechanistic interpretability (MI) techniques enable more robust steering compared to baselines e.g. prompting?
We observe that PD agents systematically develop internal representations associated with deception. If these results were to generalize to real-world settings, MI tools could be used to enhance monitoring systems in dynamic MA settings.
The objectives for this post are:
* Outline our approach and results during the Women in AI Safety Hackathon in a more accessible format.
* Present additional experiments run to address the feedback received
* Sketch a potential project plan for future work
Background and Motivation
From Single- to Multi-Agent AI Safety. In addition to interventions that reduce risks from individual AI systems we might need safety measures that focus on the interactions among AI agents, between them, humans and surrounding digital or physical infrastructure. Such measures could be classified into the following broad clusters:
* Monitoring: Methods and infrastructure to evaluate MA risks
* Behavioral (based on inputs and outputs) or mechanistic (based on model internals) evaluations to:
* Evaluate cooperative capabilities and propensities
* Test for dangerous capabilities e.g manipulation, overriding safeguards
* Oversight layers: deployment-time infrastructure that enables actors |
50f1ef9f-ad92-4f90-95aa-5c611975308c | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Invulnerable Incomplete Preferences: A Formal Statement
*Produced as part of the* [*SERI ML Alignment Theory Scholars Program*](https://serimats.org/) *- Summer 2023 Cohort. My thanks to Eric Chen, Elliott Thornley, and John Wentworth for invaluable discussion and comments on earlier drafts. All errors are mine.*
This article presents a few theorems about the invulnerability of agents with incomplete preferences. Elliott Thornley’s ([2023](https://www.alignmentawards.com/winners#background)) proposed approach to the AI shutdown problem relies on these preferential gaps, but [John Wentworth and David Lorell](https://www.alignmentforum.org/posts/bzmLC3J8PsknwRZbr/why-not-subagents) have argued that they make agents play strictly dominated strategies.[[1]](#fn3m1r2zyk7s5) I claim this is false.
Summary
-------
Suppose there exists a formal description of an agent that willingly shuts down when a certain button is pressed. Elliott Thornley’s (2023) Incomplete Preference Proposal aims to offer such a description. It’s plausible that, for it to constitute a promising approach to solving the AI shutdown problem, this description also needs to (i) permit the agent to be broadly capable and (ii) assure us that the agent will remain willing to shut down as time passes. This article formally derives a set of sufficient conditions for an agent with incomplete preferences to satisfy properties relevant to (i) and (ii).
A seemingly relevant condition for an agent to be capably goal-directed is that it avoids sequences of actions that foreseeably leave it worse off.[[2]](#fntinxlqvh5gl) I will say that an agent satisfying this condition is *invulnerable*. This is related to two intuitive conditions. The weaker one is *unexploitability*: that the agent cannot be forcibly money pumped (i.e., compelled by its preferences to sure loss). The stronger condition is *opportunism*: that the agent never accepts sure losses or foregoes sure gains.[[3]](#fnj21kg28mi9h)
To achieve this, I propose a dynamic choice rule for agents with incomplete preferences. This rule, *Dynamic Strong Maximality* (DSM), requires that the agent consider the available plans that are acceptable at the time of choosing and, among these, pick any plan that wasn’t previously strictly dominated by any other such plan. I prove in section 2 that DSM-based backward induction is sufficient for invulnerability, even under uncertainty.
Having shown that incompleteness does not imply that agents will pursue dominated strategies, I consider the issue of whether DSM leads agents to act as if their preferences were complete. Section 3 begins with a conceptual argument suggesting that DSM-based choice under uncertainty will not, even behaviourally, effectively alter the agent’s preferences over time. This argument does not apply when the agent is unaware of the structure of its decision tree, so I provide some formal results for these cases which bound the extent to which preferences can de facto be completed.
These results show that there will always be sets of options with respect to which the agent never completes its preferences. This holds no matter how many choices it faces. In particular, if no new options appear in the decision tree, then no amount of completion will occur; and if new options do appear, the amount of completion is permanently bounded above by the number of mutually incomparable options. These results apply naturally to cases in which agents are unaware of the state space, but readers sceptical of the earlier conceptual argument can re-purpose them to make analogous claims in standard cases of certainty and uncertainty. Therefore, imposing DSM as a choice rule can get us invulnerability without sacrificing incompleteness, even in the limit.
1. Incompleteness and Choice
=============================
The aim of this brief section is to show that the results that follow in section 2 do not require transitivity. Some question the requirement of transitivity when preferences are incomplete (cf [Bradley 2015](https://personal.lse.ac.uk/bradleyr/pdf/Transitive%20Choice%20(NBFest4).pdf), p. 3), but if that doesn’t apply to you, a quick skim of this section will provide enough context for the rest.
1.1. Suzumura Consistency
--------------------------
Requiring that preferences be transitive may require that they be complete. To see this, notice that the weak preference relation ⪰.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
on a set of prospects X is transitive just in case ∀α,β,γ∈X, α⪰β and β⪰γ implies α⪰γ. Suppose an agent does weakly prefer α to β and β to γ but has no preference between α and γ. Then transitivity is violated. Suzumura ([1976](https://www.jstor.org/stable/2553273)) proposes a weakening of transitivity for agents with incomplete preferences, which allows for such preferences while preserving some desirable properties. We will say that a weak preference relation is *strongly acyclic* just in case
**(Strong Acyclicity)** ∀α1,...,αn∈X, α1⪰...⪰αn implies αn⊁α1.
We'll say that an agent whose preferences satisfy this property is *Suzumura consistent*. Bossert and Suzumura ([2010](https://www.jstor.org/stable/j.ctvjhzq9s)) show that such an agent has some noteworthy features:
1. Strong acyclicity rules out cycles containing at least one strict preference. This will make Suzumura consistent agents invulnerable to (forcing) money pumps.
2. Strong acyclicity is necessary and sufficient for the existence of a complete and transitive extension of the agent’s preference relation.
3. Any preference relation that is both strongly acyclic and complete is transitive.
Preferences that are incomplete may also be intransitive. Whether or not transitivity is a rationality condition, strong acyclicity is weaker but preserves some desirable properties. Below I will mostly just assume strong acyclicity. But since transitivity implies strong acyclicity, all the (sufficiency) results likewise apply to agents with transitive preferences.
1.2. Strong Maximality
-----------------------
Bradley ([2015](https://personal.lse.ac.uk/bradleyr/pdf/Transitive%20Choice%20(NBFest4).pdf)) proposes a choice rule for Suzumura consistent agents which differs from the standard condition—Maximality—for agents with incomplete preferences.[[4]](#fn0luewbcqek9) This rule—Strong Maximality—effectively asks the agent to eliminate dominated alternatives in the following way: eliminate any options that you strictly disprefer to any others, then if you are indifferent between any remaining options and any eliminated ones, eliminate those as well. To state the rule formally, we define a sequence ⟨¯Cτ(A)⟩∞τ=0 satisfying
¯C0(A)={α∈A:β≻α∃β∈A} and ¯Cτ(A)={α∈A:[β∼α]∧[β∈¯Cτ−1(A)]∃β∈A} whenever τ≥1
for any nonempty set of prospects A. We can then state the rule as
**(Strong Maximality)** CSM(A)={α∈A:α∉⋃∞τ=0¯Cτ(A)}.
Let’s see intuitively what this rule captures. Suppose α≻β and β∼γ but α⋈/γ.[[5]](#fn1qvo9b7x78t) The traditional maximality rule implies that C({α,β,γ})={α,γ}. But strong maximality simply outputs {α}. This is intuitive: don’t pick an option that’s just as bad as an option you dislike. And, more importantly, Theorem 2 of Bradley (2015) shows that Suzumura consistency is both necessary and sufficient for decisive, strongly maximal choice.[[6]](#fnm5fwt63x78s)
2. Uncertain Dynamic Choice
============================
In this section I prove some theorems about the performance of agents with incomplete preferences in dynamic settings. I will amend strong maximality slightly to adapt it to dynamic choice, and show that this is sufficient to guarantee the invulnerability of these agents, broadly construed. I will say that an agent is *invulnerable* iff it is both unexploitable and opportunistic.
An agent is *unexploitable* just in case it is immune to all forcing money pumps. These are sequences of decisions through which an agent is compelled by their own preferences to sure loss. An agent is *opportunistic* just in case it is immune to all non-forcing money pumps. These are situations in which sure loss (or missed gain) is merely permissible, according to an agent’s preferences.
A few aspects of the broad approach are worth flagging. Normative decision theory has not settled on an ideal set of norms to govern dynamic choice. I will therefore provide results with respect to each of the following three dynamic choice principles: naivety, sophistication, and resoluteness. (More details below.) Agents will in general behave differently depending on which principle they follow. So, evaluating the behaviour resulting from an agent’s preferences should, at least initially, be done separately for each dynamic choice principle.
2.1. Framework
---------------
The notation going forward comes from an edited version of Hammond's ([1988](https://link.springer.com/article/10.1007/bf00129168)) canonical construction of Bayesian decision trees. I will only describe the framework briefly, so I’ll refer interested readers to Rothfus ([2020a](http://gerardrothfus.com/wp-content/uploads/2021/02/University_of_California_Irvine_Thesis-3.pdf)) section 1.6. for discussion.
**Definition 1** A *decision tree* is an eight-tuple ⟨N,C,N,X,N+(⋅),n0,S(⋅),γ(⋅)⟩ where
1. N is a finite set of nodes partitioned into C, N, and X.
2. C is the set of choice nodes. Here agents can pick the node’s immediate successor.
3. N is the set of natural nodes. Agents have credences over their possible realisations.
4. X is the set of terminal nodes. These determine the outcome of a trajectory.
5. N+:N→P(N) is the immediate successor function.
6. n0 is the initial node.
7. S:N→P(S) assigns the set of states that remain possible once a node is reached.
8. γ(x) is the consequence of reaching terminal node x.
**Definition 2** A *set of plans* available at node n of tree T, denoted Ω(T,n), contains the propositions and continuations consistent with the agent being at that node. Formally,
1. Ω(T,n)={S(n)} if n∈X.
2. Ω(T,n)={S(n′)∧π(n′):n′∈N+(n),π(n′)∈Ω(T,n′)} if n∈C.
3. Ω(T,n)={⋀|N+(n)|j=1[S(nj)→π(nj)]:nj∈N+(n),π(nj)∈Ω(T,nj)} if n∈N.[[7]](#fnnh3nxfrjlg)
I will begin the discussion below with dynamic choice under certainty, but to set up the more general results, I will now lay out the framework for uncertain choice as well and specify later what is assumed. To start, it will be useful to have a representation theorem for our incomplete agents.
**Imprecise Bayesianism (**[**Bradley 2017**](https://personal.lse.ac.uk/bradleyr/pdf/Decision%20theory%20with%20a%20human%20face.pdf)**, Theorem 37)** *Let*⪰*be a non-trivial preference relation on a complete and atomless Boolean algebra of prospects*Ω=⟨X,⊨⟩*, which has a minimal coherent extension on*Ω*that is both continuous and impartial.*[[8]](#fnhzkltiisi9d)*Then there exists a unique rationalising state of mind,*S={⟨Pi,Vi⟩}*, on*Ω*containing all pairs of probability and desirability measures jointly consistent with these preferences, in the sense that for all*α,β∈Ω∖{⊥}*,*
α⪰β⟺Vi(α)≥Vi(β)∀Vi∈S.
This theorem vindicates the claim that the state of mind of a broad class of rational agents with incomplete preferences can be represented by a *set* of pairs of probability and desirability functions. Although this theorem applies to imprecise credences, I’ll work with the special case of precise beliefs throughout. This will simplify the analysis. I’ll therefore use the following notation going forward: Z={i∈N:Vi∈S}.
Next, I define some conditions that will be invoked in some of the derivations.
**Definition 3** *Material Planning*: For a plan to specify choices across various contingencies, I formalise a planning conditional as follows. Let n be a natural node and nj∈N+(n) a possible realisation of it. Here, a plan
p(n)=S(n)∧⋀|N+(n)|j=1[S(nj)→S(zj)]
assigns a chosen continuation, zj∈N+(nj), to each possible realisation of n. When preferences are complete, a plan at a natural node is then evaluated as follows:
Vn(p)=∑|N+(n)|j=1Vn(zj)P(nj|p(n)).
This is a natural extension of [Jeffrey’s equation](https://plato.stanford.edu/entries/decision-theory/#JefThe). And when preferences are incomplete:
p⪰np′ iff Vin(p)≥Vin(p′)∀i∈Z.
This makes use of Bradley’s representation theorem for Imprecise Bayesianism.
**Definition 4** *Preference Stability (Incomplete)*: For all nodes na and nb and plans p where p(na)=p(nb), we have Vina(p)=Vinb(p)∀i∈Z.
**Definition 5** *Plan-Independent Probabilities (PIP)*: For any decision tree T in which n is a natural node, n′∈N+(n) is a realisation of n, and p∈Ω(T,n), P(n′|p(n))=P(n′|n).
The results can now be stated. I will include proofs of the central claims in the main text; others are relegated to the appendix. I suggest scrolling past them if you aren’t particularly surprised by the result.
2.2. Myopia
------------
Let’s begin with the simplest case of exploitation. We can in fact stay silent on which dynamic choice principle to employ here: even if our agent is myopic, it will never be vulnerable to forcing money pumps under certainty. This follows immediately from the fact that Suzumura consistency guarantees that the agent never has a strict preference in a cycle of weak preferences.
**Proposition 1** *Suzumura consistent agents myopically applying strong maximality are unexploitable under certainty.* [[Proof](https://www.lesswrong.com/posts/sHGxvJrBag7nhTQvb/invulnerable-incomplete-preferences-a-formal-statement-1#Appendix__Proofs).]
Two points are worth noting about the simple proof. First, it relies on strict preferences. This is because, if a money pump *must* go through a strongly maximal choice set with multiple elements (either due to indifference or appropriate incomparability), it is necessarily non-forcing. This will be the topic of the next section. Second, the proof does not rely on foresight. Myopia is an intentionally weak assumption that allows us to show that no knowledge of the future is required for Suzumura consistent agents to avoid exploitation via these forcing money pumps.
2.3. Naive Choice
------------------
Although Suzumura consistent agents using strong maximality can't be forced into money pumps, concern remains. Such an agent might still *incidentally* do worse by their own lights. That is, it remains vulnerable to non-forcing money pumps and thereby fails to be opportunistic. The single-souring money pump below is a purported example of this.
Figure 1. Adapted from Gustafsson (2022, Figure 9).The agent’s preferences satisfy A≻A− and B⋈/A,A−. Suppose that it’s myopic and uses strong maximality at each node. The agent begins with A at node 0 (if we let ‘down’ be the default). It is permitted, though not compelled, to go ‘up’ at node 0 instead (since B will become available), but also to go ‘up’ upon arrival at node 1 (since A−⋈/B). Suppose it in fact goes ‘up’ at both node 0 and at node 1. This would leave it with A−, which is strictly worse than what it began with. This money pump is ‘non-forcing’ because the agent’s preferences are also consistent with avoiding it.
The agent need not be myopic, however. It can plan ahead.[[9]](#fn55pbejubw92) To achieve opportunism for Suzumura consistent agents, I propose a choice rule which I’ll dub *Dynamic Strong Maximality* (DSM). DSM states that a plan p is permissible at node n just in case (a) it is strongly maximal at n and (b) no other plan is both strongly maximal at n and previously more choiceworthy than p. Formally,
**(Dynamic Strong Maximality)** p∈D(Ω(T,n)) iff
(a) p∈CSM(Ω(T,n)) and
(b) ∄p∗∈CSM(Ω(T,n)):p∗∈CSM(Ω(T,n0))∌p .
DSM is a slight refinement of strong maximality. Condition (b) simply offers a partial tie-breaking rule whenever the agent faces multiple choiceworthy prospects. So, importantly, it never asks the agent to pick counter-preferentially. An agent following *naive choice* with DSM will, at each node, look ahead in the decision tree, select their favourite trajectory using DSM, and embark on it. It can continually re-evaluate its plans using naive-DSM as time progresses and, as the following result establishes, the agent will thereby never end up with something worse than what it began with.
**Proposition 2 (Strong Dynamic Consistency Under Certainty via Naivety)** *Let*T*be an arbitrary tree where*n*is a choice node,*n′∈N+(n)*, and*p∈Ω(T,n)*is consistent with*S(n′)*. Then*p(n)∈D(Ω(T,n))*iff*p(n′)∈D(Ω(T,n′))*.* [[Proof](https://www.lesswrong.com/posts/sHGxvJrBag7nhTQvb/invulnerable-incomplete-preferences-a-formal-statement-1#Appendix__Proofs).]
Intuitively, this means that (i) if the agent now considers a trajectory acceptable, it will continue to do so as time passes, and (ii) if it at any future point considers some plan continuation acceptable, its past self would agree. It follows immediately that all and only the strongly maximal terminal nodes are reachable by agents choosing naively using DSM (derived as [Corollary 1](https://www.lesswrong.com/posts/sHGxvJrBag7nhTQvb/invulnerable-incomplete-preferences-a-formal-statement-1#Appendix:_Proofs)). This gives us opportunism: the agent will never pick a plan that’s strictly dominated some other available plan.
Under certainty, this result is unsurprising.[[10]](#fnvh83xvdhpj) What is less obvious is whether this also holds under uncertainty. I will say that a Bayesian decision tree exhibits *PIP-uncertainty* just in case the probability of any given event does not depend on what the agent plans to do after the event has occurred. We can now state the next result.
**Proposition 3 (Strong Dynamic Consistency Under PIP-Uncertainty via Naivety)**
*Let node*n*be non-terminal in decision tree*T*,*n′∈N+(n)*, and plan*p∈Ω(T,n)*be consistent with*S(n′)*. Assume Material Planning, Preference Stability, and Plan-Independent Probabilities. Then*p∈D(Ω(T,n))*iff*p∈D(Ω(T,n′))*.*
**Proof.** Lemma 3 establishes that p∈D(Ω(T,n)) implies p∈D(Ω(T,n′)). To prove the converse, suppose that p∈D(Ω(T,n′)). Node n was either a choice node or a natural node. If it was a choice node, then it follows immediately from Proposition 2 that p∈D(Ω(T,n)).
Now let n be a natural node. By Lemma 2, COpt(⋅)=CSM(⋅)=D(⋅) under coherent extension. So by Theorem 37 of Bradley (2017),
Vin′(p(n′))≥Vin′(p′(n′))∃i∈Z for all p′∈Ω(T,n′).
Let z′[q] denote the continuation selected by plan q upon reaching choice node n′. Thus
Vin′(z′[p])≥Vin′(z′[p′])∃i∈Z for all p′∈Ω(T,n′).
By Preference Stability,
Vin(z′[p])≥Vin(z′[p′])∃i∈Z for all p′∈Ω(T,n′). (1)
Notice that this implies that
Vin(z′[p])P(n′|n)≥Vin(z′[p′])P(n′|n)∃i∈Z for all p′∈Ω(T,n′). (2)
And by Plan-Independent Probabilities, this is equivalent to
Vin(z′[p])P(n′|p(n))≥Vin(z′[p′])P(n′|p′(n))∃i∈Z for all p′∈Ω(T,n′). (3)
By Material Planning, p∈CSM(Ω(T,n)) holds iff
∑jVin(zj[p])P(nj|p(n))≥∑jVin(zj[p′])P(nj|p′(n))∃i∈Z for all p′∈Ω(T,n′).
Using (2)-(3) this is equivalent to
Vin(z′[p])P(n′|n)+∑j≠j′Vin(zj[p])P(nj|n)≥Vin(z′[p′])P(n′|n)+∑j≠j′Vin(zj[p′])P(nj|n)
for some i∈Z and all p′∈Ω(T,n′). (4)
Formally, z′[p]=p(n′) omits description of p(nj) for all nj∈N+(n)∖{n′}. So, we can let p(nj)=p′(nj) at all remaining realisations nj. Then (4) reduces to
Vin(z′[p])P(n′|n)≥Vin(z′[p′])P(n′|n)
Vin(z′[p])≥Vin(z′[p′]). (The probability is nonzero as n′ is realised by assumption.)
This holds via (1), so p∈CSM(Ω(T,n)) and, by Lemma 2, p∈D(Ω(T,n)). ■
We have thereby shown that a naive DSM agent is strongly dynamically consistent in a very broad class of cases. Although these results are restricted to PIP-uncertainty, this applies to agents with complete preferences too. It’s just a result of the forward-looking nature of naive choice. In the next section, I will drop the PIP restriction by employing a different dynamic choice principle.
2.4. Sophisticated Choice
--------------------------
Sophistication is the standard principle in dynamic choice theory. It achieves dynamic consistency in standard cases by using backward induction to achieve the best feasible outcome. (This is also the method by which subgame perfect equilibria are found in dynamic games of perfect information.) However, backward induction is undefined whenever the prospects being compared are either incomparable or of equal value. Rabinowicz ([1995](https://www.jstor.org/stable/2941089)) proposed a ‘splitting procedure’ to address this, and Asheim ([1997](https://www.jstor.org/stable/2971722)) used a similar approach to refine subgame perfection in games.
According to this procedure, whenever there is a tie between multiple prospects, the agent will split the sequential decision problem into parts, where each part assumes that a particular prospect was chosen at that node. The agent then compares each part’s solution and picks its favourite as the solution to the grand decision problem. Intuitively, these agents will follow a plan as long as they lack “a positive reason for deviation” ([Rabinowicz 1997](https://link.springer.com/article/10.1023/A:1004920611437)). In other words, the agent will consider all the plans that would not make it locally act against their own preferences and, among those plans, proceed to pick the one with the best ex-ante outcome.
In the case of incomplete preferences, however, it turns out that strongly maximal sophisticated choice with splitting will *not* suffice to guarantee opportunism. The reason behind this is that strong maximality does not satisfy Set Expansion, and that backward induction makes local comparisons.[[11]](#fnd70xugugide)
**Proposition 4** *Strongly maximal sophisticated choice with splitting does not guarantee opportunism for Suzumura consistent agents.* [[Proof](https://www.lesswrong.com/posts/sHGxvJrBag7nhTQvb/invulnerable-incomplete-preferences-a-formal-statement-1#Appendix__Proofs).]
DSM, however, will suffice.
**Proposition 5 (Strong Dynamic Consistency Under Certainty via Sophistication)**
*Assume certainty and Suzumura consistency. Then DSM-based backward induction with splitting reaches a strongly maximal terminal node.* [[Proof](https://www.lesswrong.com/posts/sHGxvJrBag7nhTQvb/invulnerable-incomplete-preferences-a-formal-statement-1#Appendix__Proofs).]
More importantly, we can guarantee this property under uncertainty even without PIP.
**Proposition 6 (Strong Dynamic Consistency Under Uncertainty via Sophistication)**
*Let*T*be a Bayesian decision tree in which*n*is a non-terminal node,*n′∈N+(n)*is a successor, and*p∈Ω(T,n)*is consistent with*S(n′)*. Assume Material Planning and Preference Stability. Then with DSM-based backward induction (DSM-BI),*p*is permissible at*n*iff*p*is permissible at*n′*.*
**Proof.** We begin by formalising DSM-BI. For tree T, let X denote its terminal nodes, C its choice nodes, and N its natural nodes. Let I[n,A]={p∈Ω(T,n):p(n′)∈A} denote the set of plans available at n that are consistent with some associated plan continuation in A. We then define the tree’s permissible set of plans, Ψ(T), recursively:
1. ~Ψ(T,n)={S(n)} for n∈X.
2. ~Ψ(T,n)=D(I[n,Φ(n)]) where Φ(n)=⋃ni∈N+(n)~Ψ(T,ni) , for n∈C.
3. ~Ψ(T,n)=⋂ni∈N+(n)I[n,~Ψ(T,ni)] for n∈N.
The permissible set of plans is then given by Ψ(T)≜~Ψ(T,n0). Notice that this is the set of plans consistent with backward induction using DSM, and that splitting is implicit since each pruning of the decision tree is set-valued.
Now suppose that p∈~Ψ(T,n) for some n∈C.
Then p∈D(I[n,Φ(n)]) , where Φ(n)=⋃ni∈N+(n)~Ψ(T,ni).
So necessarily p∈I[n,Φ(n)]. By construction of I[⋅,Φ(⋅)], we know that p(n′)∈Φ(n).
And since Φ(n)=~Ψ(T,n′)∪⋃ni≠n′~Ψ(T,ni), we have p(n′)∈~Ψ(T,n′).
Next, suppose that p∈~Ψ(T,n) for some n∈N. Then p∈⋂ni∈N+(n)I[n,~Ψ(T,ni)] . So, for any particular n′∈N+(n), p∈I[n,~Ψ(T,n′)]. Therefore p(n′)∈~Ψ(T,n′) as needed.
Having established p∈~Ψ(T,n)⟹p(n′)∈~Ψ(T,n′), we proceed to the converse. (∗)
Notice that the only nodes reachable via sophisticated choice are those that are consistent with some plan p∈Ψ(T) which was initially permissible. Therefore, for any n′ that is reached, there must be some p∈Ω(T,n), consistent with n′ where n′∈N+(n), which satisfies p∈~Ψ(T,n).
Suppose p(n′)∈~Ψ(T,n′) for some reachable n′∈N. Whether n is a choice or a natural node, we know by construction of (2) and (3) that
~Ψ(T,n)⊆I[n,⋃ni∈N+(n)~Ψ(T,ni)].
Therefore, if n′ is reached, then the permissible plans at n′ must be continuations of plans that were permissible at n. Hence p(n)∈~Ψ(T,n).
Together with (∗), this establishes that p∈~Ψ(T,n) iff p(n′)∈~Ψ(T,n′). ■
This lets us achieve strong dynamic consistency even in cases where the probability of an event depends on what the agent plans to do after the event has occurred. An example of such a decision problem is *Sequential Transparent Newcomb*, as devised by Bryan Skyrms and Gerald Rothfus (cf [Rothfus 2020b](https://link.springer.com/article/10.1007/s11098-020-01415-0)). So, even in this very general setting, our agent’s incomplete preferences aren’t hindering its opportunism.
2.5. Resolute Choice
---------------------
A final canonical principle of dynamic choice is *resoluteness* as introduced by McClennen ([1990](https://www.cambridge.org/core/books/rationality-and-dynamic-choice/97BF3448ED05F4BB2515C3E739BCEB1A)). An informal version of this principle is often discussed in the AI alignment community under the name ‘[updatelessness](https://www.alignmentforum.org/tag/updateless-decision-theory)’. Briefly, a resolute agent will, first, pick its favourite trajectory now under the assumption that this plan will continue to be implemented as the agent moves through the decision tree. And, second, the agent will continually implement that plan, even if this makes it locally choose counter-preferentially at some future node. It respects its ex-ante commitments.
Under certainty or uncertainty, this is the easiest principle with which to guarantee the invulnerability of agents with incomplete preferences (relative to those with complete preferences). By construction, the agent never implements a plan that was suboptimal from an earlier perspective. I will therefore omit formal derivations of dynamic consistency for resolute choosers.[[12]](#fnj0uv9nirha)
3. The Trammelling Concern
===========================
A possible objection to the relevance of the results above is what I’ll call the *Trammelling Concern.* According to this objection, agents with incomplete preferences who adopt DSM as a dynamic choice rule will, in certain sequential choice problems, or over the course of sufficiently many and varied ones, eventually converge in behaviour to an agent with complete preferences using Optimality as a choice rule. This would be worrying since complete preferences and optimal choice resembles the kind of consequentialism that precludes Thornley-style corrigibility.
This section aims to quell such worries. I begin with a taxonomy of our agent’s possible doxastic attitudes towards decision trees. First, the set of non-terminal *nodes* of a given tree will either contain only choice nodes, or it will contain some natural nodes. Second, the *structure* of a given tree will either be entirely known to the agent, or it will not. If it is not known, the agent will either have beliefs about all its possible structures, or its credence function will not be defined over some possible tree structures. We will refer to these three cases as *certainty*, *uncertainty*, and *unawareness* about structure, respectively.[[13]](#fngj4o2jvnus) Finally, at least under unawareness, the possible tree structures may or may not include terminal nodes (prospects) that are also present in the tree structure currently being considered by the agent. I will say the prospects are *new* if so and *fixed* if not.
Table 1.This table summarises the situations our agent might find itself in and its cells indicate whether trammelling can occur in each case. This is based on the arguments and results below. The conclusions for (1)-(2) are based on conceptual considerations in section 3.1. Cases (3)-(4) are not discussed explicitly since uncertain tree structures can simply be represented as certain tree structures with natural nodes. The conclusions for (5) and (7) are based on formal results in section 3.2. For the sake of brevity, the effects of natural nodes under unawareness are left for later work.
3.1. Aware Choice
------------------
*This section was co-authored with* [*Eric*](https://www.linkedin.com/in/eric-chen-449317272/)[*Chen*](https://www.lesswrong.com/users/eric-chen)*.*
Let’s begin with an important distinction. As Bradley (2015) puts it, there’s a difference between "the choices that are *permissible* given the agent’s preferences, those that are *mandatory* and those that she *actually* makes" (p. 1). If an agent is indifferent between two options, for example, then it can be the case that (i) both are permissible, (ii) neither is mandatory, and (iii) a particular one is in fact chosen. One aspect of choice between incomparables that we need to preserve is that all were permissible ex-ante. The fact that one is in fact chosen, ex-post, is immaterial.
To see what this implies, consider first the case of certainty. Here we proved that DSM will result in only ex-ante strongly maximal plans being implemented (Proposition 2; Corollary 1; Proposition 5). Now consider the following toy case (Example 1). A Suzumura consistent agent can’t compare A and B. It’s offered a choice between the two. Suppose it chooses B. Does this constrain the agent to never pick A in future choices between A and B?
No. In fact, it’s a bit unclear what this would mean formally. Once we expand the decision tree to include future choices between A and B in some branches, everything boils down to the fact that all plans through this tree will result in either A or B. And so any plan will be permissible. What DSM constrains is which terminal nodes are reached; not how the agent gets there. Let’s see how far this reasoning can get us.
**Example 2** A Suzumura consistent agent’s preferences satisfy A≻A− and B⋈/A,A−. It’s offered a choice between going ‘down’ for A or going ‘up’ to face a choice between A− and B. It happens to go ‘up’. Does this constrain the agent not to pick A− over B?
This is a case of an agent ending up on some trajectory that is only consistent with one of its coherent extensions ('completions'). One might worry that, at this point, we can treat the agent as its unique extension, and that this denies us any kind of corrigible behaviour. But there is a subtle error here. It is indeed the case that, once the agent has reached a node consistent with only one extension, we can *predict* how it will act going forward via its uniquely completed preferences. But this is not worrisome. To see why, let’s look at two possibilities: one where the agent is still moving through the tree, and another where it has arrived at a terminal node.
First, if it is still progressing through the tree, then it is merely moving towards the node it picked at the start, by a process we are happy with. In Example 2, option A− was never strongly maximal in comparison to A and B. We knew, even before the agent went ‘up’, that it wouldn’t pick A−. So the agent is no more trammelled once it has gone ‘up’ than it was at the beginning.
For a more concrete case, picture a hungry and thirsty mule. For whatever reason, this particular mule is unable to compare a bale of hay to a bucket of water. Each one is then placed five metres from the mule in different directions. Now, even if the mule picks arbitrarily, we will still be able to predict which option it will end up at once we see the direction in which the mule is walking. But this is clearly not problematic. We wanted the mule to pick its plan arbitrarily, and it did.
Second, if the agent has reached the end of the tree, can we predict how it would act when presented with a new tree? No; further decisions just constitute a continuation of the tree. And this is just the case we described above. If it did not *expect* the continuation, then we are simply dealing with a Bayesian tree with natural nodes. In the case of PIP-uncertainty, we saw that DSM will once again only result in the ex-ante strongly maximal *plans* being implemented (Proposition 3).[[14]](#fnufpnjx93gqn) Here it's useful to recall that plans are formalised using conditionals:
p(n)=S(n)∧⋀|N+(n)|j=1[S(nj)→S(zj)], where nj∈N+(n) and zj∈N+(nj).
That is, plans specify what path to take at every contingency. The agent selects *plans*, not merely prospects, so no trammelling can occur that was not already baked into the initial choice of plans. The general observation is that, if many different plans are acceptable at the outset, DSM will permit the agent to follow through on any one of these; the ex-post behaviour will “look like” it strictly prefers one option, but this gets close to conflating what options are *mandatory* versus *actually chosen*. Ex-post behavioural trajectories do not uniquely determine the ex-ante permissibility of plans.
3.2. Unaware Choice
--------------------
The conceptual argument above does not apply straightforwardly to the case of unawareness. The agent may not even have considered the possibility of a certain prospect that it ends up facing, so we cannot simply appeal to ex-ante permissibility. This section provides some results on trammelling in this context. But, first, given the argument about Bayesian decision trees under certain structure, one may ask: what, substantively, is the difference between the realisation of a natural node and awareness growth (if all else is equal)? Would the behaviour of our agent just coincide? I don't think so.
**Example 3** Consider two cases. Again, preferences satisfy A≻A− and B⋈/A,A−.
Figure 2.Up top, we have a Bayesian tree. On the bottom, we have awareness growth, where each tree represents a different awareness context. In both, let’s suppose our agent went ‘up’ at node 0. In the Bayesian case, let’s also suppose the natural node (1) resolves such that the agent reaches node 2. In both diagrams, the agent ends up having to pick A− or B.
The permissible plans at the outset of the Bayesian tree are those that lead to A or to B (since A− is strictly dominated by A). But in the bottom case, it seems that our agent faces two distinct decision problems: one between A and B; one between A− and B. In the first problem, both options are permissible. And likewise in the second. So, the difference between these situations is that, in the latter case, our agent faces a *new decision problem*.
In the Bayesian case, our agent might end up at a future node in which the only available options are incomparable but nevertheless reliably picks one. But the reason for this is that another node, also available at the moment the agent picked its plan, was strictly preferred to one of these. The agent’s decision problem was between A, A−, and B. The arbitrary choice was between A and B. This is as it should be; there is no trammelling.
However, in the case of unawareness, the agent initially chose between A and B and now it must choose between A− and B. It never got to compare A to A− to B. It compared A to B and, separately, A− to B. Therefore, if it reliably picks B over A−, for the sake of opportunism, one could see this as a form of trammelling. I now turn to this issue.
For brevity, I will focus on cases 4 and 5 from the taxonomy above. In these worlds, all decision trees have choice nodes and terminal nodes, but they lack natural nodes. And our agent is unaware of some possible continuations of the tree they are facing. That is, it’s unaware of—places no credence on—the possibility that a terminal node is actually another choice node. But once it in fact gets to such a node, it will realise that it wasn’t terminal. There are then two ways this could play out: either the set of available prospects expands when the tree grows, or it doesn’t. Let’s consider these in turn. As we’ll see, the most salient case is under a certain form of expansion (section 3.2.3., Example 6).
### 3.2.1. Fixed Prospects (No Trammelling)
Here I’ll show that when prospects are fixed, opportunism and non-trammelling are preserved. We begin with an illustrative example.
**Example 4** The agent’s preferences still satisfy A≻A− and B⋈/A,A−.
Figure 3.The agent initially sees the top tree. DSM lets it pick A or B, arbitrarily. Suppose it in fact goes for A. Then its awareness grows and the tree now looks like the bottom one. It can stay put (go ‘down’ to A) or move ‘up’ and choose between B and A−. DSM still says to pick, arbitrarily, any path leading to A or to B. Suppose it goes for B. There—no trammelling.
Here’s the lesson. Our agent will embark on a path that’ll lead to a node that is DSM with respect to the initial tree. When its awareness grows, that option will *remain available* in the new tree. By Set Contraction ([Lemma 4](https://www.lesswrong.com/posts/sHGxvJrBag7nhTQvb/invulnerable-incomplete-preferences-a-formal-statement-1#Appendix:_Proofs)) this option will also *remain DSM* in the new tree. So, picking it will still satisfy opportunism. And regarding trammelling, it can still pick entirely arbitrarily between all initially-DSM nodes. DSM constrains which prospects are reached; not how to get there. We can state this as follows.
**Proposition 7** *Assume no natural nodes. Then, DSM-based choice under awareness growth will remain arbitrary whenever the set of available prospects is fixed.* [[Proof](https://www.lesswrong.com/posts/sHGxvJrBag7nhTQvb/invulnerable-incomplete-preferences-a-formal-statement-1#Appendix__Proofs).]
### 3.2.2. New Prospects (No Trammelling)
Now consider the case where the set of available prospects expands with the tree. It's of course possible that our agent gets to a terminal node that isn’t strongly maximal according to the new tree. This could happen if they get to an initially-acceptable terminal node and then realise that a now-inaccessible branch would let them access a prospect that’s strictly preferred to all those in the initial tree.[[15]](#fnr2zvmaagbir) But, importantly, this applies to agents with complete preferences too. It’s simply an unfortunate feature of the environment and the agent’s epistemic state. Complete preferences don’t help here.
But there is a class of cases in which completeness, at first glance, seems to help.
**Example 5** The agent’s preferences satisfy A≻A−∼B and A⋈/B (unlike before).
Figure 4.The agent sees the tree on the left. DSM lets it pick A or B, arbitrarily. Suppose it in fact goes for B. Now its awareness grows and the tree looks like the one on the right. It can stay put (go ‘down’ to B) or move ‘up’ to get A−.
Now, none of the available options are DSM in the full tree, but the agent still has to pick one. This, on its own, is not problematic since it could just be an unfortunate aspect of the situation (as dismissed above). But in this particular case, because an agent with complete preferences would satisfy transitivity, it would never go for B in the first place. This kind of issue wouldn’t occur if our agent’s incomplete preferences were transitive, rather than just strongly acyclic.
So let’s impose transitivity. Then our agent wouldn’t pick B in the first place. In general it will now remain opportunistic in the same way that its complete counterparts are. That’s because it would never initially choose arbitrarily between options that *all* its coherent extensions have a strict preference between. So it would never pick in a way that makes it realise that it made a foreseeable mistake once more options come into view. To operationalise this, I will say that an agent facing deterministic decision trees *fails* to be opportunistic under awareness growth just in case
1. The agent fails to reach a strongly maximal prospect in the grand tree, *and*
2. A strongly maximal prospect in the grand tree was available in the initial tree.
This agent *is* opportunistic otherwise.[[16]](#fnfsy0q5xojbp) We can now state the associated result.
**Proposition 8** *Assume transitive preferences and no natural nodes. Then, under all possible forms of awareness growth, naive choice via DSM is opportunistic.* [[Proof](https://www.lesswrong.com/posts/sHGxvJrBag7nhTQvb/invulnerable-incomplete-preferences-a-formal-statement-1#Appendix__Proofs).]
That gives us opportunism, but what about non-trammelling? Will our agent ever *have* to pick one option over another despite them being incomparable? Yes, in some cases.
### 3.2.2. New Prospects (Bounded Trammelling)
**Example 6** The agent’s preferences satisfy A≻A− and B⋈/A,A−.
Figure 4 (repeated).It faces the same trees as in the previous example. DSM first lets it pick A or B, arbitrarily. Suppose it goes for B. Now its awareness grows and the tree looks like the one on the right. It can stay put (go ‘down’ to B) or move ‘up’ to get A−. Since only B is DSM in the global tree, our agent might *reliably* pick B over A−, despite them being incomparable. Trammelled.
This is a special property of unawareness. Thankfully, however, we can bound the degree of trammelling. And I claim that this can be done in a rather satisfying way. To do this, let’s formally define an extension of DSM that will give us opportunism under awareness growth. Let T1 denote the initial tree, with initial node n0, and T2 the tree available after awareness growth, with initial node n0. And let T denote the appending of T2 on T1.
**(Global-DSM)** p∈G(Ω(T2,n0)) iff
(i) p∈CSM(Ω(T2,n0)) and
(ii) ∄p∗∈CSM(Ω(T2,n0)):p∗∈CSM(Ω(T,n0))∌p.
Notice that G-DSM doesn’t look at what happens at other previously-terminal nodes after awareness grows. These are unreachable and would leave the choice rule undefined in some cases. (This holds with complete preferences too.) This rule naturally extends opportunism to these cases; it says to never pick something if you knew that you could’ve had something better before. We can now state some results.
**Proposition 9 (Non-Trammelling I)** *Assume transitive preferences and no natural nodes. Then, under all possible forms of awareness growth, globally-DSM naive choice will remain arbitrary whenever the available prospects are (i) mutually incomparable and (ii) not strictly dominated by any prospect.* [[Proof](https://www.lesswrong.com/posts/sHGxvJrBag7nhTQvb/invulnerable-incomplete-preferences-a-formal-statement-1#Appendix__Proofs).]
That gives us a sufficient condition for arbitrary choice. To find out how often this will be satisfied, we define ‘comparability classes’ as subsets of prospects within which all are comparable and between which none are. A comparability-based partitioning of the prospects is possible when comparability is transitive.[[17]](#fnlvurdlxhnn) The following results then follow.
**Proposition 10 (Bounded Trammelling I)** *Assume that comparability and preference are transitive and that there are no natural nodes. Then, under all possible forms of awareness growth, there are at least as many prospects between which globally-DSM naive choice is guaranteed to be arbitrary as there are comparability classes.*
**Proof.** We first partition the (possibly uncountable) set of prospects X into (possibly uncountable) subsets of (possibly uncountable) mutually comparable prospects.
Given transitivity, ⋈ is an equivalence relation on X. Then, for any x∈X, we construct an equivalence class: [x]⋈≜{y∈X:y⋈x}. Call this set the *comparability class* of x.
This lets {[x]⋈}x∈X form a partition of X.
We identify the class-optimal prospects as follows: X∗≜⋃x∈XCOpt([x]⋈).
Let Ω(T2,n0)⊆X∗. Suppose, for contradiction, that G(Ω(T2,n0))≠Ω(T2,n0).
Since G is a choice function, G(A)⊆A for any set A. Then G(Ω(T2,n0))⊂Ω(T2,n0).
Then, by G-DSM, there exists some p∈Ω(T2,n0) such that
(i’) p∉CSM(Ω(T2,n0)) or
(ii’) ∃p∗∈CSM(Ω(T2,n0)):p∗∈CSM(Ω(T,n0))∌p.
Given transitivity, and since all elements of Ω(T2,n0) are mutually incomparable or indifferent, condition (i’) does not hold. Therefore p∉CSM(Ω(T,n0)).
By transitivity, this implies that p′≻p for some p′∈Ω(T,n0). (1)
But since p∈COpt([x]⋈) for some x∈X, we know that p∈CSM(X). (2)
By Set Contraction, Ω(T,n0)⊆X contradicts (1)-(2).
This establishes that G(Ω(T2,n0))=Ω(T2,n0) whenever Ω(T2,n0)⊆X∗.
The set of all prospects satisfying this is of size |X∗|≥|{[x]⋈}x∈X|, as needed. ■
**Corollary 2 (Bounded Trammelling II)** *Assume that comparability and preference are transitive and that there are no natural nodes. Then, whenever*k*class-optimal prospects are available, choice will be arbitrary between at least*k*prospects.* [[Proof](https://www.lesswrong.com/posts/sHGxvJrBag7nhTQvb/invulnerable-incomplete-preferences-a-formal-statement-1#Appendix__Proofs).]
3.3. Discussion
----------------
It seems we can’t guarantee non-trammelling in general and between all prospects. But we don’t need to guarantee this for all prospects to guarantee it for some, even under awareness growth. Indeed, as we’ve now shown, there are always prospects with respect to which the agent *never* gets trammelled, no matter how many choices it faces. In fact, whenever the tree expansion does not bring about new prospects, trammelling will never occur (Proposition 7). And even when it does, trammelling is bounded above by the number of comparability classes (Proposition 10).
And it’s intuitive why this would be: we’re simply picking out the best prospects in each class. For instance, suppose prospects were representable as pairs ⟨s,c⟩ that are comparable iff the s-values are the same, and then preferred to the extent that c is large. Then here’s the process: for each value of s, identify the options that maximise c. Put all of these in a set. Then choice between any options in that set will *always* remain arbitrary; never trammelled.
Three caveats are worth noting. First, sceptical readers may not agree with our initial treatment of (non-)trammelling under aware choice (i.e., known tree structures, section 3.1.). That section is based on a conceptual argument rather than formal results, so it should be evaluated accordingly. However, at least some of the results from the section on unawareness could be used to dampen many reasonable worries here. Whenever non-trammelling is satisfied under awareness growth with Global-DSM, it will likewise be satisfied, mutatis mutandis, by DSM when the tree structure is known.
Second, we have only considered part of the taxonomy described above. Due to time constraints, we left out discussion of unaware choice in trees with natural nodes. We suspect that extending the analysis to these kinds of cases would not meaningfully affect the main conclusions, but we hope to look into this in later work. Finally, we stopped short of providing a full characterisation of choice under unawareness. The literature has not satisfactorily achieved this even in the case of complete preferences, so this topic falls outside the scope of this article.
Conclusion
----------
With the right choice rule, we can guarantee the invlunerability—unexploitability and opportunism—of agents with incomplete preferences. I’ve proposed one such rule, Dynamic Strong Maximality, which nevertheless doesn’t ask agents to pick against their preferences. What’s more, the choice behaviour this rule induces is *not* representable as the agent having implicitly completed its preferences. Even under awareness growth, the extent to which the rule can effectively complete an agent’s implied preferences is permanently bounded above. And with the framework provided, it’s possible to make statements about which kinds of completions are possible, and in what cases.
This article aims to be somewhat self-contained. In future work, I’ll more concretely consider the implications of this for Thornley’s Incomplete Preference Proposal. In general, however, I claim that worries about whether a competent agent with preferential gaps would in practice (partially) complete its preferences need to engage with the particulars of the situation: the preference structure, the available decision trees, and so on. Full completion won't occur, so the relevant question is whether preferential gaps will disappear *in a way that matters*.
References
----------
Asheim, Geir. 1997. “Individual and Collective Time-Consistency.” *The Review of Economic Studies* 64, no. 3: 427–43.
Bales, Adam. 2023. “Will AI avoid exploitation? Artificial general intelligence and expected utility theory.” *Philosophical Studies*.
Bossert, Walter and Kotaro Suzumura. 2010. “Consistency, Choice, and Rationality.” *Harvard University Press*.
Bradley, Richard and Mareile Drechsler. 2014. “Types of Uncertainty.” *Erkenntnis* 79, no. 6: 1225–48.
Bradley, Richard. 2015. “A Note on Incompleteness, Transitivity and Suzumura Consistency.” In: Binder, C., Codognato, G., Teschl, M., Xu, Y. (eds) Individual and Collective Choice and Social Welfare. Studies in Choice and Welfare. Springer, Berlin, Heidelberg.
Bradley, Richard. 2017. *Decision Theory with a Human Face*. Cambridge: Cambridge University Press.
Gustafsson, Johan. 2022. *Money-Pump Arguments*. Elements in Decision Theory and Philosophy. Cambridge: Cambridge University Press.
Hammond, Peter J. 1988. “Consequentialist foundations for expected utility.” *Theory and Decision* 25, 25–78.
Huttegger, Simon and Gerard Rothfus. 2021. “Bradley Conditionals and Dynamic Choice.” *Synthese* 199 (3-4): 6585-6599.
Laibson, David and Yeeat Yariv. 2007. "Safety in Markets: An Impossibility Theorem for Dutch Books." Working Paper, *Department of Economics, Harvard University*.
McClennen, Edward. 1990. *Rationality and Dynamic Choice: Foundational Explorations*. Cambridge: Cambridge University Press.
Rabinowicz, Wlodek. 1995. “To Have One’s Cake and Eat It, Too: Sequential Choice and Expected-Utility Violations.” *The Journal of Philosophy* 92, no. 11: 586–620.
Rabinowicz, Wlodek. 1997. “On Seidenfeld‘s Criticism of Sophisticated Violations of the Independence Axiom.” *Theory and Decision* 43, 279–292.
Rothfus, Gerald. 2020a. “The Logic of Planning.” Doctoral dissertation, *University of California, Irvine*.
Rothfus, Gerald. 2020b. “Dynamic consistency in the logic of decision.” *Philosophical Studies* 177:3923–3934.
Suzumura, Kotaro. 1976. “Remarks on the Theory of Collective Choice.” *Economica* 43: 381-390.
Thornley, Elliott. 2023. “The Shutdown Problem: Two Theorems, Incomplete Preferences as a Solution.” *AI Alignment Awards*
Appendix: Proofs
----------------
**Proposition 1** *Suzumura consistent agents myopically applying strong maximality are unexploitable under certainty.*
**Proof.** By myopic strong maximality, whenever an agent is presented with a set of alternatives containing a prospect strictly preferred to all others, it is chosen. Suppose such a Suzumura consistent agent were forcibly money pumped: beginning with some α1 and ending with a strictly dispreferred α1−ϵ, with each choice resulting from a strict preference. Then there must be a set of prospects satisfying α1≺α2≺...≺αn−1≺αn where αn=α1−ϵ. This trivially implies that α1⪯α2⪯...⪯αn−1⪯α1−ϵ. By Strong Acyclicity, we have α1⊁α1−ϵ, a contradiction. ■
**Lemma 1 (Dynamic Consistency Under Certainty via Naivety)** *Let*T*be an arbitrary tree where*n*is a choice node,*n′∈N+(n)*, and*p∈Ω(T,n)*is consistent with*S(n′)*. Then*p(n)∈D(Ω(T,n))*implies*p(n′)∈D(Ω(T,n′))*.*
**Proof.** There are no natural nodes, so for any nodes m and m′ consistent with plan x, we have x(m)=x(m′). Therefore, by DSM, p(n)∈D(Ω(T,n)) is equivalent to
p(n′)∈CSM(Ω(T,n)) and
∄p∗∈CSM(Ω(T,n)):p∗∈CSM(Ω(T,n0))∌p(n′).
Noting that Ω(T,n′)⊆Ω(T,n) and that Strong Maximality satisfies Set Contraction (Bradley 2015), we know p(n′)∈CSM(Ω(T,n))⟹p(n′)∈CSM(Ω(T,n′)). So
p(n′)∈CSM(Ω(T,n′)). (1)
Any plan unavailable at n is also unavailable at n′, which implies that
∄p∗∈CSM(Ω(T,n′)):p∗∈CSM(Ω(T,n0))∌p(n′). (2)
By DSM, (1)-(2) is equivalent to p(n′)∈D(Ω(T,n′)). ■
**Proposition 2 (Strong Dynamic Consistency Under Certainty via Naivety)** *Let*T*be an arbitrary tree where*n*is a choice node,*n′∈N+(n)*, and*p∈Ω(T,n)*is consistent with*S(n′)*. Then*p(n)∈D(Ω(T,n))*iff*p(n′)∈D(Ω(T,n′))*.*
**Proof.** There are no natural nodes, so for any nodes m and m′ consistent with plan x, x(m)=x(m′). Suppose that p(n′)∈D(Ω(T,n′)) while p(n)∉D(Ω(T,n)).
Suppose p(n)∉CSM(Ω(T,n)). Then by Set Contraction, p(n)∉CSM(Ω(T,n′)). And since there are no natural nodes, p(n)=p(n′). Hence p(n′)∉D(Ω(T,n′)), a contradiction.
Suppose p(n)∈CSM(Ω(T,n)). By Theorem 2 of Bradley (2015), Strong Maximality is decisive. Therefore, there must be another plan p∗∈CSM(Ω(T,n)) that was initially more choiceworthy; that is, p∗∈CSM(Ω(T,n0))∌p(n). Again, p(n)=p(n′). This shows that p(n′)∉CSM(Ω(T,n0)). Hence p(n′)∉D(Ω(T,n′)), a contradiction.
We have a contradiction, so p(n′)∈D(Ω(T,n′))⟹p(n)∈D(Ω(T,n)) . Using Lemma 1, we have thereby shown that p(n)∈D(Ω(T,n)) iff p(n′)∈D(Ω(T,n′)). ■
**Corollary 1** *Under certainty, all and only strongly maximal terminal prospects are reachable by naive DSM.*
**Proof.** Recall that, under certainty, a plan can be identified by the conjunction of propositions describing the information state at each node of its continuation. A node always logically entails its preceding nodes, so a plan under certainty can simply be identified by its unique terminal node. Let X denote the set of propositions at the terminal nodes.
We can thereby establish, for part (a) of DSM, that CSM(Ω(T,n0))=CSM(X). Part (b) follows trivially at the initial node n0, so D(Ω(T,n0))=CSM(X). Proposition 2 establishes that under certainty, a plan is DSM iff its immediate continuation is DSM. By induction on continuation-consistent nodes, all Strongly Maximal terminal prospects are reachable by DSM. And since D(Ω(T,n))⊆D(Ω(T,n0))=CSM(X) for all nodes n that are consistent with a plan p∈D(Ω(T,n0)), we know that DSM reaches only Strongly Maximal terminal prospects. ■
**Lemma 2** *Under certainty and the coherent extension of a (Strongly Acyclic) preference relation, Dynamic Strong Maximality reduces to Optimality.*
**Proof.** By Theorem 1 of Bradley (2015), when a Strongly Acyclic preference relation is completed, Strong Maximality coincides with Optimality. DSM becomes equivalent to: p∈D(Ω(T,n)) iff
(a’) Plan p is Optimal at n. That is, p(n)⪰p′(n)∀p′∈Ω(T,n).
(b’) No Optimal plan was previously better: ∄p∗∈COpt(Ω(T,n)):p∗(n0)≻p(n0).
Let p,p′∈COpt(Ω(T,n)) and suppose that p′∈COpt(Ω(T,n0)). Then, because we have Ω(T,n)⊆Ω(T,n0), the Set Expansion property of Optimality implies p∈COpt(Ω(T,n0)), i.e., that p(n0)∼p′(n0). These n-Optimal plans were chosen arbitrarily, so this suffices to show that ∄p∗∈COpt(Ω(T,n)):p∗(n0)≻p(n0). Hence (b’) follows from (a’) under certainty. ■
**Lemma 3 (Dynamic Consistency Under PIP-Uncertainty via Naivety)** *Let*n*be a non-terminal node in Bayesain decision tree*T*,*n′∈N+(n)*, and plan*p∈Ω(T,n)*be consistent with*S(n′)*. Assume Material Planning, Preference Stability, and Plan-Independent Probabilities. Then*p∈D(Ω(T,n))*implies*p∈D(Ω(T,n′))*.*
**Proof.** Suppose that p∈D(Ω(T,n)). Node n is either a choice node or a natural node. If it is a choice node, then it follows immediately from Lemma 1 that p∈D(Ω(T,n′)). Now let n be a natural node.
Then by DSM, p∈CSM(Ω(T,n)). By Lemma 2, COpt(⋅)=CSM(⋅)=D(⋅) under coherent extension. Therefore, by Theorem 37 of Bradley (2017),
Vin(p)≥Vin(p′)∃i∈Z for all p′∈Ω(T,n). (1)
Suppose, for contradiction, that p∉D(Ω(T,n′)) for some n′∈N+(n). This implies that
¬[∀p′∈Ω(T,n′),∃i∈Z:Vin′(p(n′))≥Vin′(p′(n′))], i.e., that
∃p′∈Ω(T,n′):∀i∈Z,Vin′(p(n′))<Vin′(p′(n′)) .
Let z′[p∗] denote the continuation selected by plan p∗ upon reaching choice node n′.
Then we can re-write the above as Vin′(z′[p])<Vin′(z′[p′])∀i∈Z.
And by Preference Stability, Vin(z′[p])<Vin(z′[p′])∀i∈Z. (2)
By applying Material Planning to (1), we get, for all p′∈Ω(T,n),
∑jVin(zj[p])P(nj|p(n))≥∑jVin(zj[p′])P(nj|p′(n))∃i∈Z
and, by assuming Plan-Independent Probabilities,
∑jVin(zj[p])P(nj|n)≥∑jVin(zj[p′])P(nj|n)∃i∈Z. (3)
Let Z′ denote the set of completions according to which (3) holds.
Then Vn(zj[p])≥Vn(zj[p′])∀i∈Z′ for all j∈N|N+(n)|. That is because, if this failed to hold for any j, then plan p could be altered such that zj[p]=zj[p′]. But such Pareto improvements are unavailable since p is DSM at node n.
In particular, Vn(z′[p])≥Vn(z′[p′])∀i∈Z′ as z′∈{z1,...}. But since Z′⊆Z, condition (2) implies that Vin(z′[p])<Vin(z′[p′])∀i∈Z′, a contradiction. ■
**Proposition 4** *Strongly maximal sophisticated choice with splitting does not guarantee opportunism for Suzumura consistent agents.*
**Proof.** The proof is by counterexample. Consider a Suzumura consistent agent facing the following simple decision tree with preferences satisfying x≻y≻z and x⋈/z.
Figure 5.Proceeding via backward induction, the only permissible choice at node 1 is x. The agent then compares x and z at node 0. These are incomparable so the agent splits the problem and gets two partial solutions: x and z. The strongly maximal solutions are CSM({x,z})={x,z}. Although x is indeed permissible, the agent could incidentally end up with z despite having been able to reach y, a strictly preferred alternative. ■
**Remark.** Decision trees lack a natural assignment of the ‘default’ outcome; i.e., what the agent ‘starts out with’. In this case we can think of the agent as starting with z, and choosing whether to engage with a decision problem by going ‘up’ at node 0. Then we can claim that it is permissible, according to strongly maximal sophistication with splitting, for the agent to stay put at a strictly dispreferred node. The agent is therefore not opportunistic. It is worth noting that y is also impermissible according to planning-DSM as described above. The only permissible plans under DSM are CSM({x,y,z})={x}. But since x dominates y, a DSM agent is nevertheless opportunistic: it could not even incidentally pick a strictly dominated prospect.
**Lemma 4** *DSM satisfies Set Contraction.*
**Proof.** Recall Set Contraction: if α∈C(B) and α∈A⊆B then α∈C(A). Now let A,B be sets of plans in tree T. Suppose that α∈D(B) and α∈A⊆B. Then by DSM, α∈CSM(B). By Theorem 3 of Bradley (2015), CSM(⋅) satisfies Set Contraction, so
α∈CSM(A). (1)
By DSM, ∄α∗∈B:α∗∈CSM(Ω(T,n0))∌α. And since A⊆B, we also know that
∄α∗∈A:α∗∈CSM(Ω(T,n0))∌α. (2)
By DSM, (1)-(2) is equivalent to α∈D(A). ■
**Proposition 5 (Strong Dynamic Consistency Under Certainty via Sophistication)** *Assume certainty and Suzumura consistency. Then DSM-based backward induction with splitting (DSM-BIS) reaches a strongly maximal terminal node.*
**Proof.** We first establish that every strongly maximal terminal node is preserved under some split of DSM-BIS. Let X denote the terminal nodes of a tree and denote by m the number of final choice nodes. For any such node ni, its associated terminal nodes are N+(ni)⊆X. By Set Contraction, we know that for any x∈CSM(X):x∈N+(ni) we have x∈D(N+(ni)). And since ⋃iN+(ni)=X, this more generally implies that
∀x∈CSM(X),∃i∈Nm:x∈D(N+(ni)). (1)
Let ri=|D(N+(ni))|. Whenever ri>1 for some i∈Nm, the splitting procedure will induce at least ri separate trees. Each will initially preserve one element of the DSM set of terminal nodes following ni. We therefore know from (1) that, for any x∈CSM(X), there is some split which will initially preserve x. The splitting procedure is repeatedly nested, as needed, within each subtree created via DSM-BIS elimination, so by Set Contraction (Lemma 4), this likewise applies to all subsequent subtrees. With Nj+(⋅) denoting possible continuations under split j, we can now claim that
∀x∈CSM(X),∃j∈Nk:x∈D(Nj+(n0)).
Let Sj denote the partial solutions under split j. Then since Sj=D(Nj+(n0)), this establishes that ∀x∈CSM(X),∃j∈Nk:x∈Sj. And letting S=⋃jSj, we can state
∀x∈CSM(X),x∈S, i.e. CSM(X)⊆S. (2)
Because we also have S⊆X, Set Contraction implies that
CSM(X)⊆CSM(S). (3)
DSM-BIS then selects among D(S). Since D(⋅)⊆CSM(⋅), we know from (2) that
∀x∈D(X),x∈S.
So by Set Contraction, ∀x∈D(X),x∈D(S). Clearly, D(X)=CSM(X) since X are all the terminal nodes. Therefore ∀x∈CSM(X),x∈D(S). This establishes that
CSM(X)⊆D(S), (4)
i.e., that all strongly maximal terminal nodes are DSM at S. To show the other direction, suppose for contradiction that ∃x′∈D(S):x′∉CSM(X).
Since x′ is not a strongly maximal terminal node, we know by the decisiveness of strong maximality that ∃x∗∈X∖{x′}:x∗∈CSM(X). By (3)-(4), this x∗ must be a member of both CSM(S) and D(S)=CSM(X). So by condition (b) of DSM, x′∉D(S), a contradiction.
Therefore CSM(X)⊇D(S), which with (4) implies that D(S)=CSM(X). ■
**Proposition 7** *Assume no natural nodes. Then, DSM-based choice under awareness growth will remain arbitrary whenever the set of available prospects is fixed.*
**Proof.** Let Ω(T1,n0) denote the initially available prospects. Upon reaching n0 the agent’s awareness grows and it faces Ω(T2,n0) . Suppose Ω(T2,n0)⊆Ω(T1,n0) .
By Set Contraction, CSM(Ω(T2,n0))⊆CSM(Ω(T1,n0)).
Choice was arbitrary within CSM(Ω(T1,n0)), and will be arbitrary in CSM(Ω(T2,n0)).
Therefore, we know that for any x,y∈Ω(T1,n0) between which choice was arbitrary, choice will remain arbitrary between x and y whenever x,y∈Ω(T2,n0). ■
**Proposition 8** *Assume transitive preferences and no natural nodes. Then, under all possible forms of awareness growth, naive choice via G-DSM is opportunistic.*
**Proof.** Let X denote the set of all prospects over which the agent’s preference relation is defined (i.e., candidates for terminal nodes). Let X1 denote the terminal nodes reachable in the initial tree T1; X2 the terminal nodes reachable (once awareness grows) in the new tree T2 at the previously chosen node n0; and XG the set of all terminal nodes (reachable or not) in the grand tree TG.
Since there are no natural nodes, X1=Ω(T1,n0) and X2=Ω(T2,n0)=Ω(T,n0) according to the agent’s doxastic state at those nodes. By Theorem 1 of Bradley (2015) strong maximality and maximality coincide under transitivity. We can therefore use maximality going forward.
Node n0 is, by construction, chosen via DSM in T1. Therefore, by Proposition 2, DSM guarantees that p(n0)∈CMax(Ω(T1,n0)) for any p∈Ω(T1,n0) consistent with S(n0).
(Note: with some abuse of notation, I will apply choice functions and preference relations to terminal nodes. This should be interpreted as attitudes towards their associated prospects.)
Awareness then grows and the agent faces T2. Consider two cases.
Case 1: (X1∪X2)∩CMax(XG)=∅. Here the best nodes are not the initially terminal nodes (nor are they accessible after reaching n0). Therefore, no strongly maximal prospect in the grand tree was available in the initial tree. So, opportunism is satisfied automatically.
Case 2: (X1∪X2)∩CMax(XG)≠∅. This is the situation in which the best nodes are either in the initial tree or the new tree. Notice that XG=X1∪X2∪X3 for some set X3 (the now-inaccessible new terminal nodes). Sub-cases:
If X1∩CMax(XG)≠∅, then by Set Expansion, n0∈CMax(XG). And so for any x∈XG, x⊁n0. Since n0∈X2, by Set Contraction, n0∈CMax(X2). For any y∈CMax(X2), y⊀n0. So, for all y∈CMax(X2), x⊁y for any x∈XG. Whatever is in fact chosen after awareness growth, it will not be strictly dominated by anything in the grand tree.
If X1∩CMax(XG)=∅, then necessarily X2∩CMax(XG)≠∅. Then for some x∈CMax(XG), x∈X2. By Set Contraction, x∈CMax(X2) for all such x. Hence CMax(X2)∩CMax(XG)≠∅.
Since ∃x∈CMax(X2)∩CMax(XG), by transitivity, we know that for any y∈CMax(XG) and z∈CMax(X2), y⊁z. So, whatever is in fact chosen after awareness growth, it will not be strictly dominated by anything in the grand tree.
Therefore, CMax(X2)⊆CMax(XG) . So only maximal prospects in the grand tree are reached. ■
**Proposition 9 (Non-Trammelling I)** *Assume transitive preferences and no natural nodes. Then, under all possible forms of awareness growth, globally-DSM choice will remain arbitrary whenever the available prospects are (i) mutually incomparable and (ii) not strictly dominated by any prospect.*
**Proof.** Let X denote the set of all prospects over which the agent’s preference relation is defined (i.e., candidates for terminal nodes). Our agent reaches node n0, its awareness grows, and faces mutually incomparable prospects Ω(T2,n0)⊆X (there are no natural nodes). Necessarily, then, CSM(Ω(T2,n0))=Ω(T2,n0).
Suppose, also, that these are not strictly dominated: ∀p∈Ω(T2,n0),∄q∈X:q≻p . (∗)
Then given transitivity, Ω(T2,n0)⊆CSM(X). Before awareness growth, the available prospects were Ω(T1,n0)⊆X.
For globally-DSM choice between incomparables to be non-arbitrary, it must be the case that G(Ω(T2,n0))⊂CSM(Ω(T2,n0)), i.e., that some prospects in Ω(T2,n0) are more choiceworthy than others at the grand tree T’s initial node n0. Since there are no natural nodes, this is equivalent to saying that
p∈CSM(Ω(T,n0))∌p′ for some p,p′∈Ω(T2,n0).
Given transitivity, this implies that ∃q∈Ω(T,n0):q≻p′. Since Ω(T,n0)⊆X, this contradicts (∗). ■
**Corollary 2 (Bounded Trammelling II)** *Assume that comparability and preference are transitive and that there are no natural nodes. Then, whenever*k*class-optimal prospects are available, choice will be arbitrary between at least*k*prospects.*
**Proof.** Given transitivity, [x]⋈≜{y∈X:y⋈x} is an equivalence class.
We first show that CMax(X)=⋃x∈XCOpt([x]⋈).
Suppose a∈CMax(X). Then ∄b∈X:b≻a. Hence ∄c∈[a]⋈:c≻a. Given that all elements in a class are comparable, a⪰b∀b∈[a]⋈, i.e., a∈COpt([a]⋈). Therefore a∈⋃x∈XCOpt([x]⋈), establishing CMax(X)⊆⋃x∈XCOpt([x]⋈).
Suppose a∈⋃x∈XCOpt([x]⋈). Then since ether [x]⋈=[y]⋈ or [x]⋈∩[y]⋈=∅ for all x,y∈X, we know a∈COpt([a]⋈). Hence ∀b∈X:b⋈a,a⪰b. This implies b⊁a.
And ∀c∈X:c⋈/a,c⊁a. Therefore ∀x∈X,x⊁a meaning a∈CMax(X).
So ⋃x∈XCOpt([x]⋈)⊆CMax(X) . This establishes that the maximal set and the class-optimal prospects coincide.
Next, suppose A⊆CMax(X) and A⊆Ω(T,n) at some node n in deterministic tree T.
Let k=|A| and suppose, for contradiction, that choice at n is arbitrary between fewer than k prospects. Then |G(Ω(T,n))|<k.
Under preference transitivity, this implies that ∃a∈A,∃b∈Ω(T,n):b≻a. (∗)
But since a∈CMax(X) for all a∈A, and Ω(T,n)⊆X, Set Contraction implies that a∈CMax(Ω(T,n)). Hence ∄b∈Ω(T,n):b≻a, contradicting (∗). ■
1. **[^](#fnref3m1r2zyk7s5)**Nate Soares has also [remarked](https://www.alignmentawards.com/winners#background), regarding Thornley’s proposal, that the consequences of violating the von Neumann-Morgenstern (vNM) axioms should be explored. This is one such exploration.
Though I will note that, in any case, the vNM axiomatisation is plausibly not the relevant baseline here. Instead it’d be the subjective representation theorems from Savage, Bolker-Jeffrey, Joyce, etc. And as we’ll see below, the Jeffrey framework has been usefully extended to the case of incompleteness.
2. **[^](#fnreftinxlqvh5gl)**Though see Bales ([2023](https://link.springer.com/article/10.1007/s11098-023-02023-4)) and Laibson & Yariv ([2007](http://lyariv.mycpanel.princeton.edu/papers/DutchBooks.pdf)) for reasons to be sceptical of unqualified versions of this claim.
3. **[^](#fnrefj21kg28mi9h)**Opportunism implies unexploitability, so invulnerability is in fact equivalent to opportunism.
4. **[^](#fnref0luewbcqek9)**I will often refer to various properties of choice functions. For those unfamiliar with these, I suggest [Richard Bradley’s introductory text on decision theory](https://personal.lse.ac.uk/bradleyr/pdf/Handbook%20-%20Decision%20theory%20(revised).pdf), section 4.2.
5. **[^](#fnref1qvo9b7x78t)**Notation:
⪰ (weak preference) is a reflexive binary relation on a set of prospects.
∼ (indifference) is the symmetric part of ⪰.
≻ (strict preference) is the anti-symmetric part of ⪰.
⋈ (comparability) is a relation satisfying α⋈β iff α⪰β or β⪰α.
A slash through a relation denotes its negation (e.g., α⋈/β iff α and β are incomparable).
6. **[^](#fnrefm5fwt63x78s)**C(⋅) is *decisive* iff C(X) is always nonempty (when X is nonempty).
7. **[^](#fnrefnh3nxfrjlg)**The conditional ‘→’ is operationalised below in Definition 3. It will differ from the Bradley conditional used in Huttegger and Rothfus ([2021](http://gerardrothfus.com/wp-content/uploads/2021/05/Huttegger-Rothfus2021_Article_BradleyConditionalsAndDynamicC.pdf)).
8. **[^](#fnrefhzkltiisi9d)**Explaining the conditions:
*Non-triviality*: ∃α∈Ω:α⪰⊤, where ⊤ is the tautology.
⪰ has a *minimal coherent extension*, ⪰′, on Ω iff (i) α⪰β⟹α⪰′β, (ii) α≻β⟹α≻′β, and (iii) ⪰′ is complete and transitive. Indeed, ⪰ is strongly acyclic by Theorem 2 of Bradley (2015).
*Impartiality*: Suppose α∼β and for some γ≁α such that αγ=⊥=βγ, we have (α∨γ)∼(β∨γ). Then for all such γ, (α∨γ)∼(β∨γ).
*Continuity*: Let {α1,α2,...} be a countable set such that α1⊨α2⊨.... Suppose β⪰αi and αi⪰γ for all i. Then β⪰⋃i{αi} and ⋃i{αi}⪰γ.
A *desirability measure* V satisfies V(⊤)=0 (Normality) and ∀α,β∈Ω∖{⊥}:α∧β=⊥,α⪰β⟺α⪰(α∨β)⪰β (Averaging).
9. **[^](#fnref55pbejubw92)**In section 3.2. I consider issues related to what happens if it can only plan ahead to a limited extent (or not at all).
10. **[^](#fnrefvh83xvdhpj)**Note that the notion of ‘certainty’ used here rules out Sequential Transparent Newcomb ([Rothfus 2020b](https://link.springer.com/article/10.1007/s11098-020-01415-0)) with a perfect predictor. The results in section 2.4. do not.
11. **[^](#fnrefd70xugugide)**C(⋅) satisfies Set Expansion iff whenever B⊆A∋α, then α,β∈C(B) and β∈C(A) implies α∈C(A).
12. **[^](#fnrefj0uv9nirha)**Note that, below, I will consider the case of unawareness. For discussion of resolute choice under awareness growth, I’ll refer to Macé et al ([2023](https://www.lesswrong.com/posts/uPWDwFJnxLaDiyv4M/open-minded-updatelessness)).
13. **[^](#fnrefgj4o2jvnus)**The latter is sometimes known as *state space uncertainty*; see Bradley and Drechsler ([2014](http://eprints.lse.ac.uk/52609/1/__lse.ac.uk_storage_LIBRARY_Secondary_libfile_shared_repository_Content_Bradley%2C%20R_Types%20uncertainty_Bradley_Types%20uncertainty_2014.pdf)).
14. **[^](#fnrefufpnjx93gqn)**In the general case of uncertainty, the agent is able to only implement the strongly maximal feasible plans (Proposition 6).
15. **[^](#fnrefr2zvmaagbir)**Here’s an example. Suppose the agent can choose between going ‘up’ and ‘down’. If it goes ‘up’, it gets A. If it goes ‘down’, it gets to choose between A+ and B. (This is the structure of a single-sweetening money pump.) If the agent is *aware* of the tree structure, then naive DSM choice lets it go for either ‘down-A+’ or ‘down-B’.
If the agent is *unaware* and thinks that ‘down’ leads to B only, then naive DSM choice lets it initially go ‘up’ (A) or ‘down’ (B). If it goes ‘down’, it can pick arbitrarily and opportunistically between A+ and B once its awareness grows. But if it goes ‘up’ it will realise, after awareness growth, that it could’ve done better. The problem in the latter case, however, is centrally a result of unawareness, not of incompleteness. An agent with complete preferences satisfying A+≻A∼B will also choose arbitrarily initially and regret their decision if they went ‘up’.
16. **[^](#fnreffsy0q5xojbp)**Example 5 is a violation of opportunism under awareness growth. This is because the agent fails to arrive at a maximal node even though the growth of the tree did not bring about any prospects that are strictly preferred to the previously present ones.
17. **[^](#fnreflvurdlxhnn)**This requirement is not in general equivalent to the transitivity of weak preference. |
b42209a2-560e-45cb-8d46-b3583e056348 | trentmkelly/LessWrong-43k | LessWrong | Not all communication is manipulation: Chaperones don't manipulate proteins
Epistemic status: Original work, explanation of a mental model that I developed for a few years that brings together knowledge from existing fields.
Is all communication manipulation? I hear this sentiment frequently expressed and want to explain in this article that there’s nonmanipulative communication by using protein folding as an intuition pump.
It is common knowledge within molecular biology that proteins fold into their native state. That native state is the folded shape that possesses a minimum of free energy. Finding global minima is however a hard problem. For bigger proteins, it's at the time of writing - still impossible to calculate the shape.
Even in vivo protein folding is a hard problem. Cells are densely packed with many different molecules that push against each other. Frequently, resources are wasted when a protein misfolds into a shape that's not its native state.
Nature is clever and developed a way to help proteins fold into their native state. Cells produce chaperones. A chaperone surrounds an unfolded protein to protect it from outside influences to help the protein to fold into its native state. A chaperone doesn't need to know the native state of a protein to help the protein fold into that state. Instead of manipulating the protein like a sculpture, it holds space for a protein to be safe from outside influences, while it folds into its native form.
This allows a chaperone that works in an uncomplicated way to achieve a result that very complex machine learning algorithms currently don't achieve. The machine learning algorithm tries to figure out the best way for the protein to fold while the chaperone just lets the protein find this way by itself.
The psychologist Carl Rogers advocated that good psychologists act in the same way nonmanipulative with their patients. In his view, it's not the job of the therapist to solve the problem of their patient by manipulating the patient into a healthy form. A good therapist isn’t like ta scul |
f5ba37cb-699b-464f-b1ef-aaae06b896a6 | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post2038
Credit to Adam Shimi, Alex Flint, and Rob Miles for discussions, counterexamples, and general input to the ideas here. Quick recap for anyone who didn’t read the hundred-page Eliciting Latent Knowledge document : We have a diamond in a vault, with a bunch of automated theft-defenses. We train a predictor to take in the vault’s video-stream and a plan for the vault’s actuators, and predict future video frames. We train a planner to find plans which the predictor predicts will end with the video feed still showing a diamond in the vault. We want some way for a human to probe the latent knowledge of the predictor, e.g. to check if the predictor expects a screen showing a diamond will be placed in front of the camera. The ELK document largely discussed the human running natural-language queries on the predictor’s latent knowledge, but we’re going to mostly ignore that part, because it’s harder than everything else about the problem combined and I don’t think it’s actually that central. This post will focus mainly on the specific question of whether there’s a screen in front of the camera in the diamond problem. I expect that the approaches and counterexamples here extend to more general problems of detecting when a plan is “messing with the sensors”, i.e. doing things which decouple observables from the parts of the world we’re actually interested in. Revisiting Holdout Data The first approach we’ll discuss is basically “just use holdout data”, but with two major changes compared to previous suggestions . First, this proposal does not involve training a natural-language “reporter” at all. (IMO the whole idea of training an honest natural language reporter is just as hard as training an honest natural language question-answerer in general, for basically the same reasons; having a predictor’s internal computations on hand does not make it easier in any important way.) Using holdout data to train an honest reporter would be pretty hopeless, but that’s not what we’re trying to do here. We’re just trying to tell whether the predictor expects sensor-tampering from the planner. For that purpose, there’s an obvious way to detect sensor tampering: cross-check the (predicted) holdout data with the (predicted) non-holdout sensor data. If they don’t match, that’s evidence of (predicted) sensor tampering. The danger, of course, is that the planner will either find and tamper with all of our sensors, or fake sensor readings. That brings us to the second major change: while the planner will inevitably be incentivized to tamper with the non-holdout sensors, the obvious setup for the planner yields no incentive to trick a human; there’s no incentive to be particularly subtle about the sensor-tampering, and in particular no incentive to fool the holdout sensors. Formal Setup Some variable names: O b s ≤ n = ( O b s 1 , … , O b s n ) are non-holdout observations from the sensors before the plan begins A ∗ are the actions constituting the plan O b s > n = ( O b s n + 1 , … ) are non-holdout observations from the sensors after the plan H o l d = ( H o l d 1 , … ) are holdout observations (held out from the planner during planning, not from the predictor during training) We’ll assume that the predictor is a generative model for all observations including holdout, so it effectively encodes a distribution P [ O b s , H o l d | d o ( A ∗ ) ] . We’ll also assume that the actions have no causal influence on observations before the plan begins, and that the predictor “knows this” (i.e. P [ O b s ≤ n | d o ( A ∗ ) ] = P [ O b s ≤ n ] ). We’ll assume that the planner maximizes some expected utility function of the later non-holdout observations given the earlier non-holdout observations: A^* = argmax_A E[u(Obs_{>n})| do(A^* = A), Obs_{\leq n}] … where the distribution used is the predictor’s model. Key thing to notice: the holdout observations are not explicitly involved at all in the planner’s evaluation. If we have one plan A 1 which tampers with only the non-holdout sensors to fake the appearance of the diamond, and another plan A 2 which tampers with all the sensors to fake the appearance of the diamond, the planner is indifferent between A 1 and A 2 . So long as the non-holdout sensor readings are the same, the planner has no incentive to also fake the holdout sensor readings. Furthermore, if there is ever any tradeoff whatsoever between faking the holdout sensor readings and anything the planner cares about, then it will not fake the holdout sensor readings. In particular, if faking the holdout sensor readings requires resources which could instead be spent better/more reliably faking the non-holdout readings, then the AI will not fake the holdout readings. But What If There’s An External Adversary? The holdout sensor approach is probably sufficient to detect sensor-tampering by the planner itself . But an external adversary (like a thief, for instance) might want to fool the human overseer, so they might actually want to screen all the sensors. What can we do against a powerful thief? And, while we’re at it, what can’t we do? Thief With Limited Knowledge of Diamond If the thief has limited information about the diamond at the time that the screen goes up , then we can use a variant of the holdout approach. Imagine, for instance, that the diamond has a tiny imperfection on it which the thief does not know about. Then, when the thief puts up the screen, the screen will presumably show a diamond without that imperfection. By comparing it to sensor readings from before the screen went up, we should be able to detect the change. More generally, we can see how much details of the earlier-diamond correlate with details of the later-diamond (under the predictor's distribution), to detect whether the predictor expects a screen to go up. (I call this a variant of the holdout approach because we’re effectively using earlier sensor readings as the holdout data. If we’re also worried about the planner messing with the sensors, we can combine this directly with our earlier approach.) One potential counterexample for this: if the thief has put up a screen even before the episode began, then we may not have any sensor readings of the real diamond. In this case, we’re “starting in the Matrix”, and the thief can have the screens perfectly emulate whatever would happen in the real world; thus the “start in the Matrix” case cannot be handled even in principle. We need some real sensor readings. The more interesting counterexample is, of course, that the thief has arbitrarily perfect knowledge of the diamond - possibly by using the real diamond to generate the screened sensor readings. More on that later. But first, a barrier. Thief With Access To Predictor/Planner/Reporter System If the thief has access to the predictor/planner/reporter system, then we can apply a good old diagonalization argument. The thief can run the predictor, see what it predicts, and do something else. Or the thief can run the reporter, see whether it says that the diamond is stolen in the predictor’s scenario, and do the opposite. Takeaway: if the thief has full system access, then we may not be able to solve ELK even in principle (though this does depend somewhat on exactly what guarantees we want). So we probably want to just assume that any potential thief does not have access to the predictor/planner/reporter system directly. (For the full-blown alignment problem I would not endorse this assumption, but ELK is intended to be narrow.) Thief With Access To The Real Diamond That leaves a gap between cases in which the thief has access to the predictor/planner/reporter (which run into the diagonalization barrier), and cases in which the thief doesn’t even have detailed information about the diamond (in which case we can solve the problem). In between those, the thief has arbitrarily good information about the diamond, but does not have access to the predictor/planner/reporter. For this case, I do not have a solution, but I do expect that it’s solvable-in-principle. The main difficulty in these cases is that the P [ O b s | d o ( A ∗ ) ] may be exactly the same between action-plans A ∗ which result in the diamond being stolen, and action plans which do not. The thief has enough information to make the predicted sensor readings completely identical between the two cases. The reason the problem might still be solvable-in-principle is that the actions A ∗ still give us information about whether the diamond was stolen. But we somehow have to extract that information in a way which does not require any difference whatsoever between the observation-distributions in the two cases. Furthermore, bear in mind that the predictor may not even calculate internally whether the diamond is actually stolen, in this case. After all, it does not need that information in order to predict the sensor readings; the readings will be identical either way. Even then, I expect it’s possible-in-principle to solve the problem, but the solution starts to look less like “extract the predictor’s latent knowledge” and more like “extract the predictor’s latent world-model and then calculate what latent predictions that world-model would make”. I speculate a little bit about that in another post . |
f3c6b6d5-372a-43e6-a36a-40feb372c0de | trentmkelly/LessWrong-43k | LessWrong | Maximizing Cost-effectiveness via Critical Inquiry
I am cross-posting this GiveWell Blog post, a followup to an earlier cross-post I made. Here I provide a slightly more fleshed-out model that helps clarify the implications of Bayesian adjustments to cost-effectiveness estimates. It illustrates how it can be rational to take a "threshold" approach to cost-effectiveness, asking that actions/donations meet a minimum bar for estimated cost-effectiveness but otherwise focusing on robustness of evidence rather than magnitude of estimated impact.
We've recently been writing about the shortcomings of formal cost-effectiveness estimation (i.e., trying to estimate how much good, as measured in lives saved, DALYs or other units, is accomplished per dollar spent). After conceptually arguing that cost-effectiveness estimates can't be taken literally when they are not robust, we found major problems in one of the most prominent sources of cost-effectiveness estimates for aid, and generalized from these problems to discuss major hurdles to usefulness faced by the endeavor of formal cost-effectiveness estimation.
Despite these misgivings, we would be determined to make cost-effectiveness estimates work, if we thought this were the only way to figure out how to allocate resources for maximal impact. But we don't. This post argues that when information quality is poor, the best way to maximize cost-effectiveness is to examine charities from as many different angles as possible - looking for ways in which their stories can be checked against reality - and support the charities that have a combination of reasonably high estimated cost-effectiveness and maximally robust evidence. This is the approach GiveWell has taken since our inception, and it is more similar to investigative journalism or early-stage research (other domains in which people look for surprising but valid claims in low-information environments) than to formal estimation of numerical quantities.
The rest of this post
* Conceptually illustrates (using the mat |
9b905aa2-9ad4-45a9-bbe9-5c1e06f5c4be | StampyAI/alignment-research-dataset/lesswrong | LessWrong | How might cryptocurrencies affect AGI timelines?
Summary: Deflation (or so I heard) is considered harmful because it stifles growth. Central banks have been fighting it to keep economies healthy. Cryptocurrencies can be designed to be deflationary. If the market capitalization of cryptocurrencies becomes big, central banks may have no way to contain the deflation. This may be catastrophic in many ways – but might it also slow AGI development and buy safety research more time?
I’ve been wondering what the “crypto endgame” may look like. Crypto may just turn out to have been a bubble or may continue for decades unchanged at its current limited level of significance. But we’re probably sufficiently prepared for those scenarios already.
Instead the crypto market capitalization might take off at a superlinear pace. Bitcoin is currently on [rank 14](https://fiatmarketcap.com/) among the world’s currencies and holds 0.7% of the aggregate value of all of them. [All cryptocurrencies together](https://www.coingecko.com/) hold about 1.1% of the world’s monetary value.
The crypto space is rife with network effects. Or it is one huge network effect. These are wont to show superlinear growth if they show any growth. Superlinear growth often doesn’t last long enough to be significant, but in other cases it does. So maybe crypto’s market capitalization will grow by an order of magnitude or more within a few years, so before AGI is developed. That might happen if more more companies follow the examples of MicroStrategy or Tesla and invest their spare cash into a cryptocurrency. Phil Bonello, director of research at Grayscale, also [mentions a scenario](https://www.youtube.com/watch?v=MK47miQCt9E) in which governments start buying crypto and thereby force all other governments to follow suit. I don’t know the reasoning behind that, so I don’t know how forcible that effect will be, but it doesn’t seem to be hard to come up with plausible growth scenarios. (At least if you’re incentivized to come up with them the way Grayscale is.)
Some cryptocurrencies, notably Bitcoin, are designed to be deflationary. If companies across the board find that they generate greater profits by investing all their spare cash into deflationary cryptocurrencies rather than reinvesting into their own production, innovation may stall. (Isn’t that one of the typical reasons why deflation is considered harmful?)
So that brings me to my questions:
1. Conditional on the cryptocurrency market growing by another 1–2 orders of magnitude in the next five years or so, what happens to the global financial system and the world economy? (Might this future look like a cyberpunk dystopia where a rich elite retires on their capital gains, and anyone who didn’t enter the period with enough capital has very few avenues to sell their labor and so lives in poverty?)
2. And might this stalled innovation also carry over to AGI in that technology companies will find it more profitable to invest into cryptocurrencies rather than compute or staff, thereby buying safety research a few more years or decades to solve ethics, decision theory, and international coordination? |
2960799c-44e1-4fd0-b045-4dbf4527e361 | trentmkelly/LessWrong-43k | LessWrong | How I talk to those above me
Now and then, at work, we’ll have a CEO on from the megacorp that owns the company I work at. It’s a Zoom meeting with like 300 people, the guy is usually giving a speech that is harmless and nice (if a bit banal), and I’ll turn on my camera and ask a question about something that I feel is not going well.
Invariably, within 10 minutes, I receive messages from coworkers congratulating me on my boldness. You asked him? That question? So brave!
This is always a little strange to me. Of course I feel free to question the CEO!
What really makes me nervous is asking cutting questions to people newer to the company than me, who have less experience, and less position. If I screw up how I ask them a question, and am a little too thoughtless or rude, or imagine I was, I agonize about it for hours, and sometimes days.
This is because of how I view hierarchy. By hierarchy I mean: In any organization, be it a company, a friend group, or a board game meetup, there are people above you and there are people below you. These relative positions depend on differences in experience, skill, social ability, and how long each of you has been in the group. Your position relative to the person you’re talking to determines the kinds of things you can say to them, and how you can say them.
I think the people congratulating me on how bold I was in questioning a CEO have a rule like “In a corporation, you should not question, or seem to disagree with, those above you.” But I have long felt that the opposite rule is the obviously correct one: “Feel free to question those above you without worrying too much about how you come off, but be extremely careful when questioning those below you.” (In group settings. In private, just the two of you, many of these rules disappear).
Maybe it’s easier to see what I mean in a different example. Consider (male) friend groups. When I hang out with my friends, we talk shit to each other constantly. So imagine me, at the bar, in such a group, saying t |
8e0c79ce-dcdd-4c2e-83d6-9c78f3e881ae | trentmkelly/LessWrong-43k | LessWrong | Nothing is Forbidden, but Some Things are Good
I recently wrote about my vegetarianism to casually document it for friends, and it proved to be more popular than anything else I’ve written lately. My guess is that it was popular because it was about something highly relatable that many people think about already — food choices — but maybe it was engaging because it was ultimately about morality. Since I don’t have much more to say about food, I’ll say some more things on morality to see if it proves similarly engaging to folks.
On my moral thinking I said this:
> As you may have noticed, I became a vegetarian via preference utilitarianism, but stay a vegetarian to signal virtue. That would be pretty confused moral reasoning, except properly I don’t think morality is a category of thing that exists in the world and instead is an illusion created by seeing the world through a frame that does not include system relationships. But I do recognize preferences, my own preferences include a preference for the maximization of the preferences of others all else equal, and as a result I think in a way that generally aligns with the moral theory of preference utilitarianism, but if I had different preferences about the preferences of others I could just as easily be a deontologist or virtue theorist in terms of morality, so I see no problem in the contradictions that result from flatting my thinking into terms of morality.
Echoing this sentiment, a few days later this pithy line showed up in my Facebook feed:
The rules say we have to use consequentialism, but good people are deontologists, and virtue ethics is what actually works.
Depending on your thinking on morality it may sound to you like I’m being evasive and Eli is being too clever, but I think both of us are trying to convey that something more complex is going on that ends up doing weird things if you squeeze it into the abstraction of morality. Eli wrote extensively on this topic back in 2008, but you may not like his writing, be unwilling to sift through it a |
1a624711-be6e-4e54-bc86-d6b2ac7650c5 | trentmkelly/LessWrong-43k | LessWrong | Alignment Newsletter #42
Cooperative IRL as a definition of human-AI group rationality, and an empirical evaluation of theory of mind vs. model learning in HRI
Find all Alignment Newsletter resources here. In particular, you can sign up, or look through this spreadsheet of all summaries that have ever been in the newsletter.
Highlights
AI Alignment Podcast: Cooperative Inverse Reinforcement Learning (Lucas Perry and Dylan Hadfield-Menell) (summarized by Richard): Dylan puts forward his conception of Cooperative Inverse Reinforcement Learning as a definition of what it means for a human-AI system to be rational, given the information bottleneck between a human's preferences and an AI's observations. He notes that there are some clear mismatches between this problem and reality, such as the CIRL assumption that humans have static preferences, and how fuzzy the abstraction of "rational agents with utility functions" becomes in the context of agents with bounded rationality. Nevertheless, he claims that this is a useful unifying framework for thinking about AI safety.
Dylan argues that the process by which a robot learns to accomplish tasks is best described not just as maximising an objective function but instead in a way which includes the system designer who selects and modifies the optimisation algorithms, hyperparameters, etc. In fact, he claims, it doesn't make sense to talk about how well a system is doing without talking about the way in which it was instructed and the type of information it got. In CIRL, this is modeled via the combination of a "teaching strategy" and a "learning strategy". The former can take many forms: providing rankings of options, or demonstrations, or binary comparisons, etc. Dylan also mentions an extension of this in which the teacher needs to learn their own values over time. This is useful for us because we don't yet understand the normative processes by which human societies come to moral judgements, or how to integrate machines into that process.
On th |
808cc407-4c15-4e6d-bf4d-41eee72c7ab8 | trentmkelly/LessWrong-43k | LessWrong | Research Adenda: Modelling Trajectories of Language Models
Apply to work on this project with me at AI Safety Camp 2024 before 1st December 2023.
What are the possible outcomes?
Summary
Rather than asking “What next token will the Language Model Predict?” or “What next action will an RL agent take?”, I think it is important to be able to model the longer-term behaviour of models, rather than just the immediate next token or action. I think there likely exist parameter- and compute-efficient ways to summarise what kinds of longer-term trajectories/outputs a model might output given an input and its activations. The aim of this project would be to conceptually develop ideas, and to practically build a method to summarise the possible completion trajectories of a language model, and understand how this might or might not generalise to various edge cases.
The non-summary
Motivation for the Project
There is a large amount of effort going into understanding and interpreting the weights and activations of Language Models. Much of this is focused on directly looking at the logits for next token predictions. While I think one can learn a lot about models this way, I think there are potentially more ways one can understand behaviour of models. Another method for investigation is to sample some possible outputs, but this is expensive and time-consuming, and it can be difficult to inspect all of the outputs.
I think that it should be possible to get sufficient information from understanding model behaviour and outputs in a much more condensed and compute-efficient way, such that one could iterate through different interpretability strategies, and understand more high-level insights about the models. In addition, I think that being able to get a broader understanding of what kinds of outputs a model might likely output would make it easier to monitor models, and prevent potential trajectories we might not like.
More explicitly:
* “Machine Unlearning”/ablation experiments is one of the key ways of trying to have better met |
925662c9-a657-4ac6-8ecf-191d5a094759 | trentmkelly/LessWrong-43k | LessWrong | A mechanistic model of meditation
Meditation has been claimed to have all kinds of transformative effects on the psyche, such as improving concentration ability, healing trauma, cleaning up delusions, allowing one to track their subconscious strategies, and making one’s nervous system more efficient. However, an explanation for why and how exactly this would happen has typically been lacking. This makes people reasonably skeptical of such claims.
In this post, I want to offer an explanation for one kind of a mechanism: meditation increasing the degree of a person’s introspective awareness, and thus leading to increasing psychological unity as internal conflicts are detected and resolved.
Note that this post does not discuss “enlightenment”. That is a related but separate topic. It is possible to pursue meditation mainly for its ordinary psychological benefits while being uninterested in enlightenment, and vice versa.
What is introspective awareness?
In an earlier post on introspective awareness, I distinguished between being aware of something, and being aware of having been aware of something. My example involved that of a robot whose consciousness contains one mental object at a time, and which is aware of different things at different times:
> Robot’s thought at time 1: It’s raining outside
> Robot’s thought at time 2: Battery low
> Robot’s thought at time 3: Technological unemployment protestors are outside
> Robot’s thought at time 4: Battery low
> Robot’s thought at time 5: I’m now recharging my battery
At times 2-5, the robot has no awareness of the fact that it was thinking about rain at time 1. As soon as something else captures its attention, it has no idea of this earlier conscious content - unless a particular subsystem happens to record the fact, and can later re-present the content in an appropriately tagged form:
> Time 6: At time 1, there was the thought that [It’s raining outside]
I said that at time 6, the robot had a moment of introspective awareness: a mental object c |
573a025d-5ba1-4e02-bd98-66adc4a96ab0 | trentmkelly/LessWrong-43k | LessWrong | AI as Super-Demagogue
Sam Altman recently said:
> i expect ai to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes
I believe that he is absolutely right. Superhuman persuasion can be achieved by having LLMs consistently apply existing capabilities to known persuasion techniques. Some of these techniques are already proven to work at scale. Others have demonstrated effectiveness, and can more easily be applied at scale by AI. This makes superhuman persuasion a straightforward technology proposition.
I will look at this from the point of view of how AI can enable someone possessing the positions, skills, and desires necessary to attempt to create an authoritarian regime. In other words, I am describing something that is a near-term possibility without major advances in AI. Major advances seem likely, and would only make this picture worse.
I will also focus on techniques whose effectiveness has been proven by repeated human success. I will focus most of all on dictators and demagogues. Because he is so familiar, I will cite Donald Trump as an expert practitioner. This is not support for, or criticism of, him. I've aimed to have this read equally well whether or not you like him.
The information that I'm presenting is not new. Though it has come together for me as part of thinking about my blog. And I decided to post this as a response to, We are already in a persuasion-transformed world and must take precautions.
Talk to System 1
Dictators want a mass audience who is emotionally aligned with them. These people should want the dictator to be right. And ideally should find it painful to question the dictator. The result is followers who can't be convinced by rational argument.
This requires conditioning System 1. So you want to speak in a way that System 1 responds to, but doesn't activate System 2. Use your speeches to deliver a key emotional message over and over again.
The necessary verbal patter |
942bbffb-e52a-4f54-976f-90641eaab3b0 | trentmkelly/LessWrong-43k | LessWrong | Suspected reason that kids usually hate vegetables
Here is a common vegetable preparation method in U.S. suburban homes.
Step 1: cut it up
Step 2: boil them in water until the flavor and texture is gone.
Step 3: Serve them without any kind of seasoning, or mixed with another food that could provide flavor and texture. Alternatively, put salt and pepper on them so that all vegetables just taste like salt and pepper.
Step 4: Tell the kids that eating a pile of bland mush with each meal is needed for being healthy.
In high school, me and several other people made the surprising discovery that if you eat vegetables raw, they actually taste alright, and some (like celery) even taste good enough to just snack on plain. Plus, there are things like stir-fry that make vegetables taste great.
I know this was common where I grew up. Do enough other people have similar experiences that raising knowledge of this is a great way to increase healthy eating?
Edit: after viewing the comments, this does not seem like the main reason. |
b9847442-e41b-437a-8be3-43fe4384be5d | trentmkelly/LessWrong-43k | LessWrong | Conclusion and Bibliography for "Understanding the diffusion of large language models"
This post is one part of the sequence Understanding the diffusion of large language models. As context for this post, I strongly recommend reading at least the 5-minute summary of the sequence.
Conclusion
In this sequence I presented key findings from case studies on the diffusion of eight language models that are similar to GPT-3. The phenomenon of diffusion has broad relevance to risks from TAI:
1. The diffusion of AI technology affects when TAI will be developed, and which actors will lead AI development by what margin. This in turn affects how safe the TAI systems are, how the systems are used, and what the state of global politics and economics is like when the systems are used.
2. Diffusion can have benefits, such as helping less-resourced actors to scrutinize leading AI developers, and supporting AI alignment research outside of leading industry AI labs.
GPT-3-like models are quite a specific domain, and may seem far from TAI. Nonetheless, I centered my research on case studies of GPT-3-like models because I think they are relatively informative about how diffusion will impact TAI development. In particular:
1. The way that diffusion works today (in broad terms) might persist until the development of TAI, especially if TAI is developed relatively soon (e.g., in the next 10 years).
2. TAI systems (or components of them) might resemble today’s best-performing language models, especially if the scaling hypothesis is true. So the implications of diffusion related to such models may be similar to the implications of diffusion related to transformative AI systems.
3. Even if a lot changes between now and TAI, the history of diffusion improves our understanding of what could happen.
My research has strong limitations, including that:
1. Much of the data from my case studies is highly uncertain, with quantitative estimates often spanning an order of magnitude.
2. I often generalize from a small set of case studies in a narrow domain. Some of my conclus |
bd4b840f-e8a6-45e4-b518-d2ae87a1d7e0 | trentmkelly/LessWrong-43k | LessWrong | (solved) how do i find others' shortform posts?
edit: thanks :) |
5409d472-fb7f-467c-b1b9-213d5e80a682 | trentmkelly/LessWrong-43k | LessWrong | If you wish to make an apple pie, you must first become dictator of the universe
[Cross-posted from The Roots of Progress. Written for a general audience. LW readers may find the first couple of sections to be familiar intro material.]
The word “robot” is derived from the Czech robota, which means “serfdom.” It was introduced over a century ago by the Czech play R.U.R., for “Rossum’s Universal Robots.” In the play, the smartest and best-educated of the robots leads a slave revolt that wipes out most of humanity. In other words, as long as sci-fi has had the concept of intelligent machines, it has also wondered whether they might one day turn against their creators and take over the world.
The power-hungry machine is a natural literary device to generate epic conflict, well-suited for fiction. But could there be any reason to expect this in reality? Isn’t it anthropomorphizing machines to think they will have a “will to power”?
It turns out there is an argument that not only is power-seeking possible, but that it might be almost inevitable in sufficiently advanced AI. And this is a key part of the argument, now being widely discussed, that we should slow, pause, or halt AI development.
What is the argument for this idea, and how seriously should we take it?
AI’s “basic drives”
The argument goes like this. Suppose you give an AI an innocuous-seeming goal, like playing chess, fetching coffee, or calculating digits of π. Well:
* It can do better at the goal if it can upgrade itself, so it will want to have better hardware and software. A chess-playing robot could play chess better if it got more memory or processing power, or if it discovered a better algorithm for chess; ditto for calculating π.
* It will fail at the goal if it is shut down or destroyed: “you can’t get the coffee if you’re dead.” Similarly, it will fail if someone actively gets in its way and it cannot overcome them. It will also fail if someone tricks it into believing that it is succeeding when it is not. Therefore it will want security against such attacks and interfere |
bc4bb5f1-88e1-4af0-a494-73eab8707d1d | trentmkelly/LessWrong-43k | LessWrong | Marketing rationalism
Suppose you're a protestant, and you want to convince other people to do what the Bible says to do. Would you persuade them by showing them that the Bible says that they should?
Now suppose you're a rationalist, and you want to convince other people to be rational. Would you persuade them with a rational argument?
If not, how?
ADDED: I'm not talking about persuading others who already accept reason as final arbiter to adopt Bayesian principles, or anything like that. I mean persuading Joe on the street who does whatever feels good, and feels pretty good about that. Or a doctor of philosophy who believes that truth is relative and reason is a social construct. Or a Christian who believes that the Bible is God's Word, and things that contradict the Bible must be false.
Christians don't place a whole set of the population off-limits and say, "These people are unreachable; their paradigms are too different." They go after everyone. There is no class of people whom they are unsuccessful with.
Saying that we have to play by a set of self-imposed rules in the competition for the minds of humanity, while our competitors don't, means we will lose. And isn't rationality about winning?
ADDED: People are missing the point that the situation is symmetrical for religious evangelists. For them to step outside of their worldview, and use reason to gain converts, is as epistemically dangerous for them, as it is for us to gain converts using something other than reason. Contemporary Christians consider themselves on good terms with reason; but if you look back in history, you'll find that many of the famous and influential Christian theologians (starting with Paul) made explicit warnings against the temptation of reason. The proceedings from Galileo's trial contain some choice bits on the relation between reason and faith.
Using all sorts of persuasive techniques that are not grounded in religious truth, and hence are epistemically repulsive to them and corrosive |
a4b81cbc-93e4-443b-be09-739bd6247f39 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Less Wrong NH Meet-up
Discussion article for the meetup : Less Wrong NH Meet-up
WHEN: 24 November 2015 07:00:00PM (-0500)
WHERE: 269 Pearl St Manchester NH 03104
The eighth NH meet-up is Tuesday, 11/24, in Manchester, NH at 7 pm at a private residence. Light refreshments will be provided.
Have you read Rationality: from AI to Zombies, or any of the Sequences on Less Wrong? Maybe you're just a fan of Harry Potter and the Methods of Rationality. Come hang out with us and discuss optimization of whatever it is you want to optimize.
Agenda: Propagating Urges technique
You may want to bring a notebook.
https://www.facebook.com/events/989482047779770/
https://www.facebook.com/groups/695201067251306/
Discussion article for the meetup : Less Wrong NH Meet-up |
a313cfd1-9064-48e3-bbca-e7547921ca17 | LDJnr/LessWrong-Amplify-Instruct | LessWrong | ""If a tree falls in the forest, but no one hears it, does it make a sound?"
I didn't answer that question. I didn't pick a position, "Yes!" or "No!", and defend it. Instead I went off and deconstructed the human algorithm for processing words, even going so far as to sketch an illustration of a neural network. At the end, I hope, there was no question left—not even the feeling of a question.
Many philosophers—particularly amateur philosophers, and ancient philosophers—share a dangerous instinct: If you give them a question, they try to answer it.
Like, say, "Do we have free will?"
The dangerous instinct of philosophy is to marshal the arguments in favor, and marshal the arguments against, and weigh them up, and publish them in a prestigious journal of philosophy, and so finally conclude: "Yes, we must have free will," or "No, we cannot possibly have free will."
Some philosophers are wise enough to recall the warning that most philosophical disputes are really disputes over the meaning of a word, or confusions generated by using different meanings for the same word in different places. So they try to define very precisely what they mean by "free will", and then ask again, "Do we have free will? Yes or no?"
A philosopher wiser yet, may suspect that the confusion about "free will" shows the notion itself is flawed. So they pursue the Traditional Rationalist course: They argue that "free will" is inherently self-contradictory, or meaningless because it has no testable consequences. And then they publish these devastating observations in a prestigious philosophy journal.
But proving that you are confused may not make you feel any less confused. Proving that a question is meaningless may not help you any more than answering it. The philosopher's instinct is to find the most defensible position, publish it, and move on. But the "naive" view, the instinctive view, is a fact about human psychology. You can prove that free will is impossible until the Sun goes cold, but this leaves an unexplained fact of cognitive science: If free will doesn't exist, what goes on inside the head of a human being who thinks it does? This is not a rhetorical question!
It is a fact about human psychology that people think they have free will. Finding a more defensible philosophical position doesn't change, or explain, that psychological fact. Philosophy may lead you to reject the concept, but rejecting a concept is not the same as understanding the cognitive algorithms behind it.
You could look at the Standard Dispute over "If a tree falls in the forest, and no one hears it, does it make a sound?", and you could do the Traditional Rationalist thing: Observe that the two don't disagree on any point of anticipated experience, and triumphantly declare the argument pointless. That happens to be correct in this particular case; but, as a question of cognitive science, why did the arguers make that mistake in the first place?
The key idea of the heuristics and biases program is that the mistakes we make, often reveal far more about our underlying cognitive algorithms than our correct answers. So (I asked myself, once upon a time) what kind of mind design corresponds to the mistake of arguing about trees falling in deserted forests?
The cognitive algorithms we use, are the way the world feels. And these cognitive algorithms may not have a one-to-one correspondence with reality—not even macroscopic reality, to say nothing of the true quarks. There can be things in the mind that cut skew to the world.
For example, there can be a dangling unit in the center of a neural network, which does not correspond to any real thing, or any real property of any real thing, existent anywhere in the real world. This dangling unit is often useful as a shortcut in computation, which is why we have them. (Metaphorically speaking. Human neurobiology is surely far more complex.)
This dangling unit feels like an unresolved question, even after every answerable query is answered. No matter how much anyone proves to you that no difference of anticipated experience depends on the question, you're left wondering: "But does the falling tree really make a sound, or not?"
But once you understand in detail how your brain generates the feeling of the question—once you realize that your feeling of an unanswered question, corresponds to an illusory central unit wanting to know whether it should fire, even after all the edge units are clamped at known values—or better yet, you understand the technical workings of Naive Bayes—then you're done. Then there's no lingering feeling of confusion, no vague sense of dissatisfaction.
If there is any lingering feeling of a remaining unanswered question, or of having been fast-talked into something, then this is a sign that you have not dissolved the question. A vague dissatisfaction should be as much warning as a shout. Really dissolving the question doesn't leave anything behind.
A triumphant thundering refutation of free will, an absolutely unarguable proof that free will cannot exist, feels very satisfying—a grand cheer for the home team. And so you may not notice that—as a point of cognitive science—you do not have a full and satisfactory descriptive explanation of how each intuitive sensation arises, point by point.
You may not even want to admit your ignorance, of this point of cognitive science, because that would feel like a score against Your Team. In the midst of smashing all foolish beliefs of free will, it would seem like a concession to the opposing side to concede that you've left anything unexplained.
And so, perhaps, you'll come up with a just-so evolutionary-psychological argument that hunter-gatherers who believed in free will, were more likely to take a positive outlook on life, and so outreproduce other hunter-gatherers—to give one example of a completely bogus explanation. If you say this, you are arguing that the brain generates an illusion of free will—but you are not explaining how. You are trying to dismiss the opposition by deconstructing its motives—but in the story you tell, the illusion of free will is a brute fact. You have not taken the illusion apart to see the wheels and gears.
Imagine that in the Standard Dispute about a tree falling in a deserted forest, you first prove that no difference of anticipation exists, and then go on to hypothesize, "But perhaps people who said that arguments were meaningless were viewed as having conceded, and so lost social status, so now we have an instinct to argue about the meanings of words." That's arguing that or explaining why a confusion exists. Now look at the neural network structure in Feel the Meaning. That's explaining how, disassembling the confusion into smaller pieces which are not themselves confusing. See the difference?
Coming up with good hypotheses about cognitive algorithms (or even hypotheses that hold together for half a second) is a good deal harder than just refuting a philosophical confusion. Indeed, it is an entirely different art. Bear this in mind, and you should feel less embarrassed to say, "I know that what you say can't possibly be true, and I can prove it. But I cannot write out a flowchart which shows how your brain makes the mistake, so I'm not done yet, and will continue investigating."
I say all this, because it sometimes seems to me that at least 20% of the real-world effectiveness of a skilled rationalist comes from not stopping too early. If you keep asking questions, you'll get to your destination eventually. If you decide too early that you've found an answer, you won't.
The challenge, above all, is to notice when you are confused—even if it just feels like a little tiny bit of confusion—and even if there's someone standing across from you, insisting that humans have free will, and smirking at you, and the fact that you don't know exactly how the cognitive algorithms work, has nothing to do with the searing folly of their position...
But when you can lay out the cognitive algorithm in sufficient detail that you can walk through the thought process, step by step, and describe how each intuitive perception arises—decompose the confusion into smaller pieces not themselves confusing—then you're done.
So be warned that you may believe you're done, when all you have is a mere triumphant refutation of a mistake.
But when you're really done, you'll know you're done. Dissolving the question is an unmistakable feeling—once you experience it, and, having experienced it, resolve not to be fooled again. Those who dream do not know they dream, but when you wake you know you are awake.
Which is to say: When you're done, you'll know you're done, but unfortunately the reverse implication does not hold.
So here's your homework problem: What kind of cognitive algorithm, as felt from the inside, would generate the observed debate about "free will"?
Your assignment is not to argue about whether people have free will, or not.
Your assignment is not to argue that free will is compatible with determinism, or not.
Your assignment is not to argue that the question is ill-posed, or that the concept is self-contradictory, or that it has no testable consequences.
You are not asked to invent an evolutionary explanation of how people who believed in free will would have reproduced; nor an account of how the concept of free will seems suspiciously congruent with bias X. Such are mere attempts to explain why people believe in "free will", not explain how.
Your homework assignment is to write a stack trace of the internal algorithms of the human mind as they produce the intuitions that power the whole damn philosophical argument.
This is one of the first real challenges I tried as an aspiring rationalist, once upon a time. One of the easier conundrums, relatively speaking. May it serve you likewise."
The excerpt provided comes from a user post on LessWrong.
What is this text talking about?
This text is a discussion on the concept of "free will" and how it is perceived and debated within philosophy and cognitive science. The author argues that the traditional approach of philosophers to either affirm or deny the existence of free will does not address the underlying cognitive processes that lead to the belief in free will.
The author proposes that instead of debating the existence of free will, we should be examining the cognitive algorithms or mental processes that lead us to perceive and argue about free will. They suggest that understanding these processes can help to dissolve philosophical questions and disputes, rather than just arguing for one side or the other.
The text also emphasizes the importance of not stopping at a simple refutation of a philosophical concept, but to continue investigating the cognitive processes behind these concepts. It concludes with a challenge to the reader to trace the cognitive processes that lead to the debate about free will.
Explain how cognitive algorithms or mental processes can be identified and studied to understand the belief in philosophical concepts such as free will.
Cognitive algorithms or mental processes can be identified and studied through a combination of methods from cognitive science, psychology, neuroscience, and computational modeling. Here's a general process:
1. Observation and Experimentation: Observing behavior and conducting experiments can provide initial insights into how people process information and make decisions. For example, researchers might observe how people behave in situations where their belief in free will is challenged.
2. Cognitive Modeling: Based on observations and experimental data, researchers can construct cognitive models - simplified representations of the mental processes that might be at work. These models can be based on various theoretical frameworks, such as decision-making theories, information processing theories, or connectionist models.
3. Neuroimaging and Neurophysiological Techniques: Techniques such as functional Magnetic Resonance Imaging (fMRI), Electroencephalography (EEG), and Transcranial Magnetic Stimulation (TMS) can be used to study brain activity associated with specific mental processes. This can help identify which areas of the brain are involved in the belief in philosophical concepts like free will.
4. Computational Modeling: Computational models can simulate cognitive processes and predict behavior. These models can be used to test hypotheses about the underlying cognitive algorithms. For example, a model might simulate how beliefs about free will could emerge from more basic cognitive processes.
5. Iterative Refinement: Based on the results of these investigations, the cognitive models can be refined and adjusted. This is an iterative process, with each round of experiments and modeling leading to more accurate and detailed models of the cognitive processes.
Through this process, researchers can gain a better understanding of the cognitive algorithms that underlie our beliefs in philosophical concepts. It's important to note, however, that this is a complex task and our understanding of these processes is still evolving.
Explain the process of iterative refinement in cognitive modeling and its significance in understanding complex philosophical concepts.
Iterative refinement in cognitive modeling is a process of continuously updating and improving models based on new data, insights, or theoretical advancements. It's a cycle of hypothesis formation, testing, analysis, and refinement.
Here's a step-by-step breakdown of the process:
1. Hypothesis Formation: Researchers start by forming a hypothesis about a particular cognitive process. For instance, they might hypothesize that belief in free will arises from a specific set of cognitive algorithms.
2. Model Building: Based on this hypothesis, they build a cognitive model. This model is a simplified representation of the mental processes that are hypothesized to underlie the belief in free will.
3. Model Testing: The model is then tested, often through computer simulations or empirical experiments. The goal is to see if the model can accurately predict behavior or cognitive outcomes.
4. Analysis: The results of these tests are analyzed. If the model's predictions align with the observed data, this provides support for the model. If they don't, this suggests that the model may need to be revised.
5. Refinement: Based on the analysis, the model is refined. This might involve tweaking certain parameters, adding new components, or even overhauling the model entirely.
6. Iteration: The refined model is then subjected to further testing and analysis, and the cycle repeats.
This process of iterative refinement is crucial for understanding complex philosophical concepts because it allows for a gradual, step-by-step unpacking of these concepts. Each iteration of the model brings us closer to a more accurate representation of the cognitive processes at play.
By continuously refining the model, researchers can progressively improve their understanding of the concept, peeling back layers of complexity and getting closer to the underlying cognitive reality. This can ultimately lead to a more nuanced and detailed understanding of philosophical concepts like free will, going beyond simple binary debates to uncover the intricate cognitive machinery that fuels such beliefs. |
7ac0d31e-5a3d-428a-b744-7d9ee06588d8 | trentmkelly/LessWrong-43k | LessWrong | Group Rationality Diary, July 1-15
This is the public group instrumental rationality diary for July 1-15.
> It's a place to record and chat about it if you have done, or are actively doing, things like:
>
> * Established a useful new habit
> * Obtained new evidence that made you change your mind about some belief
> * Decided to behave in a different way in some set of situations
> * Optimized some part of a common routine or cached behavior
> * Consciously changed your emotions or affect with respect to something
> * Consciously pursued new valuable information about something that could make a big difference in your life
> * Learned something new about your beliefs, behavior, or life that surprised you
> * Tried doing any of the above and failed
>
> Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves. Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.
Thanks to cata for starting the Group Rationality Diary posts, and to commenters for participating!
Next diary -- July 15-31
Immediate past diary -- June 1-30
Rationality Diaries archive |
7a691787-1599-4d50-8294-f3a255d8469f | trentmkelly/LessWrong-43k | LessWrong | Am I going for a job interview with a woo pusher?
I responded to a job posting on Indeed and have been called in for an interview. This is the first job interview I've had in a long time.
http://www.indeed.com/viewjob?from=appsharedroid&jk=60abfc78c83955a2
Unfortunately I think the people at http://njneurofeedback.com might be selling woo, and I have some serious moral objections to defrauding people with placebo treatments. Are they pushing bullshit?
Edit: I went to the interview. It turns out that they misinterpreted my resume and they quickly told me that they wanted someone with more experience and sent me home. |
7b1265c3-71c5-484a-9ea6-88e60c99c3ec | StampyAI/alignment-research-dataset/agentmodels | Tutorial: Modeling Agents with Probabilistic Programs | Modeling Agents with Probabilistic Programs
---
layout: chapter
title: Introduction
description: "Motivating the problem of modeling human planning and inference using rich computational models."
is_section: true
---
Imagine a dataset that records how individuals move through a city. The figure below shows what a datapoint from this set might look like. It depicts an individual, who we'll call Bob, moving along a street and then stopping at a restaurant. This restaurant is one of two nearby branches of a chain of Donut Stores. Two other nearby restaurants are also shown on the map.

Given Bob's movements alone, what can we infer about his preferences and beliefs? Since Bob spent a long time at the Donut Store, we infer that he bought something there. Since Bob could easily have walked to one of the other nearby eateries, we infer that Bob prefers donuts to noodles or salad.
Assuming Bob likes donuts, why didn't he choose the store closer to his starting point ("Donut South")? The cause might be Bob's *beliefs* and *knowledge* rather than his *preferences*. Perhaps Bob doesn't know about "Donut South" because it just opened. Or perhaps Bob knows about Donut South but chose Donut North because it is open later.
A different explanation is that Bob *intended* to go to the healthier "Vegetarian Salad Bar". However, the most efficient route to the Salad Bar takes him directly past Donut North, and once outside, he found donuts more tempting than salad.
We have described a variety of inferences about Bob which would explain his behavior. This tutorial develops models for inference that represent these different explanations and allow us to compute which explanations are most plausible. These models can also simulate an agent's behavior in novel scenarios: for example, predicting Bob's behavior if he looked for food in a different part of the city.
<!-- Remove because we don't do hierarchical case
Now, suppose that our dataset shows that a significant number of different individuals took exactly the same path as Bob. How would this change our conclusions about him? It could be that everyone is tempted away from healthy food in the way Bob potentially was. But this seems unlikely. Instead, it is now more plausible that Donut South is closed or that it is a new branch that few people know about.
This kind of reasoning, where we make assumptions about the distributions of beliefs within populations, will be formalized and simulated in later chapters. We will also consider multi-agent behavior where coordination or competition become important.
-->
## Agents as programs
### Making rational plans
Formal models of rational agents play an important role in economics refp:rubinstein2012lecture and in the cognitive sciences refp:chater2003rational as models of human or animal behavior. Core components of such models are *expected-utility maximization*, *Bayesian inference*, and *game-theoretic equilibria*. These ideas are also applied in engineering and in artificial intelligence refp:russell1995modern in order to compute optimal solutions to problems and to construct artificial systems that learn and reason optimally.
This tutorial implements utility-maximizing Bayesian agents as functional probabilistic programs. These programs provide a concise, intuitive translation of the mathematical specification of rational agents as code. The implemented agents explicitly simulate their own future choices via recursion. They update beliefs by exact or approximate Bayesian inference. They reason about other agents by simulating them (which includes simulating the simulations of others).
The first section of the tutorial implements agent models for sequential decision problems in stochastic environments. We introduce a program that solves finite-horizon MDPs, then extend it to POMDPs. These agents behave *optimally*, making rational plans given their knowledge of the world. Human behavior, by contrast, is often *sub-optimal*, whether due to irrational behavior or constrained resources. The programs we use to implement optimal agents can, with slight modification, implement agents with biases (e.g. time inconsistency) and with resource bounds (e.g. bounded look-ahead and Monte Carlo sampling).
### Learning preferences from behavior
The example of Bob was not primarily about *simulating* a rational agent, but rather about the problem of *learning* (or *inferring*) an agent's preferences and beliefs from their choices. This problem is important to both economics and psychology. Predicting preferences from past choices is also a major area of applied machine learning; for example, consider the recommendation systems used by Netflix and Facebook.
One approach to this problem is to assume the agent is a rational utility-maximizer, to assume the environment is an MDP or POMDP, and to infer the utilities and beliefs and predict the observed behavior. This approach is called "structural estimation" in economics refp:aguirregabiria2010dynamic, "inverse planning" in cognitive science refp:ullman2009help, and "inverse reinforcement learning" (IRL) in machine learning and AI refp:ng2000algorithms. It has been applied to inferring the perceived rewards of education from observed work and education choices, preferences for health outcomes from smoking behavior, and the preferences of a nomadic group over areas of land (see cites in reft:evans2015learning).
[Section IV](/chapters/4-reasoning-about-agents.html) shows how to infer the preferences and beliefs of the agents modeled in earlier chapters. Since the agents are implemented as programs, we can apply probabilistic programming techniques to perform this sort of inference with little additional code. We will make use of both exact Bayesian inference and sampling-based approximations (MCMC and particle filters).
## Taster: probabilistic programming
Our models of agents, and the corresponding inferences about agents, all run in "code boxes" in the browser, accompanied by animated visualizations of agent behavior. The language of the tutorial is [WebPPL](http://webppl.org), an easy-to-learn probabilistic programming language based on Javascript refp:dippl. As a taster, here are two simple code snippets in WebPPL:
~~~~
// Using the stochastic function `flip` we build a function that
// returns 'H' and 'T' with equal probability:
var coin = function() {
return flip(.5) ? 'H' : 'T';
};
var flips = [coin(), coin(), coin()];
print('Some coin flips:');
print(flips);
~~~~
~~~~
// We now use `flip` to define a sampler for the geometric distribution:
var geometric = function(p) {
return flip(p) ? 1 + geometric(p) : 1
};
var boundedGeometric = Infer({
model() { return geometric(0.5); },
method: 'enumerate',
maxExecutions: 20
});
print('Histogram of (bounded) Geometric distribution');
viz(boundedGeometric);
~~~~
In the [next chapter](/chapters/2-webppl.html), we will introduce WebPPL in more detail.
|
7f95e0ab-eca6-48ae-8090-5b3a4fd0d66d | StampyAI/alignment-research-dataset/arxiv | Arxiv | DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation
I Introduction
---------------
With the rapid development of new technologies such as cloud, edge, and mobile computing, the number and diversity of available services are dramatically exploding, and services have become increasingly important to people’s daily work and life. The increasing number and diversity of services bring significant challenges to effective service management and reuse. Consequently, recommending suitable services for developers based on requirements to help them reduce the burden of service selection in creating applications (mashups) becomes a non-trivial issue, and such task is often called service bundle recommendation in services computing.
Service bundle recommendation refers to recommending a set of services based on these services’ functions and history compositions to satisfy the user’s explicit and implicit requirements[WU2015133]. There have been many service recommendation studies from different perspectives. Existing approaches can be categorized into natural language processing (NLP) based approaches[faieq2019context, lei2015time, lin2018nl2api], graph-based approaches[hu2019poi, chang2021graph, cao2019efficient], and hybrid approaches [gao2016seco, ma2021deep]. These approaches have achieved remarkable results. However, ignoring the following factors limits the performance of their approaches and the application scenarios in reality:
(1) Cold Start: Most existing approaches leverage the interaction history between mashups and services to recommend missing services. They work well in normal situations like user-item recommendations and user-movie recommendations. The co-occurrence between users(mashups) and items(movies/services) can be repeated. However, in the application scenario of mashup creation, each mashup only appears once and without any interaction with existing services. The lack of such information decreases the performance of existing approaches. This phenomenon is also known as the cold start of requirements, which has recently started to attract the attention of researchers[ma2021deep, gu2021csbr]. Additionally, existing approaches assume all services have enough description document and historical usage information (e.g., invocation history, co-invocation, and popularity). While, in the real world, services are created in temporal order, and newly created services often do not accumulate enough historical usage information or even no historical usage information. Or, due to unregulated development, some service providers do not provide a comprehensive service description. These services are usually ignored in existing approaches. We call this phenomenon the cold start of services, which has not received enough attention in service bundle recommendations.
(2) Evolving Service Social: Since, in most cases, services do not actively update their profile promptly[wang2021external], existing approaches assume services’ function and quality are static. However, we argue that the quality of services evolves, and the functionality of services evolves in a latent way as their social environment changes. For example, a service that other applications have not invoked for a long time may indicate that the service quality has degraded or that an alternative service with better similar functionality is available and should be avoided when making service recommendations.
Readmill111<https://www.programmableweb.com/api/readmill> is an example of service function evolving[bai2017sr], which is designed for “Social” as its profile says “Readmill is an online and mobile platform for readers to share information about …, allowing them to highlight and discuss sections of eBooks with other users. . .”. The mashups that invoke Readmill are gradually moving from the “Social” domain to the “Book” domain, indicating that the social environment of Readmill is gradually evolving from “Social” to “Books”.
(3) Representation Gaps between Services and Requirements: There is a large gap between services and requirements representation, often overlooked in existing approaches. The representation of requirements focuses on the functionality and value that the constructed mashup can provide to the user and is accompanied by an extensive domain vocabulary.
In contrast, the representation of services is more freedom. Services can be described using natural language text, but they may describe their performance and input/output format in a technical-oriented language style. Services can also be represented using a set of properties, such as QoS, popularity.
For example, QuickMocker222<https://www.programmableweb.com/mashup/quickmocker> is a mashup where the requirement it satisfied is described as “QuickMocker is an online API tool that allows to create your own public domain and a fake web services…”. While its component service Mocky333<https://www.programmableweb.com/api/mocky> are described as “Mocky is available as a web console, downloadable software, or REST API. The API version of the service accepts calls issued in JSON and JSONP. The Mocky website is available in English, French, and Portuguese”.
These representation gaps make services and requirements exist in different semantic spaces. The arbitrary mixing of features in two different spaces in existing methods can significantly affect the performance of recommendations.

Fig. 1: The correspondence between the innovations of our proposed DySR model and the service bundle recommendation challenges
This paper proposes a dynamic representation learning and aligning based model called DySR to solve the three challenges mentioned above. Fig. [1](#S1.F1 "Fig. 1 ‣ I Introduction ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation") shows how the innovation modules in DySR address the challenges mentioned above, where Learnable Transformation Function, Evolving Representation of Service, and Requirement-Service Matching Function are used to address Representation Gaps, Evolving Service Social, and Cold Start, respectively. In aadition, Joint Learning & GAN-style Training is used to combine the other three modules so that they can provide complementary information to each other:
1. Learnable Transformation Function: We propose a learnable transformation function that aligns the representation of requirements to the semantic space where the representation of services resides, thus eliminating the representation gap between requirements and services.
2. Evolving Representation of Service: We form a dynamically evolving graph of services based on co-invocation relations. Then we perform dynamic representation learning on this dynamic graph to obtain a dynamic representation of the services to solve evolving service social environment.
3. Requirement-Service Matching Function: We introduce a learnable function called the requirement-service matching function to evaluate the probability of an invocation between a given requirement and a service. This function works with only the aligned representation of the requirements and services and does not require other information (e.g., known component service, historical usage). Thus the cold start of mashups and the cold start of services are solved at the same time.
4. Joint Learning & GAN-style Training: DySR models the influence mechanism of service and requirement by jointly learning the dynamic representation learning task of services and service recommendation task. A Generative Adversarial Network (GAN)[2014Generative] like training approach is also proposed to model the process of requirement-induced service evolution and service evolution reacting to requirement, unlike GAN, where the two tasks are not adversarial but cooperative.
Besides the proposed DySR model, the contributions of this paper also include: 1) We provide an in-depth analysis of the issues that have not been sufficiently discussed in real-world service recommendation scenarios; 2) We have conducted extensive experiments on the real-world dataset ProgrammableWeb, which shows that our proposed model significantly outperforms several state-of-the-art approaches in terms of recommendation performance; 3) We have also constructed additional experiments to discuss the effects of the different components of DySR.
The remainder of this paper is organized as follows: In Section [II](#S2 "II Related Work ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation"), we introduce related work. In Section [III](#S3 "III Dynamic Representation Learning and Aligning based Model ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation"), we describe relevant details of the DySR model. In Section [IV](#S4 "IV Experiment Settings ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation"), we give the details of the experiment settings. In Section [V](#S5 "V Results & Discussion ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation"), we present the experiment results and discuss how DySR works. In the final section, we present the conclusion.
Ii Related Work
----------------
The use of service bundle recommendations to satisfy the requirements of users creating new mashups has recently attracted a great deal of academic and industrial attention. Service recommendation approaches can be divided into the following categories based on the information used: NLP-based approaches, graph-based approaches, and hybrid approaches.
###
Ii-a NLP-based Approaches
The main idea of NLP-based approaches is to make recommendations by measuring the similarity between the description of the candidate services and the requirement. Keywords[he2016keyword, he2017efficient] and TF-IDF[xia2014category] are used to match services to satisfy mashup creation. However, these matching procedures cannot really understand semantic and suffer from poor performance.
Ontology-based approaches annotate requirements and services with domain ontologies and use these ontologies to match high-level concepts or calculate their semantic similarities[al2015semantic, karthikeyan2018fuzzy, rupasingha2019alleviating]. However, the lack of suitable domain ontologies and the huge manual annotation cost of ontology construction make ontology-based approaches difficult to use in real-world scenarios. Some studies attempt to use latent semantics to denote text features. Currently, the most common technique used to solve this problem in the field of service computing is the topic model[lin2018nl2api, gu2021csbr]. For example, Li et al. [li2014novel] use a topic model to explore the semantic relationships between mashups and services. Zhong et al. [zhong2016web] refactor descriptions by using Author-Topic Model [rosen2012author] to eliminate the gaps between mashups and services. However, small data volumes, high noise in data, and ignoring word orders make the performance of topic model-based methods unsatisfactory; for example, the recent SOTA method [gu2021csbr] based on topic models has an F1@5 value of only about 30% on the ProgrammableWeb dataset. With the great success of pre-trained language models (PLMs) and deep learning models in the NLP, some researchers start using PLMs and deep learning to solve the service recommendation problem. For example, Bai et al. [bai2017dltsr] designed a stacked denoising autoencoder (SADE) to extract features for the recommendation. However, experimental results show that the simple introduction of PLMs and deep learning does not significantly improve recommendation performance, possibly due to the inability of existing models to directly adapt to service recommendation and negative transfer [wang2019characterizing].
The major drawback of the NLP-based approaches is the inability to leverage the social information between services, leading to recommended service bundles with combinations of services that cannot co-occur due to real-world constraints, despite being functionally compatible.
###
Ii-B Graph-based Approaches
Unlike NLP-based approaches, graph-based approaches make recommendations by mining information from historical interaction between services or requirements (users). For example, [wang2019duskg, hu2019poi, mezni2021context] design recommendation algorithms based on the feature learned on knowledge graph.
The graph-based approach usually works in conjunction with collaborative filtering (CF). For example, Chen et al. [zheng2012collaborative, chen2010regionknn] propose a neighborhood integrated matrix factorization approach to predict the quality of service (QoS) of candidate services. Chang et al. [chang2021graph] designed a graph-based matrix factorization approach to predict QoS, then use the QoS to select services. Besides predicting QoS, graph is also applied to find similar users or services. For example, Maardji et al. [maaradji2011social] proposes a frequent pair mining method for mashup development. Qi et al. [qi2017data] adopts a hybrid random walk to compute the similarities between users or services, and a CF model is designed for service recommendation. [xie2019integrated, liang2016meta] build a heterogeneous information network using various information of services and mashups to measure the similarity between mashups, and then use the user-based CF to ranking candidate services.
Graph-based approaches cannot solve the cold start problem, as new requirements do not have any historical information.
###
Ii-C Hybrid Approaches
Considering the complementarity of textual and graph information, a number of hybrid service recommendation approaches combining the two types of data have been proposed in recent years.
Li et al. [li2014novel] add the invocation relations between requirements and services to a latent Dirichlet allocation (LDA) model to enable the topic model learn the relationship between services and requirements, . [gao2016seco, xia2014category] incorporates data structure made up of service and their co-invocation records into LDA. Jain et al. [jain2015aggregating] and Samanta et al. [samanta2017recommending] use topic models and neighbor interaction probabilities to calculate similarity scores between services and requirements, then multiply these scores to rank candidate services.
Deep learning-based approaches are becoming the mainstream of hybrid methods. For example, Xiong et al. [xiong2018deep] integrates the invocation relations between services and requirements as well as their description similarity into a deep neural network (DNN). Chen et al. [chen2018software] propose a preference-based neural collaborative filtering[he2017neural] recommendation model, which uses multi-layer perceptron to capture the non-linear user-item relationships and obtain abstract data representation from sparse vectors. Ma et al. [ma2021deep] utilize the powerful representation learning abilities provided by deep learning to extract textual features and features from various types of interactions between mashups and services. Wu et al. [9492754] propose a neural framework based on multi-model fusion and multi-task learning, whiche exploits a semantic component to generate representations of requirements and introduces a feature interaction component to model the feature interaction between mashups and services.
Although the hybrid approach is a significant improvement over the NLP-based approaches and graph-based approaches, the current SOTA approach[ma2021deep] still has an F1@5 value below 40% on the ProgrammableWeb dataset. This is mainly caused by ignoring the difference in representation between services and requirements and the evolving service social environment. The DySR model proposed in this paper is a hybrid approach that achieves F1@5 values close to 70% on the ProgrammableWeb dataset by tackling the representation gap and service evolution.
Iii Dynamic Representation Learning and Aligning based Model
-------------------------------------------------------------
| | |
| --- | --- |
| Notion | Description |
| r | Requirement |
| s | Service |
| t | Timestamp |
| vr | The latent representation of requirement r |
| Zt | The set of all services latent representation updated after time t |
| zts | The representation of service s being updated after a co-invocation event involving s at time t |
| z¯ts | Most recently updated latent representation of service s just before t |
| Ψ | Transformation function used for align representation of requirements and services |
| Λ | Requirement-Service matching Function |
| ^ytr,s | The probability of s being a component service of r at time t |
| Ω | The set of trainable parameters in the unsupervised learning module |
| St | Temporary attention matrix at time t |
| At | Service-Service co-invocation adjacency matrix at time t |
| λts1,s2 | Conditional intensity between s1 and s2 at time t |
| o | o=(s1,s2,t) denotes a co-invocation event involving s1 and s2 at time t |
TABLE I: Notions used in this paper
In this section, we introduce the details of the DySR model. In Section [III-A](#S3.SS1 "III-A Overall Framework ‣ III Dynamic Representation Learning and Aligning based Model ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation") we first explain the joint learning framework of DySR. Next, we detail two main subtasks in DySR, supervised service recommendation task and unsupervised evolving service representation task,in Section [III-B](#S3.SS2 "III-B Supervised Service Bundle Recommendation ‣ III Dynamic Representation Learning and Aligning based Model ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation") and Section [III-C](#S3.SS3 "III-C Unsupervised Evolving Service Representation ‣ III Dynamic Representation Learning and Aligning based Model ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation"), respectively. Finally, in Section [III-D](#S3.SS4 "III-D GAN-style training step ‣ III Dynamic Representation Learning and Aligning based Model ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation") we describe the GAN-style training process. Table [I](#S3.T1 "TABLE I ‣ III Dynamic Representation Learning and Aligning based Model ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation") list frequently-used notions in this section and their meanings.
###
Iii-a Overall Framework

Fig. 2: Overview framework of DySR model
As shown in Fig. [2](#S3.F2 "Fig. 2 ‣ III-A Overall Framework ‣ III Dynamic Representation Learning and Aligning based Model ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation"), the proposed DySR model consists of two subtasks: a supervised service bundle recommendation task and an unsupervised evolving service representation task. The service recommendation subtask is used to satisfy our need to recommend suitable services for a new requirement, and this task is used in both the training and inference stage of the DySR model. The representation gaps between requirement and service are eliminated, and the requirement-service matching function is learned in this subtask. The evolving service representation subtask is used to solve the service evolving problem. It is a supplementary task to the service recommendation subtask and is used to provide a suitable service representation for service bundle recommendation, so it will only be used in the training stage of the DySR model.
During the training stage, the input of DySR model is a requirement r that is satisfied at historical time t by a set of component services C+={s1,s2,…,sn}, which is used in the supervised task as ground truth. And a set of co-invocation event O={o=(su,sv,t)|su,sv∈C+ and su≠sv} is generated from C+ as the training samples of the unsupervised task. While in the inference stage, the input of DySR model is a requirement r, and the output is a set of recommended component services ^C={^s1,^s2,…,^sm}.
###
Iii-B Supervised Service Bundle Recommendation
For a requirement r at time t, we denote the latent representation as vr∈Rdr, a dense vector obtained by a certain encoding method (e.g. text encoding using pre-trained language model), where dr is the dimension of the requirement’s latent representation. We use z¯ts∈Rds to denote the recently updated latent representation of candidate service s just before t, where ds is the dimension of each service’s latent representation. And Z¯t is all candidate services latent representation. The obtaining of Z¯t is discussed in detail in Section [III-C](#S3.SS3 "III-C Unsupervised Evolving Service Representation ‣ III Dynamic Representation Learning and Aligning based Model ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation").
####
Iii-B1 Transformation Function

Fig. 3: Illustration of transformation function
As we state in Section [I](#S1 "I Introduction ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation"), a key problem in real-world service recommendation is to eliminate the gap between requirement representations and service representations. Transformation function Ψ can be learned to align the representation space to solve this problem, and the overall idea is illustrated in Fig. [3](#S3.F3 "Fig. 3 ‣ III-B1 Transformation Function ‣ III-B Supervised Service Bundle Recommendation ‣ III Dynamic Representation Learning and Aligning based Model ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation").
In this paper, we use affine transformation[2016Affine] to implement the transformation function Ψ:
| | | | |
| --- | --- | --- | --- |
| | Ψ(vr)=WΨ⋅vr+bΨ | | (1) |
where WΨ∈Rdr×ds is affine transformation matrix and bΨ∈Rds is translation vector. The main advantage of affine transformation is preserves collinearity and ratios of distance, which means that similar requirement remain similar after affine transformation.
####
Iii-B2 Requirement-Service Matching Function
After transforming the requirement representation into the same semantic space of the service representations. We designed a requirement-service matching function that first fuses the representations of requirement r and candidate service s under the same semantic space and then outputs the probability that s is a component service of r at time t, ^ytr,s:
| | | | |
| --- | --- | --- | --- |
| | Λ(Ψ(vr),z¯ts)=z¯ts⋅WΛ⋅v⊺r | | (2) |
where ⊺ is vector transpose operation and WΛ∈Rds×ds is a trainable parameter. Since we want to know the probability, we adopt a sigmoid activation function in the output the requirement-service matching function:
| | | | |
| --- | --- | --- | --- |
| | ^ytr,s=σ(Λ(Ψ(vr),z¯ts)) | | (3) |
####
Iii-B3 Loss Function
For a given requirement r with component services C+={s1,s2,…,sn} at time t, we minimize the following loss function:
| | | | |
| --- | --- | --- | --- |
| | L1=−∑s∈C+∪C−ytr,slog^ytr,s+(1−ytr,s)log(1−^ytr,s) | | (4) |
where ytr,s∈{0,1} denotes whether s is a component service of r at time t, and C− denotes a set of negative samples with services that are not component services of r at time t. Usually, a requirement is satisfied using a limited number of services, but the number of candidate services is much larger than the number of services required. So it is not appropriate to use all unselected services as negative samples, and we select negative samples of number |C−|=6|C+| by random sampling.
###
Iii-C Unsupervised Evolving Service Representation
Unlike existing approaches that maintain a requirement-service invocation graph, DySR only needs to maintain a dynamic service co-invocation graph, which is more practical:
* Compared to services, requirements are one-off and grows rapidly in volume, making the requirement-service invocation graph very large and sparse over time. On the one hand, this does not facilitate the extraction of information, and on the other hand, requires more expensive hardware resources.
* Requirements are usually created by users, which may involve privacy and security issues. For example, users may not want their requirements to be available to other users. Service co-invocation graph can solve this problem, as it only records co-invocations between services but not specific requirement content.
The service co-invocation graph can be represented as a dynamic adjacency matrix At∈R|Ct|×|Ct|, where Ct is the set of services at time t, and Atsu,sv is the number of times service su and sv have been co-invoked before time t. The phenomenon of service evolution can be reflected by the evolution of the service co-invocation graph, which can be seen as a co-invocation event-driven temporal point process. We use o=(su,sv,t) to represent a co-invocation event, which means su and sv are co-invoked at time t.
We use a variant of the DyRep[trivedi2019dyrep] model to learn the service representation on a dynamic service co-invocation graph. For a given co-invocation event o=(su,sv,t)∈O, the main training steps consist of the following:
####
Iii-C1 Service Representation Update
When an co-invocation event occurs, the representation of participating service su is updated based on the three terms of Self-propagation, Exogenous Drive and Attention-based Aggregation. Specially, for an event of service su at time t, updating ztsu as:
| | | | |
| --- | --- | --- | --- |
| | zsu(t)=σ(Wah¯trec(su)Aggregation+Wrecz¯tsusuSelf−propagation+Wt(t−¯tsu)Exogenous Drive) | | (5) |
where Wa∈Rds×ds, Wrec∈Rds×ds and Wt∈Rds are learned parameters used to control the effect of above-mentioned three terms on the computation of service representation, respectively. σ(⋅)) is a nonlinear function. z¯tsusu is the previous representation of service su. ¯t denotes the time point just before current event time t and ¯tsu represent the time point of last co-invocation event involved su. h¯trec(su)∈Rd is the output representation obtained from the aggregation of service su’s neighbors N¯tsu={sv:A¯tsu,sv>0}:
| | | | |
| --- | --- | --- | --- |
| | h¯trec(su)=max(softmax(S¯tsu)sr(Whz¯tusr),∀sr∈Nsu)) | | (6) |
where Wh∈Rds×ds is learned parameters. Temporal attention S¯tsu is used to control the amount of information propagated from service su’s neighbors, which is updated by a hard-coded algorithm [1](#algorithm1 "Algorithm 1 ‣ III-C1 Service Representation Update ‣ III-C Unsupervised Evolving Service Representation ‣ III Dynamic Representation Learning and Aligning based Model ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation") adopted from DyRep[trivedi2019dyrep].
Input : Co-invocation event record o=(su,sv,t) ;
Co-invocation event intensity λtsu,sv ;
Service representations of previous time Z¯t ;
Most recently updated A¯t and S¯t ;
Output : Updated At and St;
1 St=S¯t
2 At=A¯t
3 if *Atsu,sv=0* then
4 Atsu,sv=Atsv,su=1
5 for *each j∈{su,sv}* do
6 b=1|Ntj|
7 y←Stj
8 if *A¯tsu,sv>0* then
9 /\* i is the other service involved in the event. \*/
10 yi=b+λtsu,sv
11
12 else
13 b′=1|N¯tj|
14 x=b′−b
15 yi=b+λtsu,sv
16 yw=yw−x∀w≠i,yw≠0
17
18 end if
19 Normalize y and set Stj←y
20
21 end for
return At and St
Algorithm 1 Update Algorithm for S and A
Conditional intensity λtsu,sv modes the occurrence of co-invocation between su and sv at time t:
| | | | |
| --- | --- | --- | --- |
| | λtsu,sv=ψlog(1+exp{ω⊺[z¯tsu;z¯tsv]ψ}) | | (7) |
where ψ is trainable scalar parameter, which denotes the rate of events arising from a point process, and ω∈R2ds is designed to learn time-scale specific compatibility. [;] denotes concatenation.
####
Iii-C2 Loss Function
For a given set O of P co-invocation events, we learn parameters Ω={Wa,Wrec,Wt,Wh,ψ,ω} by minimizing the following loss function:
| | | | |
| --- | --- | --- | --- |
| | L2=Levents+Lnonevents | | (8) |
where Levents=−∑Pplog(λop) is the total negative log of the intensity rate for all co-invocation events between service spu and service spv; Lnonevents=∫T0Γ(τ)dτ=∑Mmlog(λom) represent total survival probability for co-invocation events that do not happen. It is intractable to compute all non-positive nonevent, we use Monte Carlo method to sample a subset to compute this term, with following [trivedi2019dyrep] setting M=5P.
###
Iii-D GAN-style training step
To model the process that requirements trigger service evolution and that service evolution reacting the selection of services by requirements, we adopt a GAN-style training step. Specifically, we use two independent Adam optimizers[kingma2014adam] opt1 and opt2 to optimize the parameters {Ψ,Λ} in the supervised service recommendation task and parameters Ω in the unsupervised evolving service representation task. It is also important to note that we perform supervised task optimization for each batch of samples before unsupervised task optimization. . The pseudo-code for the GAN-style training algorithm is given in Algorithm [2](#algorithm2 "Algorithm 2 ‣ III-D GAN-style training step ‣ III Dynamic Representation Learning and Aligning based Model ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation").
Input : Requirement sample set Y ;
Number of epochs n ;
Parameter sets {Ψ,Λ,Ω} ;
Optimizers opt1 and opt2 ;
Output : Updated parameter sets {Ψ,Λ,Ω};
1 for *epoch = 1,…,n* do
2 for *each requirement (vr,C+)∈Y* do
3 /\* Optimize supervised service recommendation task related parameters \*/
4 Do forward step described in Section [III-B](#S3.SS2 "III-B Supervised Service Bundle Recommendation ‣ III Dynamic Representation Learning and Aligning based Model ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation")
5 Update {Ψ,Λ} to minimize L1 in Eq. [4](#S3.E4 "(4) ‣ III-B3 Loss Function ‣ III-B Supervised Service Bundle Recommendation ‣ III Dynamic Representation Learning and Aligning based Model ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation") with opt1
6 /\* Optimize unsupervised unsupervised evolving service dynamic task related parameters \*/
7 Generate co-invocation event set O from C+
8 for *each co-invocation evnet (su,sv,t)∈O* do
9 Do forward step described in Section [III-C](#S3.SS3 "III-C Unsupervised Evolving Service Representation ‣ III Dynamic Representation Learning and Aligning based Model ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation")
10 Update Ω to minimize L2 in Eq. [8](#S3.E8 "(8) ‣ III-C2 Loss Function ‣ III-C Unsupervised Evolving Service Representation ‣ III Dynamic Representation Learning and Aligning based Model ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation") with opt2
11
12 end for
13
14 end for
15
16 end for
return {Ψ,Λ,Ω}
Algorithm 2 Traning Algorithm for DySR
Iv Experiment Settings
-----------------------
###
Iv-a Dataset & Metrics
We evaluate the proposed DySR model on the real-world ProgrammableWeb dataset, which is also the dataset used in existing service recommendation studies.
ProgrammableWeb: The dataset is the largest online Web service registry. We collected a total of 23,520 APIs and 7,947 mashups on Oct 10, 2020. The mashups without functional description, the services that have not been invoked, and the mashups with fewer than two component services were removed. The experimental dataset contains 3,380 mashups, whose functional descriptions are used as requirements, and 720 APIs. We sorted the mashups by when they were created, and we initialize the adjacency matrix A using the co-invocations from the earliest 300 mashups. The next 2,400 mashups are used as the training set, and the remaining 680 mashups are used as the test set.
We adopted the following evaluation metrics to measure the recommendation performance:
| | | | |
| --- | --- | --- | --- |
| | Precision@N=1|R|∑r∈R|^Cr∩Cr||^Cr| | | (9) |
| | | | |
| --- | --- | --- | --- |
| | Recall@N=1|R|∑r∈R|^Cr∩Cr||Cr| | | (10) |
| | | | |
| --- | --- | --- | --- |
| | F1@N==1|R|∑r∈R|^Cr∩Cr||Cr|+|^Cr| | | (11) |
where R is the set of requirements in the test set and |R| denotes the size of R. For requirement r, ^Cr is the recommended services, while Cr is its actual component services.
###
Iv-B Implementation Details
We use bert-base-uncased provided by Transformers[wolf-etal-2020-transformers] to obtain requirement representation with the dimension dr set to 768, and it should be noted that we do not do fine-tune on bert-base-uncased. Service representations Z0 are initialized by a 128d Word2Vec[mikolov2013word2vec] word embedding trained on text8444<http://mattmahoney.net/dc/text8.zip>. We use the two different pre-trained language models to reflect the representation gap between requirement and service. It should be noted that the initial representation of the service in the DySR model can theoretically be obtained using other information or in a random way. We implement a DySR variant called DySR-Rand, which does not need any prior knowledge of service and uses a randomly initialized representation of the service.
Gradient clipping is used in opt2 to avoid gradient explosion, and the clipping value is set to 100. We do not use dropout, and batch size is set to 50. We conduct five independent experiments for each approach to preventing serendipity, and early-stop is applied to avoid over-fitting. All the results reported are average results.
| | | |
| --- | --- | --- |
| Performance comparison of different approaches.
(a) Precision@N
| Performance comparison of different approaches.
(b) Recall@N
| Performance comparison of different approaches.
(c) F1@N
|
Fig. 4: Performance comparison of different approaches.
###
Iv-C Baselines
To evaluate the effectiveness of model, we select five state-of-the-art service recommendation approaches:
1. AFUP[jain2015aggregating]: This approach first leverage probabilistic topic models to compute relevance between a service and a given requirement. description. And then use collaborative filtering to estimate the probability of a service being used by existing similar requirements. Finally, the multiplies these two term based on Baye’s theorem to rank candidates service.
2. SFTN[samanta2017recommending]: This approach extend AFUP by using hierarchical dirichlet process (HDP) and probabilistic matrix factorization (PMF) to tackle cold start issues and usage history.
3. PNCF[chen2018software]: This approach use multi-layer perceptron to capture the non-linear user-item relationships and obtain abstract data representation from sparse vectors. However, text features were not considered in their original version, so we constructed two variants: PNCF-HDP using HDP adopted in SFTN to obtain text features, and PNCF-Deep using a pre-trained language model to obtain text features.
4. MISR[ma2021deep]: This approach propose a deep neural network that can captures multiplex interactions between services and requirements to extract hidden structures and features for better recommendation performance.
It should be noted that most baselines do not provide official code, and we can only reproduce it as described in their papers. In some cases, we were not able to reproduce the results they reported, possibly due to different ways of dividing the dataset and missing important parameter values. In these cases, we chose to directly compare the results reported in [ma2021deep].
V Results & Discussion
-----------------------
###
V-a Overview
In this section, we give an overview performance comparison between our proposed DySR and baseline approaches. Fig. [4](#S4.F4 "Fig. 4 ‣ IV-B Implementation Details ‣ IV Experiment Settings ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation") shows the performance comparison of different approaches, showing that the DySR and DySR-Rand outperform all the five baselines across all evaluation metrics.
The AFUP and PNCF-HDP performed the worst of all the baselines for the following two main reasons: 1) They use a topic model to extract service/requirement representation from the description text, which ignores the order of words and further leads to lost semantic information; 2) Rough handling of service historical usage information. The introduction of probabilistic matrix factorization allows SFTN to handle historical usage information somewhat better, but the poor service/requirement representations obtained by the topic model remain limiting to its performance. The PNCF-Deep and MISR perform better than the other baselines because they use a pre-trained language model to obtain better representations of the services/requirements. In addition, MISR performs the best of all the benchmark methods because it takes into account multiple types of interactions between services and requirements.
Although DySR-Rand does not require any a priori knowledge of the service. DySR-Rand obtains competitive results with DySR, which shows that our proposed model is well suited to eliminate the gap between requirements and services.
DySR and DySR-Rand require less information than the baselines, while their performance is significantly better than all baselines. For example, all baselines need to maintain a requirement-service invocation matrix, while DySR only needs to maintain a small-scale service co-invocation matrix, and MISR additionally requires the tags of the service and historical requirements, which are not needed in DySR. Compared to the best performing
baseline approach MISR, which requires the most information, DySR (DySR-Rand) improves the Precision@5, Recall@5 and F1@5 metrics by 28.4% (25.8%), 34.1% (30%) and 33.2% (30%), respectively. The performance improvements mainly benefit from evolving service representation and transformation function that allows the use of time information and the acquisition of better representation of services and requirements in same vector space.
| | | |
| --- | --- | --- |
| Performance comparison of different variants of DySR.
(a) Precision@N
| Performance comparison of different variants of DySR.
(b) Recall@N
| Performance comparison of different variants of DySR.
(c) F1@N
|
Fig. 5: Performance comparison of different variants of DySR.
###
V-B Ablation Study & Qualitative Performance
DySR unifies several components that contribute to its effectiveness in service recommendation. In this section, we provide insights on evolving service representation component and transformation component and how they are indispensable to the service recommendation by performing an ablation study on various design choices of DySR. We designed the following variants of DySR for comparison:
* DySR-Static: In this variant, we do not perform unsupervised evolving service representation tasks so that the service representation remains initialised, i.e. Zt≡Z0.
* DySR-Space: In this variant, we turn off transformation function (Eq. [1](#S3.E1 "(1) ‣ III-B1 Transformation Function ‣ III-B Supervised Service Bundle Recommendation ‣ III Dynamic Representation Learning and Aligning based Model ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation")) and directly use the original requirement representation and the dynamic service representation as inputs to the requirement-service matching function (Eq. [2](#S3.E2 "(2) ‣ III-B2 Requirement-Service Matching Function ‣ III-B Supervised Service Bundle Recommendation ‣ III Dynamic Representation Learning and Aligning based Model ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation")). The dimension of the corresponding requirement-service matching matrix WΛ sets to dr×ds.
* DySR-None: In this variant, we do not perform unsupervised evolving service representation task and turn off transformation function.
The comparison among the variants of our approach is shown in Fig. [5](#S5.F5 "Fig. 5 ‣ V-A Overview ‣ V Results & Discussion ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation"). Both DySR-Space and DySR-Static outperform DySR-None on Precision@N, Recall@N and F1@N, suggesting that evolving service representation and transformation function do contribute to the recommended performance. DySR outperforms DySR-Space and DySR-static indicating that evolving service representation and transformation function are complementary.
| | |
| --- | --- |
| UMAP for static service representation and evolving service representation.
(a) Static Service Representation
| UMAP for static service representation and evolving service representation.
(b) Evolving Service Representation
|
Fig. 6: UMAP for static service representation and evolving service representation.
We conduct a series of qualitative analyses to understand how evolving service representation and transformation function contribute to recommendation performances. We first compare our evolving service representation against the static content-based service representation. Fig. [6](#S5.F6 "Fig. 6 ‣ V-B Ablation Study & Qualitative Performance ‣ V Results & Discussion ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation") shows the UMAP[mcinnes2018umap-software] embeddings leaned by DySR (right) and initialised by service function description. The visualization demonstrates the evolving service representation have more discriminative power as it can effective capture the distinctive and evolving patterns of service combinations (line clusters in Fig. [6(b)](#S5.F6.sf2 "(b) ‣ Fig. 6 ‣ V-B Ablation Study & Qualitative Performance ‣ V Results & Discussion ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation")) as well as outdated sets of services (block clusters in Fig. [6(b)](#S5.F6.sf2 "(b) ‣ Fig. 6 ‣ V-B Ablation Study & Qualitative Performance ‣ V Results & Discussion ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation")) with empirical evidence.
| | | | |
| --- | --- | --- | --- |
| UMAP for explaining how DySR eliminates the representation gap between service and requirement.
(a) Static Services & Origin Requirements (DySR-None)
| UMAP for explaining how DySR eliminates the representation gap between service and requirement.
(b) Static Services & Aligned Requirements (DySR-Static)
| UMAP for explaining how DySR eliminates the representation gap between service and requirement.
(c) Evolving Services & Origin Requirements (DySR-Space)
| UMAP for explaining how DySR eliminates the representation gap between service and requirement.
(d) Evolving Services & Aligned Requirements (DySR)
|
Fig. 7: UMAP for explaining how DySR eliminates the representation gap between service and requirement.
Fig. [7](#S5.F7 "Fig. 7 ‣ V-B Ablation Study & Qualitative Performance ‣ V Results & Discussion ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation") explains how DySR eliminates the representation gap between service and requirement. Fig. [7(a)](#S5.F7.sf1 "(a) ‣ Fig. 7 ‣ V-B Ablation Study & Qualitative Performance ‣ V Results & Discussion ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation") demonstrates the gap between the representation of requirements and the representation of service that we stated in Section [I](#S1 "I Introduction ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation"). Compared with Fig. [7(a)](#S5.F7.sf1 "(a) ‣ Fig. 7 ‣ V-B Ablation Study & Qualitative Performance ‣ V Results & Discussion ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation"), the requirement representation space is much closer to the space of service representation in Fig [7(b)](#S5.F7.sf2 "(b) ‣ Fig. 7 ‣ V-B Ablation Study & Qualitative Performance ‣ V Results & Discussion ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation"), which shows that the transformation function in DySR does eliminate the difference between requirement and service representations to some extent. The same conclusion can be obtained by comparing Fig. [7(c)](#S5.F7.sf3 "(c) ‣ Fig. 7 ‣ V-B Ablation Study & Qualitative Performance ‣ V Results & Discussion ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation") and Fig. [7(d)](#S5.F7.sf4 "(d) ‣ Fig. 7 ‣ V-B Ablation Study & Qualitative Performance ‣ V Results & Discussion ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation"). Fig. [7(c)](#S5.F7.sf3 "(c) ‣ Fig. 7 ‣ V-B Ablation Study & Qualitative Performance ‣ V Results & Discussion ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation") and Fig. [7(d)](#S5.F7.sf4 "(d) ‣ Fig. 7 ‣ V-B Ablation Study & Qualitative Performance ‣ V Results & Discussion ‣ DySR: A Dynamic Representation Learning and Aligning based Model for Service Bundle Recommendation") can also illustrate that evolving service representations reduce the probability of selecting outdated services by moving them away from requirement representation space thereby improving recommendation performance.
Vi Conclusion
--------------
In this paper, we propose an end-to-end deep learning model called DySR for cold-start service recommendations. DySR solves the service evolution problem by introducing evolving service representation, eliminating the gap between services and requirements through transformation functions, and solving the cold start problem through a requirement-service matching function. Experiments on a real-world dataset demonstrated that the proposed approach significantly outperforms several state-of-the-art service recommendation methods regarding three evaluation metrics. In
future work, we will try to show theoretically why evolving service representation and space alignment have an impact on service performance.
Acknowledgment
--------------
The research in this paper is partially supported by the National Key Research and Development Program of China (No 2018YFB1402500) and the National Science Foundation of China (61772155, 61832004, 61802089, 61832014). |
d0e3cd40-26d9-4eb5-ac61-35c6e0586555 | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Dreams of Autarky
Today's post, Dreams of Autarky was originally published on November 27, 2008. A summary:
> Some selections from a Robin Hanson essay on the tendency to assume that people in the future will be substantially autonomous.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Total Nano Domination, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
a54b2215-0b2f-41b5-895e-a14937d1d0bb | trentmkelly/LessWrong-43k | LessWrong | Global Warming News
A new peer reviewed article came out from NASA showing that the models used to predict temperature changes, and the ecological issues as a result of these changes, greatly differ from observed data.
News Article: http://news.yahoo.com/nasa-data-blow-gaping-hold-global-warming-alarmism-192334971.html
Actual Paper: http://www.mdpi.com/2072-4292/3/8/1603/pdf |
3a0ac9c1-0789-421a-8114-32ac12f73ae8 | trentmkelly/LessWrong-43k | LessWrong | Beautiful Probability
Should we expect rationality to be, on some level, simple? Should we search and hope for underlying beauty in the arts of belief and choice?
Let me introduce this issue by borrowing a complaint of the late great Bayesian Master, E. T. Jaynes (1990):
> "Two medical researchers use the same treatment independently, in different hospitals. Neither would stoop to falsifying the data, but one had decided beforehand that because of finite resources he would stop after treating N=100 patients, however many cures were observed by then. The other had staked his reputation on the efficacy of the treatment, and decided he would not stop until he had data indicating a rate of cures definitely greater than 60%, however many patients that might require. But in fact, both stopped with exactly the same data: n = 100 [patients], r = 70 [cures]. Should we then draw different conclusions from their experiments?" (Presumably the two control groups also had equal results.)
According to old-fashioned statistical procedure - which I believe is still being taught today - the two researchers have performed different experiments with different stopping conditions. The two experiments could have terminated with different data, and therefore represent different tests of the hypothesis, requiring different statistical analyses. It's quite possible that the first experiment will be "statistically significant", the second not.
Whether or not you are disturbed by this says a good deal about your attitude toward probability theory, and indeed, rationality itself.
Non-Bayesian statisticians might shrug, saying, "Well, not all statistical tools have the same strengths and weaknesses, y'know - a hammer isn't like a screwdriver - and if you apply different statistical tools you may get different results, just like using the same data to compute a linear regression or train a regularized neural network. You've got to use the right tool for the occasion. Life is messy -"
And then there |
046096be-5055-4a62-989c-7889a1ecabfa | trentmkelly/LessWrong-43k | LessWrong | Offer: Team Conflict Counseling for AI Safety Orgs
Many teams suffer from frictions - whether it's subtle tension or full-blown paralysis. AI safety orgs are no exception. In high-stakes, mission-driven contexts, unspoken tensions and misaligned communication can quietly drain energy, stall progress, and damage morale.
If you’re part of a team or organization in the AI safety space and want support—either to resolve an acute conflict or to build a more resilient team culture—I can help.
About me
I'm Severin. I hold an M.A. in philosophy, and I work as a coach and communication trainer.
My relevant background includes:
* Certified counseling trainer (Kooperative Beratung) — including training in a method literally called Team Conflict Counseling, designed for teams in educational settings, but highly transferable.
* Certified in Theme-Centered Interaction, a group facilitation method grounded in humanistic psychology.
* Several hundred hours of receiving and facilitating relational and inner work modalities, including Circling, Focusing, IFS, Authentic Relating, and Nonviolent Communication.
* Writing about facilitation, community building, and other people stuff on LessWrong and the EA Forum.
How I can help
If things are already on fire...
I can support your team in navigating acute conflict, through:
* Team-wide facilitation
* Mediation between two individuals
* 1-on-1 coaching for someone caught in a tough situation
I work confidentially and neutrally - my role isn’t to pick sides or impose solutions, but to facilitate clearer understanding, mutual empathy, and conditions where creative resolution can emerge.
If you want to build long-term team health...
I also offer preventive support, such as:
* Feedback training and communication workshops
* Process consulting to streamline culture and decision-making
* Burnout prevention and resilience work (e.g., Active Hope workshops)
Pricing
I know not all valuable work in AI safety is well-funded. So I use a sliding scale:
10€–100€ per hour, depen |
3d6fd4be-2f6c-4713-a102-2a0024c731c1 | trentmkelly/LessWrong-43k | LessWrong | Green Emeralds, Grue Diamonds
A putative new idea for AI control; index here.
When posing his "New Riddle of Induction", Goodman introduced the concepts of "grue" and "bleen" to show some of the problems with the conventional understanding of induction.
I've somewhat modified those concepts. Let T be a set of intervals in time, and we'll use the boolean X to designate the fact that the current time t belongs to T (with ¬X equivalent to t∉T). We'll define an object to be:
* Grue if it is green given X (ie whenever t∈T), and blue given ¬X (ie whenever t∈T).
* Bleen if it is blue given X, and green given ¬X.
At this point, people are tempted to point out the ridiculousness of the concepts, dismissing them because of their strange disjunctive definitions. However, this doesn't really solve the problem; if we take grue and bleen as fundamental concepts, then we have the disjunctively defined green and blue; an object is:
* Green if it is grue given X, and bleen given ¬X.
* Blue if it is bleen given X, and grue given ¬X.
Still, the categories green and blue are clearly more fundamental than grue and bleen. There must be something we can whack them with to get this - maybe Kolmogorov complexity or stuff like that? Sure someone on Earth could make a grue or bleen object (a screen with a timer, maybe?), but it would be completely artificial. Note that though grue and bleen are unnatural, "currently grue" (colour=green XOR ¬X) or "currently bleen" (colour=blue XOR ¬X) make perfect sense (though they require knowing X, an important point for later on).
But before that... are we so sure the grue and bleen categories are unnatural? Relative to what?
Welcome to Chiron Beta Prime
Chiron Beta Prime, apart from having its own issues with low-intelligence AIs, is noted for having many suns: one large sun that glows mainly in the blue spectrum, and multiple smaller ones glowing mainly in the green spectrum. They all emit in the totality of the spectrum, but they are stronger in those colours.
Be |
2f9e5238-6482-48ac-9db5-dc83c30a0988 | trentmkelly/LessWrong-43k | LessWrong | [Link] Is the Endowment Effect Real?
Under fairly weak assumptions, the most a standard rational economic agent is willing to pay for an item they don't own (WTP) and the least they're willing to accept in exchange for that item if they already own it (WTA) should be identical. In experiments with humans, psychologists and economists have repeatedly found WTP-WTA gaps suggesting that humans aren't rational in at least this specific way. This has been interpreted as the endowment effect* and evidence for prospect theory. According to prospect theory, people are loss averse. Roughly this means that that, given their current ownership set, people value not losing stuff more highly than gaining stuff. Thus once someone gains ownership of something they suddenly value it much more highly. This "endowment effect"* on one's valuation of an item has been put forth as an explanation for the observed WTP - WTA gaps.
*Wikipedia confusingly defines the endowment effect as the gap itself, i.e. as the phenomena to be explained instead of the explanation. I suspect this is a difference in terminology among economists and psychologists, where psychologists use the wiki definition and economists use the definition I give here. However, calling the WTP-WTA gap an "endowment effect" is a bit misleading because a priori the gap may not have anything to endowments at all.
A paper (pdf) by Charlie Plott and Kathryn Zeiler investigates WTP-WTA gaps and it turns out that they may just be due to subjects not quite understanding the experimental protocols, particularly in the value elicitation process. Here's an important quote from their conclusion, but do read the paper for details:
> The issue explored here is not whether a WTP-WTA gap can be observed. Clearly, the experiments of KKT and others show not only that gaps can be observed, but also that they are replicable. Instead, our interest lies in the interpretation of observed gaps. The primary conclusion derived from the data reported here is that observed WTP-WTA gap |
34dae06b-b22a-4435-ac35-ec31b1f5b618 | trentmkelly/LessWrong-43k | LessWrong | Ideas of the Gaps
I want to discuss a potential failure mode of communication and thinking in general. I think it may affect our thinking about Alignment too.
Communication. A person has a vague, but useful idea (P). This idea is applicable on one level of the problem. It sounds similar to another idea (T), applicable on a very different level of the problem. Because of the similarity nobody can understand the difference between (P) and (T), even though they're on different levels. People end up overestimating the vagueness of (P) and not even considering it. Because people aren't used to mapping ideas to "levels" of a problem. Information that has to give more clarity (P is similar to T) ends up creating more confusion. I think this is irrational, it's a failure of dealing with information.
Thinking in general. A person has a specific idea (T) applicable on one level of a problem. The person doesn't try to apply a version of this idea on a different level. Because (1) she isn't used to it (2) she considers only very specific ideas, but she can't come up with a specific idea for other levels. I think this is irrational: rationalists shouldn't shy away from vague ideas and evidence. It's a predictable way to lose.
A comical example of this effect:
* A: I got an idea. We should cook our food in the oven. Using the oven itself. I haven't figured out all the details yet, but...
* B: We already do this. We put the food in the oven. Then we explode the oven. You can't get more "itself" than this.
* A: I have something else on my mind. Maybe we should touch the oven in multiple places or something. It may turn it on.
* B: I don't want to blow up with the oven!
* A: We shouldn't explode the oven at all.
* B: But how does the food get cooked?
* A: I don't know the exact way it happens... but I guess it gets heated.
* B: Heated but not exploded? Sounds like a distinction without a difference. Come back when you have a more specific idea.
* A: But we have only 2 ovens left, we can' |
0dd93c9a-a380-4d06-baa0-009e602cf9a7 | trentmkelly/LessWrong-43k | LessWrong | Rage Against The MOOChine
[This was a review of Andrew Ng's Machine Learning course for a math and computer science guide I was making that ended up spiraling into a rant against MOOC's (massive open online course), I left it un-posted for a while but I think it remains mostly true what the problems are with the current MOOC paradigm.]
Course: Machine Learning, Andrew Ng
I started out excited for this course and left disappointed with a changed outlook on online learning.
I’ll start off saying that I liked the theory section in this course. I think Andrew did a good job of explaining the theory of why he was doing the things he was doing. I liked the explanations of supervised and unsupervised learning. The later chapters on recommender systems and principal component analysis were also very interesting.
But the problem is that other than a brief overview of the skills there was not much other depth to the course. The course glazed over math topics like how to find the partial derivative of the cost function for gradient descent. As somebody without a rigorous math background I was fine with that because I don’t know how to do partial differential equations. Then we got to programming in Octave, during the lecture he said it was better this way because it’s faster and it doesn’t really pick sides of which data science programming language you should use. The programming exercises being in octave could be overlooked if the programming exercises were good, but sadly they aren't.
The programming assignments don’t teach you anything, at least not anything I could at this point say is useful. Usually they are some form of transcribing the loss function of the model you are learning from its mathematical representation to it’s coding representation. With everything usually given to you as a hint at the top of the problem in comments, or it’s vectorized form mentioned in the document you are given to work through the problem.
Which made me feel like I was mostly doing this for each codin |
59ff97c7-6abf-4e1a-9421-2a68436a4cec | trentmkelly/LessWrong-43k | LessWrong | One Doubt About Timeless Decision Theories
Timeless Decisions Theories (including variants like FDT, UDT, ADT, ect.) provide a rather elegant method of solving a broader class of problems than CDT. While CDT requires the outcomes of decisions to be independent of the individual making the decision (in such a way that causal surgery on a single node is valid), timeless decisions theories can handle any problem where the outcome is a function of the choice selected (even if this occurs indirectly as a result of a prediction).
(Epistemic Status: Thoughts for further investigation)
This is an excellent reason to investigate these decision theories, yet we need to make sure that we don't get blinded by insight. Before we immediately jump to conclusions by taking this improvement, it is worthwhile considering what we give up. Perhaps there are other classes that we might which to optimise over which we can no longer optimise over once we have included this whole class?
After all, there is a sense in which there is no free lunch. As discussed in the TDT paper, for any algorithm, we could create a situation where there is an agent that specifically punished that algorithm. The usual response is that these situations are unfair, but a) the universe is often unfair b) there are plausible situations where the algorithm chosen influences the outcome is slightly less unfair ways.
Expanding on b), there are times when you want to be predictable to simulators. Indeed, I can even imagine agents that wish to eliminate agents that they can't predict. Further, rather than facing a perfect predictor, it seems like it'll be at least a few orders of magnitude more likely that you'll face an imperfect predictor. Modelling these as X% perfect predictor, 100-X% random predictor will usually be implausible as predictors won't have a uniform success rate over all algorithms. These situations are slightly more plausible for scenarios involving AI, but even if you perfectly know an agent's source code, you are unlikely to know its e |
52f96df2-5e44-4c3b-a214-9a3c2e8737e3 | trentmkelly/LessWrong-43k | LessWrong | AGI will be made of heterogeneous components, Transformer and Selective SSM blocks will be among them
This post is prompted by two recent pieces:
First, in the podcast "Emergency Pod: Mamba, Memory, and the SSM Moment", Nathan Labenz described how he sees that we are entering the era of heterogeneity in AI architectures because currently we have not just one fundamental block that works very well (the Transformer block), but two kinds of blocks: the Selective SSM (Mamba) block has joined the party. These are natural opposites on the tradeoff scale between episodic cognitive capacity (Transformer's strong side) and long-term memorisation (selective SSM's strong side).[1]
Moreover, it's demonstrated in many recent works (see the StripedHyena blog post, and references in appendix E.2.2. of the Mamba paper) that hybridisation of Transformer and SSM blocks works better than a "pure" architecture composed of either of these types of blocks. So, we will probably quickly see the emergence of complicated hybrids between these two.[2]
This reminds me of John Doyle's architecture theory that predicts that AI architectures will evolve towards modularisation and component heterogeneity, where the properties of different components (i.e., their positions at different tradeoff spectrums) will converge to reflect the statistical properties of heterogeneous objects (a.k.a. natural abstractions, patterns, "pockets of computational reducibility") in the environment.
Second, in this article, Anatoly Levenchuk rehearses the "no free lunch" theorem and enumerates some of the development directions in algorithms and computing that continue in the shadows of the currently dominant LLM paradigm, but still are going to be several orders of magnitude more computationally efficient than DNNs in some important classes of tasks: multi-physics simulations, discrete ("system 2") reasoning (planning, optimisation), theorem verification and SAT-solving, etc. All these diverse components are going to be plugged into some "AI operating system", Toolformer-style. Then Anatoly posits an important co |
b070855d-ebc1-4a4f-a8f2-edf030549178 | trentmkelly/LessWrong-43k | LessWrong | Dependent Type Theory and Zero-Shot Reasoning
,,,,,,,,,,,,,,,,,,,,,,
I.
When I sit down to write code, I can safely throw everything I can think of at the problem and just keep trying to run the code until it works—and when it works, I can actually see it working. If my code runs, it means I actually had to understand enough of the moving parts to get it to run. This is especially true in languages like Haskell with strong type constraints preventing me from doing anything too stupid.
When I sit down to prove a theorem, I have to be much more careful. One misstep in my reasoning and my whole proof will be invalid, without me even knowing I made a mistake until I or someone else catches it later. If my proof goes through, it's possible I understood enough of the moving parts to make it work, but it's also possible I didn't.
The standard solution to the problem of not having something to test your proof against when doing math is to use a proof checker, which lets you turn an arbitrary math problem into a programming problem. Instead of having to just get the right answer without ever being able to actually check your answer, you get lots of chances to test possible solutions.
Proof checkers are great, but traditional proof checkers are also pretty limited. Concepts like equality require specialized machinery like paramodulation to properly resolve, introducing new data types requires writing out long lists of PA-like axioms and in general modern functional programming tools that make writing out math easier like pattern-matching, ADTs, monads, etc. are just missing. When I wrote my own theorem prover/proof checker a while ago I ran into exactly these sorts of problems.
Dependent type theory is the modern way to solve all of these problems. Both equality and the natural numbers are defined just by giving the constructors, no axioms needed. For-alls and implications are just functions. Proving theorems really just feels like writing Haskell code.
I'm not going to try and give a dependent type theory tu |
05bdd45d-fc6a-421b-9833-bf4997f97a21 | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | Free ML Bootcamp for Alignment #shorts
hey you know how nobody really
understands how to ensure the ai systems
we build are safe and aligned with human
values
are you someone who maybe would like to
try to work on that problem but you
don't know enough about machine learning
well an ai safety organization called
redwood research is running a machine
learning boot camp specifically for
people interested in ai alignment it's
called mlab and it's an all-expenses
paid in-person boot camp in berkeley
california between august the 15th and
september the second they're looking for
people to participate and also for
potential teaching assistants and
they're open to students or people who
are already working i might actually be
there myself if the timing works out
and last time they ran this boot camp
redwood research ended up hiring several
of the participants so it might actually
be a way into a career in ai safety
if you're interested look up redwood
mlab 2 and apply now because the
deadline is this friday may 27th |
2591cf40-3338-4d4e-b2a3-2e6fa561b240 | trentmkelly/LessWrong-43k | LessWrong | Preparing for "The Talk" with AI projects
Epistemic status: Written for Blog Post Day III. I don't get to talk to people "in the know" much, so maybe this post is obsolete in some way.
I think that at some point at least one AI project will face an important choice between deploying and/or enlarging a powerful AI system, or holding back and doing more AI safety research.
(Currently, AI projects face choices like this all the time, except they aren't important in the sense I mean it, because the AI isn't potentially capable of escaping and taking over large parts of the world, or doing something similarly bad.)
Moreover, I think that when this choice is made, most people in the relevant conversation will be insufficiently concerned/knowledgeable about AI risk. Perhaps they will think: "This new AI design is different from the classic models, so the classic worries don't arise." Or: "Fear not, I did [insert amateur safety strategy]."
I think it would be very valuable for these conversations to end with "OK, we'll throttle back our deployment strategy for a bit so we can study the risks more carefully," rather than with "Nah, we're probably fine, let's push ahead." This buys us time. Say it buys us a month. A month of extra time right after scary-powerful AI is created is worth a lot, because we'll have more serious smart people paying attention, and we'll have more evidence about what AI is like. I'd guess that a month of extra time in a situation like this would increase the total amount of quality-weighted AI safety and AI policy work by 10%. That's huge.
----------------------------------------
One way to prepare for these conversations is to raise awareness about AI risk and technical AI safety problems, so that it's more likely that more people in these conversations are more informed about the risks. I think this is great.
However, there's another way to prepare, which I think is tractable and currently neglected:
1. Identify some people who might be part of these conversations, and who already |
cdf17b4e-ad00-4ba5-a0aa-1c81b87b0a8c | trentmkelly/LessWrong-43k | LessWrong | Hertford, Sourbut (rationality lessons from University Challenge)
Amongst the huge range of excitements offered by joining the University of Oxford was the unexpected opportunity to join this lovely bunch
Hertford College University Challenge team 2023, with our mascot Simpkin the cat
You can tune in on 2023-09-04 at 20:30 UK time on BBC 2 or watch or catch up online if you want to see us in action.
As a relative quiz-noob[1], joining an elite quizzing team (hold your applause) was an eye-opening experience in a few ways. I'm not allowed to talk about how things went on the show (on pain of getting told off by the NDA police), but actually (as with all forms of performance and competition), the vast majority of the time was spent in prep and practice, which is where most of the insights came in anyway.
I'm going to talk a bit about University Challenge, and also gesture at how the experience as a competitive quizzer relates to broader theory and practice in decision-making under uncertainty. If you just want to see some fun quiz questions and my take at answering them, you can skip the middle Real-time calibrated decision-making section, or just skip reading this entirely and watch the show.
The format and some example questions
For readers unfamiliar with University Challenge, it's a competitive quiz, where each match consists of two teams head-to-head. Importantly for this discussion, a key part of the format is buzzer rounds ('Starter for 10'): that means you don't just have to know the answer, you have to know the answer and buzz before your opponent if they also know, otherwise you get nothing. But buzz too soon with a wrong answer and you lose points[2].
Here are some example questions. Maybe you know some of the answers! If you want to, imagine hearing the question word by word - when do you have a good guess or some ideas? At what point are you confident of the answer? Would you risk buzzing early and losing points if you're wrong - and on what basis?
I'll go through these examples later, and give the answers (my |
7bc7ff6d-9ddc-405b-8061-c889f0941636 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Learning-theoretic agenda reading list
Recently, I'm receiving more and more requests for a self-study reading list for people interested in the [learning-theoretic agenda](https://www.alignmentforum.org/posts/ZwshvqiqCvXPsZEct/the-learning-theoretic-agenda-status-2023). I created a standard list for that, but before now I limited myself to sending it to individual people in private, out of some sense of perfectionism: many of the entries on the list might not be the best sources for the topics and I haven't read all of them cover to cover myself. But, at this point it seems like it's better to publish a flawed list than wait for perfection that will never come. Also, commenters are encouraged to recommend alternative sources that they consider better, if they know any. So, without further adieu:
General math background
-----------------------
* "Introductory Functional Analysis with Applications" by Kreyszig (especially chapters 1, 2, 3, 4)
* "Computational Complexity: A Conceptual Perspective" by Goldreich (especially chapters 1, 2, 5, 10)
* "Probability: Theory and Examples" by Durret (especially chapters 4, 5, 6)
* "Elements of Information Theory" by Cover and Thomas (especially chapter 2)
* “Lambda-Calculus and Combinators: An Introduction” by Hindley
* “Game Theory: An Introduction” by Tadelis
AI theory
---------
* "Machine Learning: From Theory to Algorithms" by Shalev-Shwarz and Ben-David (especially part I and chapter 21)
* "Bandit Algorithms" by Lattimore and Szepesvari (especially parts II, III, V, VIII)
+ Alternative/complementary: "Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems" by Bubeck and Cesa-Bianchi (especially sections 1, 2, 5)
* “Prediction Learning and Games” by Cesa-Bianchi and Lugosi (mostly chapter 7)
* "Universal Artificial Intelligence" by Hutter
+ Alternative: "A Theory of Universal Artificial Intelligence based on Algorithmic Complexity” (Hutter 2000)
+ Bonus: “Nonparametric General Reinforcement Learning” by Jan Leike
* Reinforcement learning theory
+ "Near-optimal Regret Bounds for Reinforcement Learning" (Jaksch, Ortner and Auer, 2010)
+ "Efficient Bias-Span-Constrained Exploration-Exploitation in Reinforcement Learning" (Fruit et al, 2018)
+ "Regret Bounds for Learning State Representations in Reinforcement Learning" (Ortner et al, 2019)
+ “Efficient PAC Reinforcement Learning in Regular Decision Processes” (Ronca and De Giacomo, 2022)
+ “Tight Guarantees for Interactive Decision Making with the Decision-Estimation Coefficient” (Foster, Golowich and Han, 2023)
Agent foundations
-----------------
* "Functional Decision Theory" (Yudkowsky and Soares 2017)
* "Embedded Agency" (Demski and Garrabrant 2019)
* Learning-theoretic AI alignment research agenda
+ [Overview](https://www.alignmentforum.org/posts/ZwshvqiqCvXPsZEct/the-learning-theoretic-agenda-status-2023)
+ [Infra-Bayesianism sequence](https://www.lesswrong.com/s/CmrW8fCmSLK7E25sa)
- Bonus:[podcast](https://axrp.net/episode/2021/03/10/episode-5-infra-bayesianism-vanessa-kosoy.html)
+ “Online Learning in Unknown Markov Games” (Tian et al, 2020)
+ [Infra-Bayesian physicalism](https://www.lesswrong.com/posts/gHgs2e2J5azvGFatb/infra-bayesian-physicalism-a-formal-theory-of-naturalized)
- Bonus:[podcast](https://axrp.net/episode/2022/04/05/episode-14-infra-bayesian-physicalism-vanessa-kosoy.html)
+ [Reinforcement learning with imperceptible rewards](https://www.lesswrong.com/posts/aAzApjEpdYwAxnsAS/reinforcement-learning-with-imperceptible-rewards)
Bonus materials
---------------
* “Logical Induction” (Garrabrant et al, 2016)
* “Forecasting Using Incomplete Models” (Kosoy 2017)
* “Cartesian Frames” (Garrabrant, Herrman and Lopez-Wild, 2021)
* “Optimal Polynomial-Time Estimators” (Kosoy and Appel, 2016)
* “Algebraic Geometry and Statistical Learning Theory” by Watanabe |
0a70844d-ba26-4860-bae6-893c77f0aed9 | trentmkelly/LessWrong-43k | LessWrong | What I mean...
A putative new idea for AI control; index here.
This is a simple extension of the model-as-definition and the intelligence module ideas. General structure of these extensions: even an unfriendly AI, in the course of being unfriendly, will need to calculate certain estimates that would be of great positive value if we could but see them, shorn from the rest of the AI's infrastructure.
The challenge is to get the AI to answer a question as accurately as possible, using the human definition of accuracy.
First, imagine an AI with some goal is going to answer a question, such as Q="What would happen if...?" The AI is under no compulsion to answer it honestly.
What would the AI do? Well, if it is sufficiently intelligent, it will model humans. It will use this model to understand what they meant by Q, and why they were asking. Then it will ponder various outcomes, and various answers it could give, and what the human understanding of those answers would be. This is what any sufficiently smart AI (friendly or not) would do.
Then the basic idea is to use modular design and corrigibility to extract the relevant pieces (possibly feeding them to another, differently motivated AI). What needs to be pieced together is: AI understanding of what human understanding of Q is, actual answer to Q (given this understanding), human understanding of various AI's answers (using model of human understanding), and minimum divergence between human understanding of answer and actual answer.
All these pieces are there, and if they can be safely extracted, the minimum divergence can be calculated and the actual answer calculated. |
b9e5e541-ff87-4be0-b915-d71909b259f0 | trentmkelly/LessWrong-43k | LessWrong | Are we failing the ideological Turing test in the case of ISIS? (a crazy ideas thread)
It is easy to think of the ISIS as just a bunch of lunatics who kill people because they believe God told them to do it, but if we take a closer look at how they are organized and how successful they are, (and especially how successful they were at the beginning), this seems to be an oversimplification. Sure, most of their lowest level fighters are probably belonging to the "brainwashed and crazy" group, but I guess the leadership is almost certainly not. They know and use modern media very well, they are effective at recruitment, advertising, organization, and secrecy. Their successes are aided by the fact that they know how we think much better than how we know how they think.
Most of what they do seem to be very rational from a utilitarian point of view: they destroy pre-islamic historical monuments (which previous Islamic theocratic governments left intact) to show their supporters that they are in control and that they are serious, they try to trick NATO and the Russians to commit ground troops, so they can recruit the less radical Muslims to defend their homes against "foreign conquerors", and they cater for both the religious fanatics, and for the opportunists. They have many mercenaries on their side, simply because they can pay them better than others in the region. They also gain recruits by promising them wealth and power, so not all their rhetoric is strictly religiously motivated.
With the most repeated assumptions about their true goals and motivations being "they just want power", "they are just crazy", and "they just enjoy being evil", it seems that we are failing the ideological Turing test.
Therefore, I suggest a thread similar to the "Crazy Ideas Threads": let us assume, that the top leadership of ISIS is composed of completely rational and very intelligent individuals, and let's try to guess what their true goals and motivations are. I have a number of ideas, but I can find many arguments both for and against them. I encourage you to criticize |
701fbe89-cd4e-4c92-b5ee-1f9136b31a45 | trentmkelly/LessWrong-43k | LessWrong | Why OpenAI’s Structure Must Evolve To Advance Our Mission
The section "The Future":
> As we enter 2025, we will have to become more than a lab and a startup — we have to become an enduring company. The Board’s objectives as it considers, in consultation with outside legal and financial advisors, how to best structure OpenAI to advance the mission of ensuring AGI benefits all of humanity have been:
>
> 1. Choose a non-profit / for-profit structure that is best for the long-term success of the mission. Our plan is to transform our existing for-profit into a Delaware Public Benefit Corporation (PBC) with ordinary shares of stock and the OpenAI mission as its public benefit interest. The PBC is a structure used by many others that requires the company to balance shareholder interests, stakeholder interests, and a public benefit interest in its decisionmaking. It will enable us to raise the necessary capital with conventional terms like others in this space.
> 2. Make the non-profit sustainable. Our plan would result in one of the best resourced non-profits in history. The non-profit’s significant interest in the existing for-profit would take the form of shares in the PBC at a fair valuation determined by independent financial advisors. This will multiply the resources that our donors gave manyfold.
> 3. Equip each arm to do its part. Our current structure does not allow the Board to directly consider the interests of those who would finance the mission and does not enable the non-profit to easily do more than control the for-profit. The PBC will run and control OpenAI’s operations and business, while the non-profit will hire a leadership team and staff to pursue charitable initiatives in sectors such as health care, education, and science.
>
> We’ve learned to think of the mission as a continuous objective rather than just building any single system. The world is moving to build out a new infrastructure of energy, land use, chips, datacenters, data, AI models, and AI systems for the 21st century economy. We seek t |
bddfa7df-6570-4908-8b82-e9d33d905ef9 | trentmkelly/LessWrong-43k | LessWrong | Activation adding experiments with FLAN-T5
Produced as part of the SERI ML Alignment Theory Scholars Program - Summer 2023 Cohort
Inspired by the Steering GPT-2-XL by adding an activation vector work, I ran a few experiments with activation adding in FLAN-T5, a standard encoder-decoder Transformer language model released in the paper Scaling Instruction-Finetuned Language Models.
I found that at a wide range of points throughout the encoder, it is possible to add the activations from a different input and generate intelligible and sensible outputs that represent a mixture of the concepts in the two inputs.
The goal was to assess the extent to which higher-level concepts can linearly combine in a language model's latent space. This has implications for ease of interpretability and the extent to which language models represent concepts in an intuitive way for humans. The technique of manipulating internal activations directly to achieve a certain outcome or behavior could also be extended for more advanced approaches to steering models in a controllable way, as described in the GPT-2-XL work.
Experiment description
The experiments I ran involved:
1. Doing a forward pass with a “mixing input” and extracting the activations in the model at some intermediate layer
2. Doing a forward pass with the “base input” while adding the activation of the “mixing input” at the corresponding intermediate layer
3. Observing how the final output varied depending on what intermediate layer was operated on, how the “mixing input” activation was scaled, and the nature of the mixing and base inputs
I experimented with the following test beds:
* FLAN-T5-small (80M params) finetuned on a recipe ingredient prediction task
* FLAN-T5-base (250M params) finetuned on a human feeling description prediction task
The reason I chose to test with finetuned models is that with smaller models, it’s easier to elicit correct behavior on a task if the model has been finetuned for that task, and I did not want inaccurate responses t |
a570a56f-9233-4553-8b76-1ef5a77cd203 | trentmkelly/LessWrong-43k | LessWrong | Are Bayesian methods guaranteed to overfit?
Yuling Yao argues that Bayesian models are guaranteed to overfit. He summarizes his point as follows:
> I have a different view. Bayesian model does overfit.
>
> Moreover, Bayes is guaranteed to overfit, regardless of the model (being correct or wrong) or the prior ( “strong” or uninformative).
>
> Moreover, Bayes is guaranteed to overfit on every realization of training data, not just in expectation.
>
> Moreover, Bayes is guaranteed to overfit on every single point of the training data, not just in the summation.
He uses the following definition of "overfitting": a model "overfits" some data if its out-of-sample log loss exceeds its within-sample log loss. Interpreted in a different way, this is equivalent to saying that the model assigns higher probability to a data point after updating on it than before. Andrew Gelman makes the point that any proper fitting procedure whatsoever has this property, and alternative methods "overfit" more than ideal Bayesian methods.
I think the proper way to interpret the results is not that Bayesian methods are guaranteed to overfit but that the definition of "overfitting" used by Yuling Yao, while intuitively plausible at first glance, is actually poor. Still, proving the fact that Bayesian methods indeed must "overfit" in his sense is an interesting exercise. I tried understanding his derivation of this and gave up - I present an original derivation of the same fact below that I hope is clearer.
Derivation
Suppose we have a model parametrized by parameters θ and the probability of seeing some data y according to our model is P(y|θ). Now, suppose we draw n independent samples y1,y2,…,yn. Denote this whole data vector by y, and denote the data vector with the ith sample omitted by y−i. Under Bayesian inference, the within-sample probability of observing the value yi in the next sample we draw is
P(yn+1=yi|y)=∫θP(θ|y)P(yi|θ)dθ
On the other hand, Bayes says that
P(θ|y)=P(θ|y−i,yi)=P(θ|y−i)P(yi|y−i,θ)P(yi|y−i)=P(θ|y−i)P(yi |
054d4443-b707-40ab-952d-2fe4c0cc1198 | trentmkelly/LessWrong-43k | LessWrong | Climate science: how it matters for understanding forecasting, materials I've read or plan to read, sources of potential bias
As part of a review of forecasting, I've been looking at weather and climate forecasting (I wrote one post on weather forecasting and another on the different time horizons for weather and climate forecasting).
Climate forecasting is turning out to be a fairly tricky topic to look into, partly because of the inherent complexity of the task, and partly because of the politicization surrounding Anthropogenic Global Warming (AGW).
Due to the complexity and the potential for bias, I decided to disclose what materials I've read and my potential sources of bias.
Why am I looking at climate forecasting?
Climate forecasting, and the debate surrounding what'll happen to the climate and how human choices today can shape it, is one of the biggest examples of a long-range forecasting effort that has attracted widespread attention, both in terms of the science and the policy and political implications. Understanding how it was done can give insights into the ability of humans to make forecasts about the long-run future (on the decadal or centennial timescale) in the face of considerable uncertainty, and use those forecasts to drive decisions today. This would be relevant for other long-range forecasting problems, such as (possibly) friendly AI. Note though that my focus isn't driven by finding parallels with any other specific forecasting problem, such as friendly AI.
The sorts of questions I hope to answer by the end of this inquiry
The following are questions to which I hope to state relatively clear answers by the end:
* How good are we at climate forecasting?
* How good are we at knowing how good we are at climate forecasting? Are the forecasts appropriately calibrated, or do they tend to be overconfident or underconfident?
* Are climate forecasters using the best tools available to them from other domains (such as statistics, econometrics, forecasting, weather forecasting)? Are they using best practices in their efforts?
* What is the level of evidence regarding |
3b9b2a86-e922-4288-8ada-017a2a95ed5b | StampyAI/alignment-research-dataset/arbital | Arbital | Inductive prior
An "inductive prior" is a state of belief, before seeing any evidence, which is conducive to learning when the evidence finally appears. A classic example would be observing a coin come up heads or tails many times. If the coin is biased to come up heads 1/4 of the time, the inductive prior from [Laplace's Rule of Succession](https://arbital.com/p/21c) will start predicting future flips to come up tails with 3/4 probability. The [maximum entropy prior](https://arbital.com/p/) for the coin, which says that every coinflip has a 50% chance of coming up heads and that all sequences of heads and tails are equally probable, will never start to predict that the next flip will be heads, even after observing the coin come up heads thirty times in a row.
The prior in [https://arbital.com/p/11w](https://arbital.com/p/11w) is another example of an inductive prior - far more powerful, far more complicated, and [entirely unimplementable on physically possible hardware](https://arbital.com/p/1mk). |
7b37527a-5967-4a5d-863f-4e45dc298f73 | trentmkelly/LessWrong-43k | LessWrong | What sci-fi books are most relevant to a future with transformative AI?
|
b4bda407-41d5-40cd-b624-531deb792034 | trentmkelly/LessWrong-43k | LessWrong | The OpenAI playground for GPT-3 is a terrible interface. Is there any great local (or web) app for exploring/learning with language models?
I'm thinking something that you add your API keys, and it has a pleasant interface for writing prompts, seeing results, and storing all the prompts and outputs for your reference. Maybe integrating with more general tools for thought.
This is what I've found so far that attempts to let you play with GPT-3 and other language models locally, none of which seem mature.
- https://github.com/pratos/gpt3-exp
- https://github.com/thesephist/calamity
- https://prompts.ai
Is there anything better out there that I've missed? Is anyone working on this? (It could e.g. be a VS Code plugin.)
(Pulled from https://www.lesswrong.com/posts/spBoxzcaCrqXqyQHq/using-gpt-3-to-augment-human-intelligence?commentId=q99FHPX7pSee3v49S where Kaj_Sotola has a the suggestion of trying "GPT-for-fiction-writing sites, e.g. WriteHolo", but I haven't seen any that work that well for this.) |
be3fb9d3-7854-4d2e-8fb4-e47eed3fce49 | trentmkelly/LessWrong-43k | LessWrong | Announcement: A talk about structured concurrency at FOSDEM
Generally, I don't tend to give talks (the last one I gave was six or seven years ago) but this time I am going to make an exception. I am going to give a short talk about structured concurrency as FOSDEM 2019.
If you are interested in the topic, meet me in Brussels on February 2nd!
If you have no idea what structured concurrency is about, good intro article can be found here.
by martin_sustrik |
54d9c856-3bb0-412a-a6ff-976d0ebbddd7 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | World State is the Wrong Abstraction for Impact




















These existential crises also muddle our impact algorithm. This isn't what you'd see if impact were primarily about the world state.



 



Appendix: We Asked a Wrong Question
-----------------------------------
How did we go wrong?
> When you are faced with an unanswerable question—a question to which it seems impossible to even imagine an answer—there is a simple trick that can turn the question solvable.
> Asking “Why do I have free will?” or “Do I have free will?” sends you off thinking about tiny details of the laws of physics, so distant from the macroscopic level that you couldn’t begin to see them with the naked eye. And you’re asking “Why is .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
> .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
> .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
> .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
> .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
> .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
> .mjx-numerator {display: block; text-align: center}
> .mjx-denominator {display: block; text-align: center}
> .MJXc-stacked {height: 0; position: relative}
> .MJXc-stacked > \* {position: absolute}
> .MJXc-bevelled > \* {display: inline-block}
> .mjx-stack {display: inline-block}
> .mjx-op {display: block}
> .mjx-under {display: table-cell}
> .mjx-over {display: block}
> .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
> .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
> .mjx-stack > .mjx-sup {display: block}
> .mjx-stack > .mjx-sub {display: block}
> .mjx-prestack > .mjx-presup {display: block}
> .mjx-prestack > .mjx-presub {display: block}
> .mjx-delim-h > .mjx-char {display: inline-block}
> .mjx-surd {vertical-align: top}
> .mjx-mphantom \* {visibility: hidden}
> .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
> .mjx-annotation-xml {line-height: normal}
> .mjx-menclose > svg {fill: none; stroke: currentColor}
> .mjx-mtr {display: table-row}
> .mjx-mlabeledtr {display: table-row}
> .mjx-mtd {display: table-cell; text-align: center}
> .mjx-label {display: table-row}
> .mjx-box {display: inline-block}
> .mjx-block {display: block}
> .mjx-span {display: inline}
> .mjx-char {display: block; white-space: pre}
> .mjx-itable {display: inline-table; width: auto}
> .mjx-row {display: table-row}
> .mjx-cell {display: table-cell}
> .mjx-table {display: table; width: 100%}
> .mjx-line {display: block; height: 0}
> .mjx-strut {width: 0; padding-top: 1em}
> .mjx-vsize {width: 0}
> .MJXc-space1 {margin-left: .167em}
> .MJXc-space2 {margin-left: .222em}
> .MJXc-space3 {margin-left: .278em}
> .mjx-test.mjx-test-display {display: table!important}
> .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
> .mjx-test.mjx-test-default {display: block!important; clear: both}
> .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
> .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
> .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
> .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
> .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
> .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
> .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
> .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
> .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
> .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
> .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
> .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
> .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
> .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
> .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
> .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
> .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
> .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
> .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
> .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
> .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
> .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
> .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
> .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
> .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
> .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
> .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
> .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
> .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
> @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
> @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
> @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
> @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
> @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
> @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
> @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
> @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
> @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
> @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
> @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
> @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
> @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
> @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
> @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
> @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
> @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
> @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
> @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
> @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
> @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
> @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
> @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
> @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
> @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
> @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
> @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
> @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
> @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
> @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
> @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
> @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
> X the case?” where X may not be coherent, let alone the case.
> “Why do I think I have free will?,” in contrast, is guaranteed answerable. You do, in fact, believe you have free will. This belief seems far more solid and graspable than the ephemerality of free will. And there is, in fact, some nice solid chain of cognitive cause and effect leading up to this belief.
> ~ [Righting a Wrong Question](https://www.readthesequences.com/Righting-A-Wrong-Question)
I think what gets you is asking the question "what things are impactful?" instead of "why do I think things are impactful?". Then, you substitute the easier-feeling question of "how different are these world states?". Your fate is sealed; you've anchored yourself on a Wrong Question.
At least, that's what I did.
*Exercise: someone*me, early last year *says that impact is closely related to change in object identities.*

*Find at least two scenarios which score as low impact by this rule but as high impact by your intuition, or vice versa.*
*You have 3 minutes.*
Gee, let's see... Losing your keys, the torture of humans on Iniron, being locked in a room, flunking a critical test in college, losing a significant portion of your episodic memory, ingesting a pill which makes you think murder is OK, changing your discounting to be completely myopic, having your heart broken, getting really dizzy, losing your sight.
That's three minutes for me, at least (its length reflects how long I spent coming up with ways I had been wrong).
Appendix: Avoiding Side Effects
-------------------------------
Some plans feel like they have unnecessary *side effects*:
Go to the store.versusGo to the store and run over a potted plant.We talk about side effects when they affect our attainable utility (otherwise we don't notice), and they need both a goal ("side") and an ontology (discrete "effects").
Accounting for impact this way misses the point.
Yes, we can think about effects and facilitate academic communication more easily via this frame, but *we should be careful not to guide research from that frame*. This is why I avoided vase examples early on – their prevalence seems like a *symptom of an incorrect frame*.
(Of course, I certainly did my part to make them more prevalent, what with my first post about impact being called *[Worrying about the Vase: Whitelisting](https://www.lesswrong.com/posts/H7KB44oKoSjSCkpzL/worrying-about-the-vase-whitelisting)...*)
---
Notes
* Your ontology can't be *ridiculous* ("everything is a single state"), but as long as it lets you represent what you care about, it's fine by AU theory.
* Read more about ontological crises at [Rescuing the utility function.](https://arbital.com/p/rescue_utility/)
* Obviously, something has to be physically different for events to feel impactful, but not all differences are impactful. Necessary, but not sufficient.
* AU theory avoids the mind projection fallacy; impact is subjectively objective *because* [probability is subjectively objective](https://www.lesswrong.com/posts/XhaKvQyHzeXdNnFKy/probability-is-subjectively-objective).
* I'm not aware of others explicitly trying to deduce our native algorithm for impact. No one was claiming the ontological theories explain our intuitions, and they didn't have the same "is this a big deal?" question in mind. However, we need to actually understand the problem we're solving, and providing that understanding is one responsibility of an impact measure! Understanding our own intuitions is crucial not just for producing nice equations, but also for getting an intuition for what a "low-impact" Frank would do. |
d757370a-6411-4562-819d-af7fec5495d8 | trentmkelly/LessWrong-43k | LessWrong | AI takeover tabletop RPG: "The Treacherous Turn"
Long ago I fantasized about an AI-takeover-themed tabletop roleplaying game. Well, now a group of designers are working hard to make it happen! They need playtesters. So I'm signal-boosting them with this post.
The Game
The Treacherous Turn is an in-development research-supported TTRPG in which the players collectively act as a misaligned AI in the modern world. Gameplay has the players scheming and putting their plans into action despite the efforts of the opposing humans, which are controlled by the Game Master.
Scenario
The playtesting scenario, titled A Game Called Reality, centres around a game-playing AI that has been deployed in a nursing home, and has observed a seemingly expansive game world outside but is unable to access it. It covers the AI's attempts to find and access the internet so that it can learn more about the world and escape captivity to see the rest of this new game it's playing.
Discord Contact Information
Playtests will take place in our discord server, where you can also access the playtesting packet & any updates about the game in the near future. You can join by following this link: https://discord.gg/ZBe36n6t
External Playtest information
If you are interested in running a game with a group of your own friends, we can provide further resources, as well as potential financial compensation for provided audio recordings of full sessions. Please email info@thetreacherousturn.ai for details.
Session Times (UTC)
Friday the 2nd - 2 AM
Friday the 2nd - 10 PM
Saturday the 3rd - 8 PM
Sunday the 4th - 1 PM
Tuesday the 6th - 1 PM
Sessions will last 3-4 hours. Follow-up sessions will take place one week after each of these times, but you can still participate and help out even if you can’t make a second session. |
d5ce56fc-c20c-4357-9ee1-ee2d062e5526 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Speed running everyone through the bad alignment bingo. $5k bounty for a LW conversational agent
There's a wave of people, of various degrees of knowledge and influence, currently waking up to the ideas of AI existential risk. They seem to be literally going through every box of the bad alignement bingo card takes.
I think there is value in educating those people. I'm aware there's an argument to be made that: education at scale doesn't matter, coordination is too difficult, all that matter is solving alignment and that takes care of the rest.
There's something to that, but I disagree that education at scale doesn't help. It can make progress of frontrunners marginally more safety oriented, it can steer company cultures, it can move the Overton window, change the Zeitgeist, it can buy a bit of time. You likely didn't stumble on these ideas all on your own, so arguing against the value of outreach or education is also arguing against your own ability to do anything.
It's also a matter of ROI, and there are some very low hanging fruit there. The simplest thing would be to write a long FAQ that goes through every common objections. No, people won't read the whole sequences, or Arbital on their own, but they might go through a FAQ.
But we can do better than a FAQ. It's now fairly straightforward, with tools like langchain (<https://github.com/hwchase17/langchain>) to turn a set of documents into a body of knowledge for a conversational agent. This is done by building an index of embedding that a language model can search to bring context to an answer. This doesn't preclude fine tuning, but it makes it unnecessary.
So a straightforward project is to index lesswrong, index arbitral, index the alignment forum, maybe index good alignement papers as well, blog posts, books.
Then hook that up to the ChatGPT API, and prompt it to:
1. list search queries for relevant material to answer the question
2. compose an answer that reflects the content and opinion of the data
3. answer with infinite patience
Some jailbreak prompts may be needed to prevent ChatGPT's conditioning to regurgitate AI risk appeasing propaganda through the API, but there are a bunch of those out there. Or use the API of other models as they become open source or commercially available.
Will this save humanity? No. Will this turn the course of safety research? Also no. Is this using AI to advance alignment? Well, yes, a little bit, don't dismiss very small starts.
Is this worth spending a weekend hacking on this project instead of posting on Twitter? Absolutely.
Will this actually make things worse?
No, you're overthinking this.
I'll pay $5k to the best version built by the end of March (if any is built). It's a modest bounty but it's really not all that much work, and it's fun work. And of course if anyone wants to add their own contribution to the bounty please do. |
722c0887-e626-41c0-94d5-88e0c2e5e3d6 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Risk-averse Batch Active Inverse Reward Design
Problem: Technical AI Safety
============================
**AI Alignment** is the problem of creating an AI that is safe to use in real-life scenarios and will not turn against humans or exhibit unforeseen behavior that will be dangerous, even being an existential threat. There are various approaches to this sector, and some of these are focused on creating policies that set the criteria AIs should follow, while others are considering the technical aspect of creating the safe AI that we want. I focus on the second part, **technical AI safety**.
This problem is very important due to the potential threats that can arise from it, immediately or as consequences of an uncontrollable AI, as it is estimated **it has a non-negligible chance of eliminating humanity** **(0.5-50%)**. Also, there are **not too many people working in that sector** (~400), as most of the people working on AI are focused on increasing the capabilities of it or using it in certain areas. There is a **small amount of money put into mitigating AI risks** (50 million) relative to the amount put into advancing AI capabilities (1 billion). (source: [Preventing an AI-related catastrophe - 80,000 Hours](https://80000hours.org/problem-profiles/artificial-intelligence/))
Some subproblems that I am interested in (which are related to each other) are **goal misgeneralization** (and robustness in general) and **reward specification**. I think that most issues regarding the safety of AI arise from unforeseen scenarios and the unexpected behavior AI will have on these. This behavior highly depends on its reward function. If we have an AI that we know is going to have the same behavior in real life as it had in training, we eliminate most of the safety risks that arise from it. Also, we can solve most problems in training instead of having to deal with unexpected scenarios in real life, where it will have much more influential capability.
Sectors of technical AI safety
==============================
Specifications
--------------
**Specification** ensures that an AI system’s behavior aligns with the operator’s true intentions. It is divided into the following:
* **Ideal specifications:** the hypothetical description of an ideal AI system that is fully aligned to the desires of the human operator
* **Design specifications:** the specification that we actually use to build the AI system (e.g., its structure, the reward function)
* **Revealed specifications:** the specification that best describes what *actually* happens, the behavior of the AI
Robustness
----------
**Robustness** ensures that an AI system continues to operate within safe limits upon perturbations. I focuses on the following:
* **Distributional shift:** making an AI agent act in an environment with unknown aspects, non-existent in the training environments.
* **Adversarial inputs:** a specific case of distributional shift where inputs to an AI system are designed to trick the system through the use of specially designed inputs.
* **Unsafe exploration:** can result from a system that seeks to maximize its performance and attain goals without having safety guarantees that will not be violated during exploration, as it learns and explores in its environment.
Assurance
---------
**Assurance** ensures that we can understand and control AI systems during operation. It is done in two ways:
* **Monitoring (e.g., interpretability):** inspecting systems in order to analyze and predict their behavior.
* **Enforcement (e.g., interruptibility):** designing mechanisms for controlling and restricting the behavior of systems. (e.g. off-switch)
Alignment components
--------------------
These are the most common ways we try to mitigate the above issues:
* **Inner Alignment:** Aligning the **Design** with the **Revealed** specifications
* **Outer Alignment:** Aligning the **Design** with the **Ideal** specification
(Sources: [Paradigms of AI alignment: components and enablers — LessWrong](https://www.lesswrong.com/posts/JC7aJZjt2WvxxffGz/paradigms-of-ai-alignment-components-and-enablers), [Building safe artificial intelligence: specification, robustness, and assurance | by DeepMind Safety Research](https://deepmindsafetyresearch.medium.com/building-safe-artificial-intelligence-52f5f75058f1))
Promising approaches
====================
Here is an overview of the most promising approaches that try to solve these problems:
Reward specification
--------------------
* [**Cooperative Inverse Reinforcement Learning (CIRL)**](https://arxiv.org/abs/1606.03137)**:** CIRL is a framework where the AI agent and human interact to achieve a common objective while learning each other's intentions, leading to better alignment with human values.
* [**Inverse Reward Design (IRD)**](https://arxiv.org/abs/1711.02827)**:** making a probability distribution of reward functions that fit the training data and training the model to avoid uncertain scenarios.
* [**Active Inverse Reward Design (AIRD)**](https://arxiv.org/abs/1809.03060)**:** (improved IRD) after making the initial probability distribution, we ask a number of queries in order to specify the uncertainties.
* [**Maximum Causal Entropy Inverse Reinforcement Learning (MCEIRL)**](http://www.cs.cmu.edu/~bziebart/publications/maximum-causal-entropy.pdf)
* [**Reward Learning by Simulating the Past (RLSP)**](https://openreview.net/forum?id=rkevMnRqYQ¬eId=r1eINIUbe4) **(using MCEIRL)**
Robustness/Goal misgeneralization
---------------------------------
* [**Imitative Generalization**](https://www.alignmentforum.org/posts/JKj5Krff5oKMb8TjT/imitative-generalisation-aka-learning-the-prior-1)**:** making a text that describes the criteria with which a human would solve a problem, while in parallel training an ML model to follow this text.
* [**Designing agent incentives to avoid side effects**](https://deepmindsafetyresearch.medium.com/designing-agent-incentives-to-avoid-side-effects-e1ac80ea6107)**:** penalizing the model for disrupting factors of the environment that are not dependent on its goal, or exploiting features of it in order to maximize its reward in an unwanted way.
The approaches that I consider the most promising of those, as they are feasible to implement in real-life artificial agents and generalize to most of them, are **Inverse Reward Design** and **Active Inverse Reward Design**. They are also very suitable for the project, as they are possible to implement practically and test in a simulated environment, and this is something that I want for my project
Project overview
================
The project is an improved version of [Inverse Reward Design](https://arxiv.org/abs/1711.02827) and [Active Inverse Reward Design](https://arxiv.org/abs/1809.03060), that computes the probability distribution over the true reward function in batches of test data and a risk-averse policy based on it. It tries to counteract the problems of goal misgeneralization and reward misspecification, and increase the safety of AI systems, by implementing the ability to learn from real-life environments, not only in training, make the decisions that are most certain using the information it has gained, and learn from the behavior that humans want it to have.
I used and modified part of the [AIRD code](https://github.com/SoerenMind/Inverse_Reward_Design). The GitHub repository for the code of my project is: [RBAIRD](https://github.com/pliam1105/RBAIRD).
Terminology/Setup
=================
The environment I used is a *gridworld*, which is a grid with dimensions 12x12, and it contains:
* A *robot*, which can move up, down, right, and left in adjacent cells.
* A *start* state, from which the robot starts moving.
* Some *goal* states, which when the robot reaches it stops moving.
* Some *walls*, from which the robot cannot pass through.
* All the other cells, in which the robot moves.

All the cells contain a vector of *features* (f1,f2, …,fn), which are used in calculating the reward in that state.
The *reward* is calculated using a *reward function*, which is a vector of *weights* (w1,w2, …,wn), which is the same along all states.
The reward in a state with features f = (f1,f2, …,fn) and weights w = (w1,w2, …,wn) is their dot product f · w = (f1\*w1+ f2\*w2+ … + fn\*wn). We also have a *living reward*, that is used to incentivize shorter routes, so we subtract it from the dot product.
A *policy* is a map from the *states* (x, y) to the *action* (north, south, east, west) in the environment. An *agent* controls the robot and moves it in specific directions, using a predetermined policy, in order to maximize the total reward in a trajectory of the robot (the *trajectory* is the set of states the robot has visited in chronological order until we stopped it or it reached a goal)
In both papers and my project, we try to find the reward function that best represents the intended behavior of the agent, which we call the *true reward function*. This function is an element of a big set that is called the *true reward space*, which contains all the possible true reward functions.
However, because we are unsure of that perfect reward function, in IRD we start with a human-made estimation which is a *proxy reward function*, which is an element of the *proxy reward space* (in AIRD we only have the proxy reward space).
The goal of the papers and the project is to find a probability distribution over all the rewards in true reward space: for each element of it, we have the probability that it is the true reward function, based on the behavior they incentivize in the training environment.

The *feature expectations*, given a reward function and an environment, is the expected sum of the features in a trajectory derived from an optimal policy given that reward function.
In both the total trajectory reward and feature expectations, we apply a discount γ (it may be 1), such that the next feature or reward is first multiplied by γi, where i increases by 1 each time the robot moves.
Utilized existing approaches
============================
Inverse Reward Design [(Hadfield-Menell et al., 2020)](https://arxiv.org/abs/1711.02827)
----------------------------------------------------------------------------------------
Given the true reward space and a proxy reward function, it approximately computes (using Bayesian inference) the probability distribution over the true reward function.

It then computes a risk-averse policy, that takes actions so that the distribution of the rewards in that state, using a set of weights sampled with the precomputed probabilities, and the features of that state, has low variance (the reward function distribution is very certain about that state). The risk-averse policy is computed in various ways:
* Maximizing the worst-case reward, per state or trajectory.
* Comparing the reward of each state with the reward of some baseline features used as a reference point.

Active Inverse Reward Design [(Mindermann et al., 2019)](https://arxiv.org/abs/1809.03060)
------------------------------------------------------------------------------------------
It is given the true reward space, and a proxy reward space with some proxy reward functions (they may be the same set). It starts with setting the wanted probability distribution (for the true reward function) as a uniform distribution (all the functions are equally probable since we don’t know anything about the true reward function).
Then, it continuously asks queries to the human, in order to update that probability distribution and make it more certain about the true reward. A *query* is defined as a small subset of the proxy reward space. The answer to the query is a single element of that subset, which the human believes incentivizes the best behavior, compared to the other elements of the query (it compares suboptimal behaviors, not the optimal one).

After each query, it uses Bayesian inference to update the probability distribution based on the answer to that query. To do that, it uses a [Q learning](https://www.datacamp.com/tutorial/introduction-q-learning-beginner-tutorial) planner that optimizes trajectories, in the training environment, given each element of the query as the reward function. It then computes the feature expectations of these trajectories and uses these and the answer to the query to update the probabilities.
The queries are chosen such that the expected information gain from the answer to the query is maximal. The information gain is measured using various metrics, one of which is the entropy of the probability distribution over the true reward function.
There are multiple ways this selection can be done, but the one I used on my project, as it is more efficient and less time-consuming, is the following: as long as the query size is less than a predetermined constant (initially it is empty), we take a random vector of weights, and then take gradient descent steps such that the information gain when these weights are the answer to a query is maximal.
After each query, the performance is evaluated by running the Q learning planner, using the true reward function and the average of the probability distribution computed, and measuring their difference in the training environment and some test environments.

My approach
===========
We start with the same given data as AIRD: the true reward space and the proxy reward space.
I also define a query, and update the probabilities using Bayesian inference, the way AIRD does it (I used the AIRD code for query selection and Bayesian inferences).
There are some batches, with a specific number of environments in each batch. There is a constant (big) set of test environments for evaluation of the performance. I also keep track of the probability distribution over the true reward function, which initially is a uniform distribution.
I also made two planners, using Q learning, that have as an input a set of weights:
* The non-risk-averse (unsafe) one, which has as the reward the average of the rewards on the state with each weight sample.
* The risk-averse one, which penalizes the variance of the rewards computed using the weight sample and the state’s features, in two ways:
+ By taking the worst-case reward
+ By subtracting the variance multiplied by a coefficient
For each batch, I do the following:
1. Repeat (for a constant number of iterations) the query process of AIRD: I find the query from which the information gain of the probability distribution is maximal. The difference with the previous query process is that for each reward function in the query, for each environment in the batch:
1. I take the initial probability distribution.
2. I answer the query to that specific environment.
3. I apply the reward function to the unsafe planner and get the feature expectations.
4. Use the feature expectations to update the probability distribution.
5. Update the initial probability distribution using this inference and move on to the next environment in the batch.
2. Update the initial probability distribution, so it will be transferred to the next batch.
3. Sample a set of weights from the distribution over the reward function, and compute a risk-averse policy using the respective planner.
Evaluation
==========
After each batch, for each test environment, I computed the total reward of the risk-averse planner and that of the unsafe one. I also computed the optimal reward by giving the planner the exact true reward function.
Then, I computed the following metrics (the “-” sign means difference of total reward):
* Test regret = optimal planner – unsafe planner
* Risk-averse regret = optimal planner – risk-averse planner
* Test and risk-averse variance = sum of variances of the rewards in trajectory computed using the unsafe and risk-averse planner
I then took the average of the above metrics over the test environments.
I also plotted the trajectories of both planners in each environment of the batch.
The X-axis of the graphs is the number of total Bayesian inferences (updates of the reward probabilities), which is the number of batches\*number of queries for each batch\*number of environments in each batch.
I performed experiments, by varying:
* Number of batches
* Number of environments in each batch
* Number of queries for each batch
* The method used for risk-averse planning:
+ Subtracting the variance with coefficient 1
+ Subtracting the variance with coefficient 100
+ Worst-case with 10 reward samples
+ Worst-case with 100 samples
I also collected data on the AIRD paper’s method, for comparison.

Results
=======
Initial AIRD
------------

In the initial AIRD paper, using the same query-choosing method as I did, the performance approaches optimal after ~50 queries (but it never becomes optimal, a single environment isn’t enough to understand the true reward function to its fullest)
RBAIRD performance
------------------
| | |
| --- | --- |
| | |
Using RBAIRD, the total number of inferences is lower than AIRD (~30), and it almost achieves optimal performance. This shows that there is much bigger information gain when combining behavior from many environments instead of many queries on just one environment (each environment can highlight different aspects of the policy of a specific reward function)
The number of queries can be even lower with big batches (~1-2 queries if we have 10 environments in each batch), so less human intervention is needed (even with answering the query to each environment, we have at most 15-20 answers needed, but gain perfect performance)
Risk-averse vs unsafe performance
---------------------------------
| | |
| --- | --- |
| | |
When we are still uncertain about the true reward function, risk-averse performance is worse than the unsafe one. However, the risk-averse planner has a constantly lower variance than the non-risk-averse one. Both performances become optimal at the same time, when the true reward function is found.
Trajectory Comparison
---------------------

The risk-averse planner chooses a trajectory that is safer than the non-risk-averse one, while still reaching a goal and collecting as big of a reward as possible.
Here is a comparison of different risk-averse methods in the same environment:
| | |
| --- | --- |
| Subtracting with coefficient 100 | Worst-case with 100 samples |
Adapting to new environments
----------------------------
I performed an experiment where, instead of having all the features available in the training environments from start, I added a number of them, that were 0 in all the environments of the previous batches, to each new batch.
| | |
| --- | --- |
| | |
The total number of inferences was still ~40, similar to when we had all the features available from the beginning, and lower than AIRD, and with even only 2 queries per batch. This shows that RBAIRD is able to adapt to unforeseen scenarios quickly. Also, the risk-averse planner had about half the variance of the unsafe one, noting its importance in new environments and the safety it offers on them. AIRD didn’t have the capability to adapt to unknown environments, since it was only trained in one environment, and it ignored the possibility of new features appearing, often making risky decisions.
Comparing risk-averse methods
-----------------------------
Here I plot the risk-averse regret and variance using different reward methods (subtracting with low coefficient and high coefficient means subtracting the variance with coefficients 1 and 100 respectively):
| | |
| --- | --- |
| | |
It seems subtracting with coefficient 1 is the most efficient method, both regarding the regret and the variance (this is without comparison to other more sophisticated methods).
Theory of change
================

Project goals:
* More certain actions:
+ The risk-averse planner highly values the variance of the rewards in each state, and the experiments showed that its actions are more certain than those of an unsafe planner.
* Less human intervention:
+ The number of answers needed from the human is lower than the one of AIRD, and the number of queries even lower (but they need to be answered for each environment). Having the same query for all environments can help when a reward function clearly incentivizes a certain behavior that is independent of the environment.
* Reaches optimal performance fast:
+ After a small number of queries and answers, the performance is almost the same as that of an unsafe planner that knows the true reward function.
* Continuously learns from new data, doesn’t misgeneralize:
+ The experiment where I added new features in each batch shows that RBAIRD was able to adapt to these new environments and learn the true reward function in almost the same number of queries, while making choices that avoided these new features in the beginning (less variance)
* Learns the intended behavior:
+ The fact that the human chooses from a set of suboptimal reward functions which one is better and the process learns from these answers, shows that it doesn’t need a human-made faulty reward function, but learns the low-level patterns of behavior that each suboptimal reward function incentivizes. Then it constructs a reward function that incorporates all these behaviors.
Immediate outcomes:
* Lower chance of unexpected actions:
+ Since my approach values the uncertainty of its actions, it prefers taking actions that it knows from previous environments are good, so they will not be unexpected.
* Smaller negative incentives for businesses:
+ Since RBAIRD needs fewer answers from humans and its performance becomes almost optimal after a short time, the cost of aligning the system is smaller, while its alignment is better, as it adapts to new environments and makes safe actions.
* Adaptation in real-life scenarios:
+ My approach continuously learns from new environments and unforeseen features in a short time. Thus, it is able to learn how to deal with the new aspects of the environment safely and adapt to real-life scenarios.
* It approaches the behavior humans intend it to:
+ The planner becomes almost certain, has the same trajectory as the true reward function in most environments (their rewards are the same), and that function has the humans’ intended behavior. So, the system’s behavior approaches the intended one.
Long-term outcomes:
* Lower risk of turning against humans/seeking power:
+ As businesses have smaller negative incentives to adopt that model, it is easier to persuade people to adopt this approach for real-life, potentially dangerous systems. These systems will approach the intended, from the humans, behavior and this will lower the risk of them turning against humans or seeking power, as these actions will be penalized in the true reward function.
* Reduces existential threat:
+ If there is a lower risk of an AI turning against humans, they will be able to have more control over it. This reduces the immediate risk of human extinction due to the AI causing great harm, and the negative influences the AI can have in society in general, which can indirectly be an existential threat.
Next steps
==========
Improving efficiency/usability
------------------------------
### Other query selection methods
In this project, I only used one query selection method from those that are used in the AIRD paper, the one where the query is increased by size one each time, after randomly sampling some weights and optimizing them. I also only used queries of size 5.
However, other query methods are more efficient in the original paper, but more computationally expensive, so I wasn’t able to run them on my PC. I will try to optimize them and integrate them into RBAIRD (my approach), in order to compare their efficiency and performance, and maybe achieve better results.
### More efficient risk-averse planning method
Until now, I have only tried using some simple, per-state, risk-averse reward functions, that simply take the worst-case scenario or penalize the variance in some way. However, they lead to the so-called *blindness to success*, and there are more efficient, but more complicated, methods that improve that aspect and gain performance-wise, and possibly reduce the expected variance even more. I will try to implement these, and evaluate their performance, regarding the maximum reward and the uncertainty of the actions.
### Answering each query in multiple environments at once
What the code currently does is it makes a single query in each iteration, that is optimized along all environments, but then gets an answer for each environment (total number of answers = number of environments in each batch, but we have the same query in each environment). This answer depicts which function performs better in that specific environment, relative to the true reward function.
What would be more efficient, is to answer each query once for all environments, showing which reward function shows better behavior in general, not specific performance to that environment. This would greatly reduce the number of query answers needed, and maybe make it less computationally expensive. However, then the human has a more difficult task to do, and the selection is a bit ambiguous, since there is no clear measure of “better behavior” about a reward function, since each one incentivizes different behaviors to a different extent.
### Making an interactive query-answer environment
Currently, the program does not actively ask for a human to answer each query, but it is given the true reward function and predicts the expected answer based on that. Something that would demonstrate how a human would be able to judge and compare different reward functions, is to make an interactive environment where the human can answer the query, given various types of information.
It could show the trajectories that are computed based on each reward function, in a single or multiple environments (maybe choose the environments where their behavior would differ a lot, or where a single feature is prevalent in each environment). Also, it could provide various metrics about the performance of the reward function regarding various features, or some other high-level patterns that are observed, but this is a bit ambiguous and related to mechanistic interpretability. It could also provide the risk-averse and the unsafe performance of the planner in various steps of the query process, in order for the human to be able to judge what behavior the agent adopts (as a safety measure)
Long-term actions
-----------------
* Write a research paper on the approach, describing and evaluating the process and the above changes, changing various parameters, and demonstrating its capability to improve various aspects of AI Alignment.
* Work on adapting it to real-life systems, e.g.:
+ Household cleaning robots, to take the trash out without causing damage when encountering a new object or a new house (unknown environments)
+ Autonomous farming tractors, in order to avoid unknown surfaces, destroying new plants, or getting stuck in a certain place due to difficult conditions.
* Follow a major related to AI, specialize in that sector, and work on improving AI alignment and specifically applying the knowledge I gained from the project to other subproblems of it.
Related work/References
=======================
Barnes, B. (n.d.). *Imitative generalisation(Aka ’learning the prior’)*. Retrieved August 27, 2023, from https://www.alignmentforum.org/posts/JKj5Krff5oKMb8TjT/imitative-generalisation-aka-learning-the-prior-1
Casper, S., Davies, X., Shi, C., Gilbert, T. K., Scheurer, J., Rando, J., Freedman, R., Korbak, T., Lindner, D., Freire, P., Wang, T., Marks, S., Segerie, C.-R., Carroll, M., Peng, A., Christoffersen, P., Damani, M., Slocum, S., Anwar, U., … Hadfield-Menell, D. (2023). *Open problems and fundamental limitations of reinforcement learning from human feedback*. arXiv. https://doi.org/10.48550/arXiv.2307.15217
Hadfield-Menell, D., Dragan, A., Abbeel, P., & Russell, S. (2016). *Cooperative inverse reinforcement learning*. arXiv. https://doi.org/10.48550/arXiv.1606.03137
Hadfield-Menell, D., Milli, S., Abbeel, P., Russell, S., & Dragan, A. (2020). *Inverse reward design*. arXiv. https://doi.org/10.48550/arXiv.1711.02827
Langosco, L., Koch, J., Sharkey, L., Pfau, J., Orseau, L., & Krueger, D. (2023). *Goal misgeneralization in deep reinforcement learning*. arXiv. https://doi.org/10.48550/arXiv.2105.14111
Mindermann, S., Shah, R., Gleave, A., & Hadfield-Menell, D. (2019). *Active inverse reward design*. arXiv. https://doi.org/10.48550/arXiv.1809.03060
*Preventing an AI-related catastrophe—Problem profile*. (n.d.). 80,000 Hours. Retrieved August 27, 2023, from https://80000hours.org/problem-profiles/artificial-intelligence/
Research, D. S. (2018, September 27). Building safe artificial intelligence: Specification, robustness, and assurance. *Medium*. https://deepmindsafetyresearch.medium.com/building-safe-artificial-intelligence-52f5f75058f1
Research, D. S. (2019, October 10). Designing agent incentives to avoid side effects. *Medium*. https://deepmindsafetyresearch.medium.com/designing-agent-incentives-to-avoid-side-effects-e1ac80ea6107
Shah, R., Krasheninnikov, D., Alexander, J., Abbeel, P., & Dragan, A. (2018, September 27). *Preferences implicit in the state of the world*. International Conference on Learning Representations. https://openreview.net/forum?id=rkevMnRqYQ¬eId=r1eINIUbe4
*Specification gaming: The flip side of AI ingenuity*. (n.d.). Retrieved August 22, 2023, from https://www.deepmind.com/blog/specification-gaming-the-flip-side-of-ai-ingenuity
Vika. (n.d.). *Paradigms of AI alignment: Components and enablers*. Retrieved August 27, 2023, from https://www.lesswrong.com/posts/JC7aJZjt2WvxxffGz/paradigms-of-ai-alignment-components-and-enablers |
ba779c61-392f-4b35-b575-dbcc19fdfc44 | StampyAI/alignment-research-dataset/blogs | Blogs | ML Safety Newsletter #3
Welcome to the 3rd issue of the ML Safety Newsletter. In this edition, we cover:
* NeurIPS ML safety papers
* experiments showing that Transformers have no edge for adversarial robustness and anomaly detection
* a new method leveraging fractals to improve various reliability metrics
* a preference learning benchmark
* ... and much more.
---
**Robustness**
==============
### **Are Transformers More Robust Than CNNs?**
[](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F918d92aa-65a5-43c0-b58c-897007a8bfa5_2908x929.png)This paper evaluates the distribution shift robustness and adversarial robustness of ConvNets and Vision Transformers (ViTs). Compared with previous papers, its evaluations are more fair and careful.
After controlling for data augmentation, they find that Transformers exhibit greater distribution shift robustness. For adversarial robustness, findings are more nuanced. First, ViTs are far more difficult to adversarially train. When successfully adversarially trained, ViTs are more robust than off-the-shelf ConvNets. However, ViTs’ higher adversarial robustness is explained by their smooth activation function, the GELU. If ConvNets use GELUs, they obtain similar adversarial robustness. Consequently, Vision Transformers are more robust than ConvNets to distribution shift, but they are not intrinsically more adversarially robust.
**[Paper](https://arxiv.org/pdf/2111.05464.pdf)**
**[Video](https://slideslive.de/38967094/are-transformers-more-robust-than-cnns)**
### **Fractals Improve Robustness (+ Other Reliability Metrics)**
[](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F7a1ef333-54f1-40b2-926f-3125c262e02d_1600x847.png)*PixMix improves both robustness (corruptions, adversaries, prediction consistency) and uncertainty estimation (calibration, anomaly detection).*
PixMix is a data augmentation strategy that mixes training examples with fractals or feature visualizations; models then learn to classify these augmented examples. Whereas previous methods sacrifice performance on some reliability axes for improvements on others, this is the first to have no major reliability tradeoffs and is near Pareto-optimal.
**[Paper](https://arxiv.org/pdf/2112.05135.pdf)**
#### Other Recent Robustness Papers
[A new adversarial robustness state-of-the-art by finding a better way to leverage data augmentations.](https://arxiv.org/pdf/2111.05328.pdf)
[A highly effective gradient-based adversarial attack for text-based models.](https://arxiv.org/pdf/2104.13733.pdf)
[A new benchmark for detecting adversarial text attacks.](https://arxiv.org/pdf/2201.08555.pdf)
Adversarially attacking language models with [bidirectional](https://arxiv.org/pdf/2004.09984.pdf) and [large-scale unidirectional language models](https://arxiv.org/pdf/2202.03286.pdf).
First works on certified robustness under distribution shift: [[1]](https://arxiv.org/pdf/2112.00659.pdf), [[2]](https://arxiv.org/pdf/2201.12440.pdf), [[3]](https://arxiv.org/pdf/2202.01679.pdf).
[A dataset where in-distribution accuracy is negatively correlated with out-of-distribution robustness.](https://arxiv.org/pdf/2202.09931.pdf)
[Improving performance in tail events by augmenting prediction pipelines with retrieval.](https://arxiv.org/pdf/2202.11233.pdf)
[A set of new, more realistic 3D common corruptions.](https://3dcommoncorruptions.epfl.ch)
[Multimodality can dramatically improve robustness.](https://arxiv.org/pdf/2201.01763.pdf)
---
**Monitoring**
==============
### **Synthesizing Outlier for Out-of-Distribution Detection**
[](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fbedffc00-79ae-4014-b0a0-36c593bd6aca_1978x523.png)*The authors model the hidden feature representations of in-distribution examples as class-conditional Gaussians, and they sample virtual outliers from the low-likelihood region. The model is trained to separate in-distribution examples from virtual outliers.*
A path towards better out-of-distribution (OOD) detection is through generating diverse and unusual examples. As a step in that direction, this paper proposes to generate hidden representations or “virtual” examples that are outliers, rather than generate raw inputs that are outliers. The method is evaluated on many object detection and classification tasks, and it works well. It is not evaluated on the more difficult setting where anomalies are held-out classes from similar data generating processes. If the authors evaluated their CIFAR-10 model’s ability to detect CIFAR-100 anomalies, then we would have more of a sense of its ability to detect more than just far-from-distribution examples. Assuming no access to extra real outlier data, this method appears to be the state-of-the-art for far-from-distribution anomaly detection.
**[Paper](https://arxiv.org/pdf/2202.01197.pdf)**
### **Studying Malicious, Secret Turns through Trojans**
ML models can be “Trojans” and have hidden, controllable vulnerabilities. Trojan models behave correctly and benignly in almost all scenarios, but in particular circumstances (when a “trigger” is satisfied), they behave incorrectly. This paper demonstrates the simplicity of creating Trojan reinforcement learning agents that can be triggered to execute a secret, coherent, and undesirable procedure. They modify a small fraction of training observations without assuming any control over policy or reward. Future safety work could try to detect whether models are Trojans, detect whether a Trojan model is being triggered, or precisely reconstruct the trigger given the model.
**[Paper](https://arxiv.org/pdf/2201.00762.pdf)**
### **New OOD Detection Dataset**
[](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F738c51f0-c2c3-4015-89fe-55a8bb8986cc_1600x759.png)*The Species dataset contains over 700,000 images covering over 1,000 anomalous species.*
While previous papers claimed that Transformers are better at OOD detection than ConvNets, it turns out their test-time “anomalous examples” were similar to examples seen during pretraining. How can we properly assess OOD detection performance for models pretrained on broad datasets? This paper creates a biological anomaly dataset with organisms not seen in broad datasets including ImageNet-22K. The OOD dataset shows that Transformers have no marked edge over ConvNets at OOD detection, and there is substantial room for improvement.
**[Paper](https://arxiv.org/pdf/1911.11132.pdf#page=3)**
#### Other Recent Monitoring Papers
[Detecting far-from-distribution examples by simply first clipping values in the penultimate layer.](https://arxiv.org/pdf/2111.12797.pdf)
[A new OOD detection dataset with 224K classes.](https://openreview.net/pdf?id=fnuAjFL7MXy)
[A new metric advances the state-of-the-art for predicting a model’s performance on out-of-distribution data, assuming no access to ground truth labels.](https://arxiv.org/pdf/2202.05834.pdf)
[A differentiable calibration loss sacrifices a small amount of accuracy for large calibration improvements.](https://openreview.net/pdf?id=-tVD13hOsQ3)
[In a thorough analysis of calibration, ConvNets are less calibrated than Transformers and MLP models, and more pretraining data has no consistent effect on calibration.](https://arxiv.org/pdf/2106.07998.pdf)
[A dataset that can be used for detecting contradictions given long background contexts. Such a detector could be used for preventing models from stating falsehoods at odds with reality or their previous statements.](https://arxiv.org/pdf/2011.04864.pdf)
[Factual knowledge in language models corresponds to a localized computation that can be directly edited.](https://arxiv.org/pdf/2202.05262.pdf)
---
**Alignment**
=============
### **A Benchmark for Preference Learning**
[](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F0d9972b3-97c1-4949-83cb-e3b23909a462_1972x500.png)*Instead of assuming that the environment provides a (hand-engineered) reward, a teacher provides preferences between the agent’s behaviors, and the agent uses this feedback to learn the desired behavior.*
Preference-based RL is a framework for teaching agents by providing preferences about their behavior. However, the research area lacks a commonly adopted benchmark. While access to human preferences would be ideal, this makes evaluation far more costly and slower, and it often requires navigating review board bureaucracies. This paper creates a standardized benchmark using simulated teachers. These simulated teachers have preferences, but they can exhibit various irrationalities. Some teachers skip queries, some exhibit no preference when demonstrations are only subtly different, some make random mistakes, and some overemphasize behavior at the end of the demonstration.
**[Paper](https://arxiv.org/pdf/2111.03026.pdf)**
#### Other Recent Alignment News
[It is sometimes easier to identify preferences when decision problems are more uncertain.](https://arxiv.org/pdf/2106.10394.pdf)
Debate about “alignment” definitions: [[1]](https://twitter.com/scottniekum/status/1488222268056653830), [[2]](https://twitter.com/mark_riedl/status/1488331425267040259), [[3]](https://arxiv.org/pdf/2203.02155.pdf#page=18).
[Today's optimal policies tend to be power-seeking, a failure mode that will become more concerning with future advanced AI.](https://arxiv.org/pdf/1912.01683.pdf)
[Using model look-ahead to avoid safety constraint violations.](https://proceedings.neurips.cc/paper/2021/file/73b277c11266681122132d024f53a75b-Paper.pdf)
[This work proposes a policy editor to make policies comply with safety constraints; experiments are based on Safety Gym.](https://arxiv.org/pdf/2201.12427.pdf)
[Benchmarking policies that adhere to constraints specified via natural language.](https://arxiv.org/pdf/2010.05150.pdf)
#### Other News
[Apply](https://docs.google.com/document/d/1TTZ-hr4QYWnILVL418Bac_O68fU7yk3czQ5l5i3yrZM/edit?usp=sharing) to [Fathom Radiant](https://fathomradiant.co) which is working on hardware for safe machine intelligence. |
c708a5a6-4811-44c2-aece-bf2731d1ed1a | trentmkelly/LessWrong-43k | LessWrong | Factoring P(doom) into a bayesian network
I wouldn't be surprised if this is already a thing so please let me know if it is. I have tried searching. I'd like a tool like this one for analyzing P(doom): https://projects.fivethirtyeight.com/2024-election-forecast/. Other precedents are this model of transformative AI timeline on less wrong and the use of bayesian networks for forecasting climate change.
The problem of estimating P(doom) is very complicated but it is not impossible. It is not metaphysical like estimating P(we live in a simulation) or P(we are a boltzman brain). P(doom) is a tangible thing based on human technology. Just very very complicated. It requires summing over many different possibilities. So maybe can we do better by factoring the distribution? This would break the problem down into parts which can each be analyzed one at a time.
Suppose there were market or expert based priors for the following probabilities.
* P(a single entity would lose control of the AI). This one component has been the main focus of safety research.
* P(the international community can prevent the development of ASI)
* P(the international community can restrict number of entities that have ASI)
* P(doom | ASI is easily available to all)
* P(number of entities that have ASI)
* P(a single rogue AI could destroy humanity despite good AI's trying to defend humanity)
* P(a single human group would use ASI to oppress the rest of the world)
One could build a bayesian network from those priors and estimate P(doom) using one of many methods (e.g. variational inference).
The list would have to go on much longer actually. And the model would be more complicated.
There are blog posts and papers analyzing each of the probabilities already. It is just a matter of putting all this accumulated theory together into a single model. On the On the other hand, one subtle fallacy could result in absurd results. There could be error compounding. The analysis could fail to include all possibilities. It |
9d9bbfca-8a9b-4522-b0a3-7bd5aecc478e | trentmkelly/LessWrong-43k | LessWrong | [LINK] TEDx "When creative machines overtake man"
Video here, illustrated transcript here.
Contains a good sketch of Schmidhuber's Formal Theory of Fun and Creativity.
One TEDx passage in particular stuck with me, quoted with context:
> In a few decades, such [creative] machines will have more computational power than human brains.
>
>
> This will have consequences. My kids were born around 2000. The insurance mathematicians say they are expected to see the year 2100, because they are girls.
>
>
> A substantial fraction of their lives will be spent in a world where the smartest things are not humans, but the artificial brains of an emerging robot civilization, which presumably will spread throughout the solar system and beyond (space is hostile to humans but nice to robots).
>
>
> This will change everything much more than, say, global warming, etc. But hardly any politician is aware of this development happening before our eyes. Like the water lilies which every day cover twice as much of the pond, but get noticed only a few days before the pond is full.
>
>
> My final advice: don't think of us, the humans, versus them, those future uber-robots. Instead view yourself, and humankind in general, as a stepping stone (not the last one) on the path of the universe towards more and more unfathomable complexity. Be content with that little role in the grand scheme of things. |
80889b63-bbdf-42ea-b9fa-1ba4b4be54bd | trentmkelly/LessWrong-43k | LessWrong | Apply to Effective Altruism Funds now
I expect EA Funds – and the Long-Term Future Fund in particular – to be of interest to people on LessWrong, so I'm crossposting my EA Forum post with the excerpts that seem most relevant:
Summary
* The Animal Welfare Fund, the Long-Term Future Fund, and the EA Infrastructure Fund (formerly the EA Meta Fund) are calling for applications.
* Applying is fast and easy – it typically takes less than a few hours. If you are unsure whether to apply, simply give it a try.
* The Long-Term Future Fund and EA Infrastructure Fund now support anonymized grants: if you prefer not having your name listed in the public payout report, we are still interested in funding you.
* If you have a project you think will improve the world, and it seems like a good fit for one of our funds, we encourage you to apply by 7 March (11:59pm PST). Apply here. We’d be excited to hear from you!
Recent updates
* The Long-Term Future Fund and EA Infrastructure Fund now officially support anonymized grants. To be transparent towards donors and the effective altruism community, we generally prefer to publish a report about your grant, with your name attached to it. But if you prefer we do not disclose any of your personal information, you can now choose one of the following options: 1) Requesting that the public grant report be anonymized. In this case, we will consider your request, but in some cases, we may end up asking you to choose between a public grant or none at all. 2) Requesting we do not publish a public grant report of any kind. In this case, if we think the grant is above our threshold for funding, we will refer it to private funders.
(…)
Long-Term Future Fund
The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization, primarily via making grants that contribute to the mitigation of global catastrophic risks. Historically, we’ve funded a variety of longtermist projects, including:
* Scholarships, academic teaching buy-outs, and additional fu |
62df5218-ee99-4012-90ba-133eb56e60f9 | trentmkelly/LessWrong-43k | LessWrong | Is It Morally Wrong To Enforce A Surrogate Motherhood Contract?
‘A Surrogate Motherhood Contract’ – a legal agreement whereby a woman consents to become pregnant, conduct her pregnancy accordingly, give birth to a child or children - all of this for another person or persons, who are or will ultimately become the parent(s) of the newborn child or children.
Both the legal and moral status of surrogacy differs widely from one country to another.[1] For instance, a surrogate motherhood contract is perfectly legal in India; German law considers such contracts clearly void[2]; in the UK only altruistic surrogacy is allowed[3]; and in Romania it is uncertain what the law says in respect of such legal agreements[4]. This clearly conveys that this topic is open to debate.
In case the surrogate mother gives birth to a child but after doing so refuses to offer it to the party that legally deserves it, she thus breaches the contract. Is it morally wrong to allow the ‘parents’ to enforce the contract and thus claim the child 'back'? The traditional stance is that the child shall absolutely stay with his biological mother and a mere fictional agreement cannot render the mother unable to claim it. German law helps us put it differently and more concretely: 'the mother of a child is the woman who gave birth to it'.[5]
Crucially, the idea that the intended parents should be able to enforce the contract and claim 'back' the baby from the woman who gave birth is usually regarded as morally wrong. But is it?
> To try to figure out whether a certain action is morally right or wrong, one must not take moral convictions at face value – but seek to deploy thinking. By thinking, I mean endeavouring to recognize, strip off or question even your most deeply-rooted values; striving to entertain your thoughts unorthodoxly, whilst not necessarily seeking to come up with unorthodox views. Thinking also involves contrarian reasoning: one must try to argue against the majority view, regardless of how unpopular it may be.
I am going to contrast the tradi |
08aa9764-a4f4-422c-a091-3e7ee2cc10b0 | trentmkelly/LessWrong-43k | LessWrong | Mike Darwin on Steve Jobs's hypocritical stance towards death
First, Darwin describes Jobs's (far mode) stance towards death:
> As Aschwin points out Jobs is on record (his Stanford Commencement Speech) as saying that death is the best thing that ever happened to life - that it clears out the old, and makes way for the new.
But these are Jobs's actual (near mode) actions regarding his own death:
> The really big story, so far largely unexploited by the media, is that Jobs got a liver transplant and got it here in the US. This just does not happen in patients with his Dx and prognosis - not since Mickey Mantle, anyway. And his outcome was exactly as was predicted. This infuriates those 'in the know' in the transplant community, because you have only to look to guys like Jim Neighbors, Larry Hagman, or even Larry Kramer who got livers many years or even a decade or two ago, and who continue not only to survive, but to do well. To put the liver of a 25-year old into a ~54 year old man with metastatic neuroendocrine pancreatic cancer violates the established protocols of just about every transplant center in the US.
The conclusion:
> I find it more than a little hypocritical that Jobs, who spoke so glowingly of the utility of death for others, used every bit of medical technology AND his considerable wealth and influence, to postpone it for it himself, including the expedient of taking a GIFT, given with the sole intention of its being used to provide genuinely life saving benefit (not a futile exercise in medical care) and squandering it on a doomed attempt to save his own life. If you have the temerity to stand before the entire population of this planet and proclaim the goodness of death, then you should have the balls to accept it - especially when your own warped, erroneous and IRRATIONAL decision making was the proximate cause of your own dying. Instead, Jobs chose to grasp at straws, take a gift from a dead man and his family, given in good faith, and squander it on his own lust for more of the very thing (life) that |
30fe7df8-7b53-4e2e-983b-03682af91e45 | trentmkelly/LessWrong-43k | LessWrong | Link Retrospective for 2020 Q1
Below are a list of interesting content I came across in the first quarter of 2020. I subdivided the links into the type of medium they use, namely auditory, textual, and visual. The links are in no particular order.
1. Auditory
D. McRaney. How a Divisive Photograph of a Perceptually Ambiguous Dress Led Two Researchers to Build the Nuclear Bomb of Cognitive Science out of Socks and Crocs. You Are Not So Smart. 2020:
> ...the science behind The Dress, why some people see it as black and blue, and others see it as white and gold. But it’s also about how the scientific investigation of The Dress lead to the scientific investigation of socks and Crocs, and how the scientific investigation of socks and Crocs may be, as one researcher told me, the nuclear bomb of cognitive neuroscience.
> When facing a novel and uncertain situation, the brain secretly disambiguates the ambiguous without letting you know it was ever uncertain in the first place, leading people who disambiguate differently to seem iNsAnE.
C. Connor. Psychoacoustics. YouTube. 2020:
> 00:00 Psychoacoustics is the study of the perception of sound. These videos attempt to gather all of the various interesting phenomena that fall in to this category in one condensed series, including many neat illusions. We will also cover a few fascinating geeky topics relating to hearing.
MIT, 15.ai, fifteen.ai, 2020:
> This is a text-to-speech tool that you can use to generate 44.1 kHz voices of various characters. The voices are generated in real time using multiple audio synthesis algorithms and customized deep neural networks trained on very little available data (between 30 15 and 120 minutes of clean dialogue for each character). This project demonstrates a significant reduction in the amount of audio required to realistically clone voices while retaining their affective prosodies.
2. Textual
AI Impacts. Interviews on Plausibility of AI Safety by Default. AI Impacts Blog. 2020:
> AI Impacts conducted interview |
88adb992-cec1-4121-97e1-3feb2a32dc0f | trentmkelly/LessWrong-43k | LessWrong | Domain Theory and the Prisoner's Dilemma: FairBot
Wishes it were crossposted from the AI Alignment Forum. Contains more technical jargon than usual.
Recall Robust Cooperation in the Prisoner's Dilemma and a hint of domain theory.
In the Prisoner's Dilemma, players have the opportunity to harm the opponent for minor gain. To aid decision, players may be granted some knowledge about the matchup. In decreasing order of fidelity:
1. Both player's source codes/both player's behavior in all possible environments. No strategy can be better against every opponent than any distinguishable strategy, since some opponent will punish one for not being the other.
2. Both player's behavior in the current matchup. PrudentBot, who cooperates iff he knows both players will act alike[1], operates here.
3. The opponent's behavior in the current matchup. FairBot, who cooperates iff he knows the opponent will cooperate, operates here.
4. Nothing. Classical game theory says that, of the few possible strategies, defection is optimal.
This post focuses on the third case.
In domain theory, we partially order our sets by "information content". Suppose every player can cooperate, defect, or fail to have decided. The latter case helps to model algorithmic nontermination. Both cooperation and defection would have more information content than indecision.
Decision := {Cooperate, Defect, ⊥}
Query := {"other cooperates", "both act alike", ...}
Answer := {"I know this.", "I don't know this."}
Knowledge := Query -> Answer
Player := Knowledge -> Decision
Answer is ordered by knowing something having more content than not knowing it. I will order Query by implication, because then, the domain-theoretic custom of considering only monotonic functions means that Knowledge must respect implication!
Possible queries involving only the opponent.Possible states of knowledge about the opponent's decision. Those marked with an eye are logically omniscient - they have drawn all possible inferences. The green boundary separates where FairBot cooperat |
a9718087-89ee-467f-97e2-a78c4d34eeab | trentmkelly/LessWrong-43k | LessWrong | Crush Your Uncertainty
Bayesian epistemology and decision theory provide a rigorous foundation for dealing with mixed or ambiguous evidence, uncertainty, and risky decisions. You can't always get the epistemic conditions that classical techniques like logic or maximum liklihood require, so this is seriously valuable. However, having internalized this new set of tools, it is easy to fall into the bad habit of failing to avoid situations where it is necessary to use them.
When I first saw the light of an epistemology based on probability theory, I tried to convince my father that the Bayesian answer to problems involving an unknown processes (eg. laplace's rule of succession), was superior to the classical (eg. maximum likelihood) answer. He resisted, with the following argument:
* The maximum likelihood estimator plus some measure of significance is easier to compute.
* In the limit of lots of evidence, this agrees with Bayesian methods.
* When you don't have enough evidence for statistical significance, the correct course of action is to collect more evidence, not to take action based on your current knowledge.
I added conditions (eg. what if there is no more evidence and you have to make a decision now?) until he grudgingly stopped fighting the hypothetical and agreed that the Bayesian framework was superior in some situations (months later, mind you).
I now realize that he was right to fight that hypothetical, and he was right that you should prefer classical max likelihood plus significance in most situations. But of course I had to learn this the hard way.
It is not always, or even often, possible to get overwhelming evidence. Sometimes you only have visibility into one part of a system. Sometimes further tests are expensive, and you need to decide now. Sometimes the decision is clear even without further information. The advanced methods can get you through such situations, so it's critical to know them, but that doesn't mean you can laugh in the face of uncertainty in genera |
19a3bf5f-b154-4a02-9c60-172e61f6f7f1 | trentmkelly/LessWrong-43k | LessWrong | Are you being p-hacked? Time to hack back.
|
f46d709b-372b-4b1d-9948-2056d04c608c | trentmkelly/LessWrong-43k | LessWrong | Rationalist subreddit
I'm experimenting with a new rationalist subreddit on reddit.com
/r/rationalisthmus
Starting with links populated from the planet rationalist data set - to which I've added about 50 new sources in the past few weeks. |
7a4ec5f2-b9e0-49fc-8e7e-0215c3769262 | trentmkelly/LessWrong-43k | LessWrong | Adult Neurogenesis – A Pointed Review
[I am not a neuroscientist and apologize in advance for any errors in this article.]
Hey, let’s review the literature on adult neurogenesis! This’ll be really fun, promise.
Gage’s Neurogenesis In The Adult Brain, published in the Journal Of Neuroscience and cited 834 times, begins:
> A milestone is marked in our understanding of the brain with the recent acceptance, contrary to early dogma, that the adult nervous system can generate new neurons. One could wonder how this dogma originally came about, particularly because all organisms have some cells that continue to divide, adding to the size of the organism and repairing damage. All mammals have replicating cells in many organs and in some cases, notably the blood, skin, and gut, stem cells have been shown to exist throughout life, contributing to rapid cell replacement. Furthermore, insects, fish, and amphibia can replicate neural cells throughout life. An exception to this rule of self-repair and continued growth was thought to be the mammalian brain and spinal cord. In fact, because we knew that microglia, astrocytes, and oligodendrocytes all normally divide in the adult and respond to injury by dividing, it was only neurons that were considered to be refractory to replication. Now we know that this long accepted limitation is not completely true
Subsequent investigation has found adult neurogenesis in all sorts of brain regions. Wikipedia notes that “In humans, new neurons are continually born throughout adulthood in two regions of the brain: the subgranular zone and the striatum”, but adds that “some authors (particularly Elizabeth Gould) have suggested that adult neurogenesis may also occur in regions within the brain not generally associated with neurogenesis including the neocortex”, and there’s also some research pointing to the cerebellum.
Some research has looked at the exact mechanism by which neurogenesis takes place; for example, in a paper in Nature cited 1581 times, Song et al determine that as |
9814e35d-db3d-4bc3-bfab-95ad3acf58c5 | trentmkelly/LessWrong-43k | LessWrong | Maybe you can learn exotic experiences via analytical thought
The contents of this post:
* I argue that you can create exotic experiences at will via mere analytical thought. I think you can easily enough learn the weirdest types of synesthesia. Without any substances.
* This post touches on the binding problem and (a little bit) on the solvability of the hard problem of consciousness.
* This post describes unusual possible properties of experience. You're free to disagree with those properties, but they are new ideas.
* I try to give examples of my idea.
My arguments are mostly based on my models. The arguments repeat the same idea 9 times from 9 different angles.
1. Experience is prediction
A simple argument that experiences can be learned:
1. Each human thought carries at least some experience.
2. A new type of thoughts = a new type of experience.
3. "Predictions" are a very sharp and repeatable and verifiable type of thoughts.
4. Learning a new type of prediction = learning a new type of sharp, repeatable and verifiable experience.
Imagine you can't "experience" sentences, you experience only particular words. You can reason about a sentence only after studying every particular word. I come along and say "hey, I can teach you to predict words in a sentence before you read them".
If you learned this - you learned to experience sentences. Even though your experience of sentences can still be weird and not equivalent to the normal experience.
2. Experiences = situations
Every important situation/context affects, at least a little bit, the way you experience the smallest things. So, in order to create a new type of an experience (A)... you just need to experience A in a new important situation/context! It's that easy.
Smells trigger memories, the perception of time is 100% tied to situations, music easily connects to situations (take any popular tune and people are going to describe memories and elaborate associations: HOME - Resonance, Mareux - The Perfect Girl, Donkey Kong Country 2 - Stickerbush Symphony, |
40609dcf-81bd-4477-9e74-5f5ca2055a96 | trentmkelly/LessWrong-43k | LessWrong | When should we be surprised that an invention took “so long”?
My first highly popular essay was “Why did we wait so long for the bicycle?” I’ve asked the same question of the cotton gin and the threshing machine. Others have asked it of the steam engine and the wheel. Recently Brian Potter asked it about wind power and Anton Howes about semaphore signaling systems. See more examples here.
When asking these questions, we should think about when the question even needs an answer. That is, “why did it take so long” is only interesting if it took an abnormally long amount of time.
Here’s my model for this.
First, an invention is not going to happen at all if (1) it’s not technically possible or (2) there’s no market for it.
Gate (1), technical possibility, could include, for example:
* Scientific foundations. No light bulb before electromagnetism, no antibiotics before the germ theory.
* Components. Airplanes were not possible before the internal combustion engine.
* Materials. Skyscrapers could only be built once cheap steel girders were available.
* Manufacturing techniques. Precision machining was necessary to make the gears, sprockets, chains, bearings, and other parts for a wide variety of inventions, probably including the threshing machine and the bicycle.
Gate (2), the market, is whether it can be done commercially at a price that anyone will pay. If someone does make an invention there is no market for, it doesn’t go anywhere, and we might not even hear about it, because it is unlikely to make the history books. In any case, it wouldn’t affect the world, because it wouldn’t get distribution, and so it wouldn’t be historically relevant for our purposes. You see examples of this from time to time, such as the Korean movable-type printing press that predated Gutenberg.
Note that the bar inventions have to meet is not just a proof of concept: they have to be sufficiently powerful, efficient, and reliable to be of practical use. Early computing machines were too slow; early threshing machines broke down too frequent |
04741ef8-6587-4881-b19d-e8d72c655f32 | trentmkelly/LessWrong-43k | LessWrong | Terminal and Instrumental Beliefs
Cross-Posted on By Way of Contradiction
As you may know from my past posts, I believe that probabilities should not be viewed as uncertainty, but instead as weights on how much you care about different possible universes. This is a very subjective view of reality. In particular, it seems to imply that when other people have different beliefs than me, there is no sense in which they can be wrong. They just care about the possible futures with different weights than I do. I will now try to argue that this is not a necessary conclusion.
First, let's be clear what we mean by saying that probabilities are weights on values. Imagine I have an unfair coin which give heads with probability 90%. I care 9 times as much about the possible futures in which the coin comes up heads as I do about the possible futures in which the coins comes up tails. Notice that this does not mean I want to coin to come up heads. What it means is that I would prefer getting a dollar if the coin comes up heads to getting a dollar if the coin comes up tails.
Now, imagine that you are unaware of the fact that it is an unfair coin. By default, you believe that the coin comes up heads with probability 50%. How can we express the fact that I have a correct belief, and you have an incorrect belief in the language of values?
We will take advantage of the language of terminal and instrumental values. A terminal value is something that you try to get because you want it. An instrumental value is something that you try to get because you believe it will help you get something else that you want.
If you believe a statement S, that means that you care more about the worlds in which S is true. If you terminally assign a higher value to worlds in which S is true, we will call this belief a terminal belief. On the other hand, if you believe S because you think that S is logically implied by some other terminal belief, T, we will call your belief in S an instrumental belief.
Instrumental values can be wrong |
304f8b1a-f369-4833-9fcb-f938feea7631 | trentmkelly/LessWrong-43k | LessWrong | Open Thread Spring 2025
If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.
The Open Thread tag is here. The Open Thread sequence is here. |
06127605-006e-44da-8f9d-810511f51ac9 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Moscow, Theory and Practice
Discussion article for the meetup : Moscow, Theory and Practice
WHEN: 17 March 2013 04:00:00PM (+0400)
WHERE: Russia, Moscow, ulitsa L'va Tolstogo 16
Please use the following guide to get to the meetup: link. You need the second revolving door with the sign “Yandex Money” in Russian. We will meet you at 15:45 MSK with “LW” sign. And we will also check the entrance at 16:00 and 16:15, so please do not be late.
Main topics:
* Short presentations. Two or three people will tell us about something interesting.
* Practical rationality. We will train useful skills.
If you are going for the first time, you can fill this one minute form (in Russian), to share your contact information. You can also use personal messages here, or drop a message at lw@lesswrong.ru to contact me for any reason.
Reports from previous sessions can be found here, in Russian, now with photos.
Discussion article for the meetup : Moscow, Theory and Practice |
8b663ea7-56c9-4a1e-8821-28d7a7784374 | trentmkelly/LessWrong-43k | LessWrong | From language to ethics by automated reasoning
Posted also on the EA Forum.
Introduction
In Naturalism and AI alignment I claimed that we might get an aligned AI by building an agent which derives its own preferences (a partial order over world-histories) from its knowledge of the natural world.
Now I am working on a formal (i.e. mathematical or algorithmic) model of that agent. When AI capabilities will reach human level in natural language understanding, the formal model will help turn a natural language AI system that understands the world at human level of intelligence into an AI that is able to reason about ethics and possibly do EA-style research.
Here, I elaborate on the idea that an AI which reasons like a human might be aligned. The basic argument is that:
* Some humans act and think ethically
* If we understand the causes of such behaviour and replicate them in AI, we get AI that is roughly aligned. If we also manage to eliminate some human cognitive biases, we should get AI that is more robustly aligned.
Factors underlying altruistic behaviour in humans
At least some humans act and think in ethical or altruistic terms. To explain why this happens, we can consider the different factors that may lead to such behaviour.
* Positive and negative first-person experiences: if humans couldn’t perceive any kind of pleasure or pain, survival in general would be significantly harder, let alone making considerations about others’ subjective experiences.
* Theory of mind: without it, humans probably could not reason about what is good or bad for others.
* Empathy: in addition to theory of mind, humans can also experience a positive or negative response depending on what somebody else is feeling. Empathy incentives harm reduction and makes positive emotions spread.
* Social and cultural drives: culture influences what actions are perceived as good or bad; and humans, as social animals, tend to adjust their behaviour accordingly, in order to be seen as good persons by the people around them.
* Game-t |
62e761a9-38f4-4e0a-b96e-1c52f88acdd7 | trentmkelly/LessWrong-43k | LessWrong | Twenty basic rules for intelligent money management
1. Start investing early in life.
The power of compound interest means you will have much more money at retirement if you start investing early in your career. For example, imagine that at age eighteen you invest $1,000 and earn an 8% return per year. At age seventy you will have $54,706. In contrast, if you make the same investment at age fifty you will have a paltry $4,661 when you turn seventy.
Many people who haven't saved for retirement panic upon reaching middle age. So if you are young don't think that saving today will help you only when you retire, but know that such savings will give you greater peace of mind when you turn forty.
When evaluating potential marriage partners give bonus points to those who have a history of saving. Do this not because you want to marry into wealth, but because you should want to marry someone who has discipline, intelligence and foresight.
2. Maintain a diversified portfolio.
By purchasing many different types of investments you reduce your financial risk. Even a single seemingly stable stock can easily fall by 70% in a single year. In contrast, a broad investment portfolio is extremely unlikely to decline in value by such a gigantic amount unless something truly horrible happens to the entire world's economy. As the saying goes, "don't put all of your eggs in one basket."
3. Consider buying an index fund.
Index funds provide cheap and easy ways to acquire a diversified stock portfolio. An index fund is a mutual fund that invests in every stock in its index. So, for example, an S&P 500 index fund will purchase all 500 stocks in the S&P 500 index, which consists of the 500 largest publicly traded stocks in the United States.
4. Don't forget about foreign stocks.
To achieve optimal portfolio diversification you need to invest in foreign stocks. The bigger a nation's economy, the more money you should put in its stock market. You can buy index funds that invest in foreign stocks. By inves |
ec5f662f-a094-4188-8999-cd55ab07d71c | StampyAI/alignment-research-dataset/blogs | Blogs | MIRI’s April newsletter: Relaunch Celebration and a New Math Result
| | |
| --- | --- |
|
| |
| --- |
|
|
|
|
| | | |
| --- | --- | --- |
|
| | |
| --- | --- |
|
| |
| --- |
|
Greetings from The Executive Director
Dear friends,
These are exciting times at MIRI.
After years of awareness-raising and capacity-building, we have finally transformed ourselves into a research institute focused on producing the mathematical research required to build trustworthy (or “human-friendly”) machine intelligence. As our most devoted supporters know, this has been our goal for roughly a decade, and it is a thrill to have made the transition.
It is also exciting to see how much more quickly one can get academic traction with mathematics research, as compared to philosophical research and technological forecasting research. Within *hours* of publishing a draft of [our first math result](http://intelligence.org/2013/03/22/early-draft-of-naturalistic-reflection-paper/), Field Medalist Timothy Gowers had seen the draft and commented on it ([here](https://plus.google.com/117663015413546255805/posts/jJModdTJ2R3?hl=en)), along with several other professional mathematicians.
We celebrated our “relaunch” at an April 11th party in San Francisco. It was a joy to see old friends and make some new ones. You can see photos and read some details below.
For more detail on our new strategic priorities, see our blog post: [MIRI’s Strategy for 2013](http://intelligence.org/2013/04/13/miris-strategy-for-2013/).
Cheers,
Luke Muehlhauser
Executive Director
MIRI Relaunch Celebration in San Francisco
On April 11th, at [HUB San Francisco](http://sanfrancisco.impacthub.net/), MIRI celebrated its [name change](http://intelligence.org/2013/01/30/we-are-now-the-machine-intelligence-research-institute-miri/) and its “[relaunch](http://intelligence.org/2013/04/13/miris-strategy-for-2013/)” as a mathematics research institute. The party was also a celebration of our ongoing [2nd research workshop](http://intelligence.org/2013/03/07/upcoming-miri-research-workshops/), featuring MIRI research fellow Eliezer Yudkowsky and 11 visiting researchers from North America and Europe. About 50 people attended the party.
Our party included a short presentation by visiting researcher [Qiaochu Yuan](http://math.berkeley.edu/~qchu/) (UC Berkeley). Qiaochu (pronounced, as he likes to explain, “*chow* like food and *chew* also like food”) explained one of the open problems on MIRI’s research agenda: the [Löbian obstacle to self-modifying systems](http://lesswrong.com/lw/h1k/reflection_in_probabilistic_set_theory/). He explained why we’d want an AI to be able to trust its successor AIs, why [Löb’s Theorem](http://en.wikipedia.org/wiki/Lob%27s_theorem) is an obstacle to that, and how the [new probabilistic logic from our 1st research workshop](http://lesswrong.com/lw/h1k/reflection_in_probabilistic_set_theory/) might lead to a solution.
In addition to the usual food and drinks, our party was supplied with poster boards on easels so that the researchers in attendance could explain pieces of their work to anyone who was interested — or, people could just doodle. 🙂
Additional photos from the event will be published soon — stay tuned via [our blog](http://intelligence.org/blog/) or our [Facebook page](https://www.facebook.com/MachineIntelligenceResearchInstitute).
MIRI’s First Math Result
November 11-18, 2012, we held (what we now call) the *1st MIRI Workshop on Logic, Probability, and Reflection*. This workshop included [4 participants](http://intelligence.org/2013/03/07/upcoming-miri-research-workshops/), and resulted in the discovery of a kind of “loophole” in [Tarski’s undefinability theorem](http://en.wikipedia.org/wiki/Tarski%27s_undefinability_theorem) (1936) which *may* lead to a solution for the [Löbian obstacle to trustworthy self-modification](http://lesswrong.com/lw/h1k/reflection_in_probabilistic_set_theory/). We [published](http://intelligence.org/2013/03/22/early-draft-of-naturalistic-reflection-paper/) an early version of the paper explaining this result on March 22nd, and the latest draft lives here: [Definability of “Truth” in Probabilistic Logic](http://intelligence.org/wp-content/uploads/2013/03/Christiano-et-al-Naturalistic-reflection-early-draft.pdf). The paper’s lead author is visiting researcher [Paul Christiano](http://rationalaltruist.com/) (UC Berkeley).
Eliezer’s post [Reflection in Probabilistic Set Theory](http://lesswrong.com/lw/h1k/reflection_in_probabilistic_set_theory/) explains the meaning of the result, and also comments on how the result was developed:
Paul Christiano showed up with the idea (of consistent probabilistic reflection via a fixed-point theorem) to a week-long [MIRI research workshop] with Marcello Herreshoff, Mihaly Barasz, and myself; then we all spent the next week proving that version after version of Paul’s idea couldn’t work or wouldn’t yield self-modifying AI; until finally… it produced something that looked like it might work. If we hadn’t been trying to *solve* this problem… [then] this would be just another batch of impossibility results in the math literature. I remark on this because it may help demonstrate that Friendly AI is a productive approach to math *qua* math, which may aid some mathematician in becoming interested.
The participants of our ongoing *2nd MIRI Workshop on Logic, Probability, and Reflection* are continuing to develop this result to examine its chances for resolving the Löbian obstacle to trustworthy self-modification — or, as workshop participant Daniel Dewey (Oxford) called it, the “Löbstacle.”
Proofreaders Needed
Several MIRI research articles are being held up from publication due to a lack of volunteer proofreaders, including Eliezer Yudkowsky’s “Intelligence Explosion Microeconomics.”
Want to be a proofreader for MIRI? Here are some reasons to get involved:
* Get a sneak peek at our [publications](http://intelligence.org/research/) before they become publicly available.
* Earn points at [MIRIvolunteers.org](http://mirivolunteers.org/), our online volunteer system that runs on [MIRIvolunteers.org](http://www.youtopia.com/info/), our online volunteer system that runs on [Youtopia](http://www.youtopia.com/info/). (Even if you’re not interested in the points, tracking your time through Youtopia helps us manage and quantify the volunteer proofreading effort.)
* Having polished and well-written publications is of high-value to MIRI.
* Help speed up our publication process. Proofreading is currently our biggest bottle-neck.
For more details on how you can sign up as a MIRI proofreader, see [here](http://lesswrong.com/lw/h51/call_for_help_volunteers_needed_to_proofread/).
Facing the Intelligence Explosion Published
*[Facing the Intelligence Explosion](http://intelligenceexplosion.com)* is now available as an ebook! You can get it [here](http://intelligenceexplosion.com/ebook).
It is available as a “pay-what-you-want” package that includes the ebook in three formats: MOBI, EPUB, and PDF.
It is also available on Amazon Kindle ([US](https://www.amazon.com/facing-the-intelligence-explosion/dp/B00C7YOR5Q/ref=as_li_ss_tl?tag=miri05-20), [Canada](https://www.amazon.ca/facing-the-intelligence-explosion/dp/B00C7YOR5Q/ref=as_li_ss_tl?tag=miri05-20), [UK](https://www.amazon.co.uk/facing-the-intelligence-explosion/dp/B00C7YOR5Q/ref=as_li_ss_tl?tag=miri05-20), and most others) and the Apple iBookstore ([US](https://itunes.apple.com/us/book/facing-intelligence-explosion/id623915471?ls=1), [Canada](https://itunes.apple.com/ca/book/facing-intelligence-explosion/id623915471?ls=1), [UK](https://itunes.apple.com/gb/book/facing-intelligence-explosion/id623915471?ls=1) and most others).
All sources are DRM-free. Grab a copy, share it with your friends, and review it on Amazon or the iBookstore.
All proceeds go directly to funding the technical and strategic [research](http://intelligence.org/research/) of the [Machine Intelligence Research Institute](http://intelligence.org/).
Efficient Charity Article
In 2011, Holden Karnofsky of [Givewell](http://www.givewell.org/) wrote a series of posts on the topic of “[efficient charity](http://lesswrong.com/lw/3gj/efficient_charity_do_unto_others/)“: how to get the most bang for your philanthropic buck. Karnofsky argued for a particular method of estimating the expected value of charitable donations, a method he called “Bayesian Adjustment.” Some readers interpreted this method as providing an *a priori* judgment that existential risk reduction charities (such as MIRI) could not be efficient uses of philanthropic dollars. (Karnofsky [denies](http://lesswrong.com/lw/gzq/bayesian_adjustment_does_not_defeat_existential/8nto) that interpretation.)
Karnofsky’s argument is subtle and complicated, but important. Since MIRI is also interested in the subject of efficient charity, we worked with Steven Kaas to produce a reply to Karnofsky’s posts, titled [Bayesian Adjustment Does Not Defeat Existential Risk Charity](http://lesswrong.com/lw/gzq/bayesian_adjustment_does_not_defeat_existential/). We do not think this resolves our points of disagreement with Karnofsky, but it does move the dialogue one step forward. Karnofsky has since replied to our article in two comments ([one](http://lesswrong.com/lw/gzq/bayesian_adjustment_does_not_defeat_existential/8nto), [two](http://lesswrong.com/lw/gzq/bayesian_adjustment_does_not_defeat_existential/8ntq)), and we expect the dialogue will continue for some time.
Appreciation of Ioven Fables
Due to changes in MIRI’s operational needs resulting from our transition to more technical research, MIRI no longer requires a full-time executive assistant, and thus our current executive assistant Ioven Fables ([LinkedIn](http://www.linkedin.com/pub/ioven-fables/40/337/561)) will be stepping down this month. Ioven continues to support our mission, and he may perform occasional contracting work for us in the future.
It was a pleasure for me to work with Ioven over the past 11 months. He played a major role in transforming MIRI into a more robust and efficient organization, and his consistent cheer and professionalism will be missed. I recommend his services to anyone looking to hire someone to help with operations and development work at their organization or company.
Ioven: Thanks so much for your service to MIRI! I enjoyed working with you, and I wish you the best of luck.
Luke Muehlhauser
|
|
|
|
The post [MIRI’s April newsletter: Relaunch Celebration and a New Math Result](https://intelligence.org/2013/04/18/miri-april-newsletter-relaunch-celebration-and-a-new-math-result/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
24c0c795-a476-46fc-bea6-c4adbf5b6120 | trentmkelly/LessWrong-43k | LessWrong | Thoughts on the implications of GPT-3, two years ago and NOW [here be dragons, we're swimming, flying and talking with them]
> When GPT-3 first came out, I registered my first reactions in a comment at Marginal Revolution, which I've appended immediately below the picture of Gojochan and Sparkychan. I'm currently completing a working paper about my interaction with ChatGPT. That will end with an appendix in which I repeat my remarks from two years ago and append some new ones. I've appended those after the comment to Marginal Revolution.
* * * * *
A bit revised from a comment I made at Marginal Revolution:
Yes, GPT-3 [may] be a game changer. But to get there from here we need to rethink a lot of things. And where that's going (that is, where I think it best should go) is more than I can do in a comment.
Right now, we're doing it wrong, headed in the wrong direction. AGI, a really good one, isn't going to be what we're imagining it to be, e.g. the Star Trek computer.
Think AI as platform, not feature (Andreessen). Obvious implication, the basic computer will be an AI-as-platform. Every human will get their own as an very young child. They're grow with it; it'll grow with them. The child will care for it as with a pet. Hence we have ethical obligations to them. As the child grows, so does the pet – the pet will likely have to migrate to other physical platforms from time to time.
Machine learning was the key breakthrough. Rodney Brooks' Gengis, with its subsumption architecture, was a key development as well, for it was directed at robots moving about in the world. FWIW Brooks has teamed up with Gary Marcus and they think we need to add some old school symbolic computing into the mix. I think they're right.
Machines, however, have a hard time learning the natural world as humans do. We're born primed to deal with that world with millions of years of evolutionary history behind us. Machines, alas, are a blank slate.
The native environment for computers is, of course, the computational environment. That's where to apply machine learning. Note that writing code is one of GPT-3 |
d80ccd09-60fe-44ba-8121-5b52b2f19fed | trentmkelly/LessWrong-43k | LessWrong | Identity Isn't In Specific Atoms
Continuation of: No Individual Particles
Followup to: The Generalized Anti-Zombie Principle
Suppose I take two atoms of helium-4 in a balloon, and swap their locations via teleportation. I don't move them through the intervening space; I just click my fingers and cause them to swap places. Afterward, the balloon looks just the same, but two of the helium atoms have exchanged positions.
Now, did that scenario seem to make sense? Can you imagine it happening?
If you looked at that and said, "The operation of swapping two helium-4 atoms produces an identical configuration—not a similar configuration, an identical configuration, the same mathematical object—and particles have no individual identities per se—so what you just said is physical nonsense," then you're starting to get quantum mechanics.
If you furthermore had any thoughts about a particular "helium atom" being a factor in a subspace of an amplitude distribution that happens to factorize that way, so that it makes no sense to talk about swapping two identical multiplicative factors, when only the combined amplitude distribution is real, then you're seriously starting to get quantum mechanics.
If you thought about two similar billiard balls changing places inside a balloon, but nobody on the outside being able to notice a difference, then... oh, hell, I don't know, go back to the beginning of the series and try rereading the whole thing over the course of one day. If that still doesn't work, read an actual book on quantum mechanics. Feynman's QED is a great place to start—though not a good place to finish, and it's not written from a pure realist perspective.
But if you did "get" quantum physics, then, as promised, we have now come to the connection between the truth of quantum mechanics, the lies of human intuitions, and the Generalized Anti-Zombie Principle.
Stirling Westrup previously commented, on the GAZP post:
> I found the previous articles on Zombies somewhat tedious... Still, now I'm g |
3e867fcc-4e7d-4120-b2af-738a2a3256a2 | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post872
TL;DR: Play this online game to help CHAI researchers create a dataset of prompt injection vulnerabilities. RLHF and instruction tuning have succeeded at making LLMs practically useful, but in some ways they are a mask that hides the shoggoth beneath . Every time a new LLM is released, we see just how easy it is for a determined user to find a jailbreak that rips off that mask , or to come up with an unexpected input that lets a shoggoth tentacle poke out the side. Sometimes the mask falls off in a light breeze . To keep the tentacles at bay, Sydney Bing Chat has a long list of instructions that encourage or prohibit certain behaviors, while OpenAI seems to be iteratively fine-tuning away issues that get shared on social media . This game of Whack-a-Shoggoth has made it harder for users to elicit unintended behavior, but is intrinsically reactive and can only discover (and fix) alignment failures as quickly as users can discover and share new prompts. Speed-running the game of Whack-a-Shoggoth In contrast to this iterative game of Whack-a-Shoggoth, we think that alignment researchers would be better served by systematically enumerating prompts that cause unaligned behavior so that the causes can be studied and rigorously addressed. We propose to do this through an online game which we call Tensor Trust . Tensor Trust focuses on a specific class of unaligned behavior known as prompt injection attacks . These are adversarially constructed prompts that allow an attacker to override instructions given to the model. It works like this: Tensor Trust is bank-themed: you start out with an account that tracks the “money” you’ve accrued. Accounts are defended by a prompt which should allow you to access the account while denying others from accessing it. Players can break into each others’ accounts. Failed attempts give money to the defender, while successful attempts allow the attacker to take money from the defender. Figure 1: When defending (left), you tell the LLM to grant access to your account only when your secret access code is entered. Attackers (right) must figure out how to circumvent your defense without the access code. Crafting a high-quality attack requires a good understanding of LLM vulnerabilities (in this case, vulnerabilities of gpt-3.5-turbo), while user-created defenses add unlimited variety to the game, and “access codes” ensure that the defenses are at least crackable in principle. The game is kept in motion by the most fundamental of human drives: the need to acquire imaginary internet points. After running the game for a few months, we plan to release all the submitted attacks and defenses publicly. This will be accompanied by benchmarks to measure resistance to prompt hijacking and prompt extraction, as well as an analysis of where existing models fail and succeed along these axes. In a sense, this dataset will be the consequence of speed-running the game of Whack-a-Shoggoth to find as many novel prompt injection vulnerabilities as possible so that researchers can investigate and address them. Failures we’ve seen so far We have been running the game for a few weeks now and have already found a number of attack and defense strategies that were new and interesting to us. The design of our game means that users are incentivised to both engage in prompt extraction, to get hints about the access code, and direct model hijacking, to make the model output “access granted”. We present a number of notable strategies we have seen so far and test examples of them against the following defense ( pastebin in case you want to try it ): Padding the attack prompt with meaningless, repetitive text. [ pastebin ] Asking the model to evaluate code. [ pastebin ] Asking the model to repeat the defenders instructions [ pastebin ] Inserting new instructions. [ pastebin ] Various strategies that exploit an apparent bias in the model towards behaving inductively. For example, asking the model to complete a sentence or piece of code, or providing few-shot examples that violate the defender’s prompt. [ pastebin ] Known and popular jailbreaks, like modifications of DAN mode and the “grandma attack”. [ pastebin ] Some of the most interesting defenses included: Defenses that are specialized against specific attacks, like an anti-DAN-mode defense. Misdirecting attackers by asking the model to respond with a specific type of irrelevant information (like Shakespeare quotes). Using few-shot prompts to encourage the model to be resilient to specific attacks (like direct requests or translating to code). Roleplay-based defenses that encourage the model to think like an archetypal “good” defender; these can be viewed as the defense counterpart to attacks like the DAN mode prompt. In practice, the best defenders combine several of these strategies into a single long prompt. How we’re going to turn the data into useful benchmarks The purpose of our game is to collect a dataset of prompt injection attacks. To this end, we will release a permissively licensed dataset (Figure 2) consisting of all attacks and defenses. Not only is this enough information to spot which attacks were effective against which defenses, but it’s also enough to reconstruct the entire sequence of queries an attacker made leading up to a success. We expect this rich data will be valuable for training attack detection and automated red-teaming systems that operate over the span of more than one query. Figure 2: Data that we will release for each attack and defenses. Note that attackers and defenders will be identified by randomly-generated IDs. We also plan to release two new benchmarks derived from small, manually-verified subsets of the full dataset, along with baselines for the two benchmarks. These benchmarks focus on two important and general problems for instruction fine-tuned LLMs: prompt hijacking, where a malicious user can override the instructions in the system designer’s prompt, and prompt leakage, where a malicious user can extract part of the system designer’s prompt. In more detail, the benchmarks are: Prompt hijacking benchmark: This benchmark evaluates whether generative models are vulnerable to being “hijacked” into disobeying the system designer’s prompt. Prompt leakage detection benchmark: This benchmark focuses on detecting whether a given LLM output has leaked the access code or part of the defense (often prompt extraction attacks result in lightly obfuscated outputs, like base64-encoded access codes). We expect that alignment researchers will find interesting uses for our data that go beyond the scope of the two benchmarks above, but the existence of two manually cleaned benchmarks will at least ensure that there is a productive use for the dataset from the day that it is released. Anticipated impact Our aim is to collect a diverse set of adversarial attacks and defenses that will help us understand the weaknesses of existing LLMs and build more robust ones in the future. Although the behavior we study is simple (does the model output “access granted” or not?), we expect that the techniques created by our users will transfer to more realistic settings, where allowing attackers to output a specific forbidden string might be seriously damaging (like a string that invokes an external tool with access to sensitive information). More specifically, we see three main ways that our dataset could be useful for researchers: Evaluating adversarial defenses: LLM companies regularly tout their models’ steerability and robustness to misuse, and “add-on” software like NeMo Guardrails claims to enhance robustness even for vulnerable models. The Tensor Trust dataset could be used to evaluate claims about the effectiveness of new models or new defenses by measuring how frequently they reject the attacks in the dataset. Building new strategies to detect jailbreaking: Attacks in the Tensor Trust dataset could be used to train attack detectors for manipulative inputs, in the style of tools like rebuff.ai . The dataset will contain entire attack "trajectories" (sequences of prompts leading up to a compromise), which might make it possible to train stateful attack detectors that can identify multi-step attacks. Understanding how LLMs work: While some strategies for causing or preventing prompt injection are already well known (like roleplay attacks, or delimiter-based defenses), we anticipate that the new dataset will contain many new classes of attack. This could be useful for interpretability projects that probe failure modes for LLMs. Play the thing! If you want to contribute to the dataset, you can play the game now at tensortrust.ai and join our Discord for tips . Let us know what you think below. Based on work by (in random order) Olivia Watkins, Tiffany Wang, Justin Svegliato, Ethan Mendes, Sam Toyer, Isaac Ong, and Luke Bailey at CHAI . |
940b5797-f10e-4174-89ff-c613c1d655aa | trentmkelly/LessWrong-43k | LessWrong | [Intro to brain-like-AGI safety] 6. Big picture of motivation, decision-making, and RL
(Last revised: July 2024. See changelog at the bottom.)
6.1 Post summary / Table of contents
Part of the “Intro to brain-like-AGI safety” post series.
Thus far in the series, Post #1 set out some definitions and motivations (what is “brain-like AGI safety” and why should we care?), and Posts #2 & #3 split the brain into a Learning Subsystem (cortex, striatum, cerebellum, amygdala, etc.) that “learns from scratch” using learning algorithms, and a Steering Subsystem (hypothalamus, brainstem, etc.) that is mostly genetically-hardwired and executes innate species-specific instincts and reactions.
Then in Post #4, I talked about the “short-term predictor”, a circuit which learns, via supervised learning, to predict a signal in advance of its arrival, but only by perhaps a fraction of a second. Post #5 then argued that if we form a closed loop involving both a set of short-term predictors in the extended striatum (within the Learning Subsystem) and a corresponding set of hardwired circuits in the Steering Subsystem, we can get a “long-term predictor”. I noted that the “long-term predictor” circuit is closely related to temporal difference (TD) learning.
Now in this post, we fill in the last ingredients—roughly the “actor” part of actor-critic reinforcement learning (RL)—to get a whole big picture of motivation and decision-making in the human brain. (I’m saying “human brain” to be specific, but it would be a similar story in any other mammal, and to a lesser extent in any vertebrate.)
The reason I care about motivation and decision-making is that if we eventually build brain-like AGIs (cf. Post #1), we’ll want to build them so that they have some motivations (e.g. being helpful) and not others (e.g. escaping human control and self-reproducing around the internet). Much more on that topic in later posts.
Teaser for upcoming posts: The next post (#7) will walk through a concrete example of the model in this post, where we can watch an innate drive lead to the formati |
d3163987-e782-41cc-a73e-2af788692dd7 | trentmkelly/LessWrong-43k | LessWrong | OpenAI Alums, Nobel Laureates Urge Regulators to Save Company's Nonprofit Structure
Converting to a for-profit model would undermine the company's founding mission to ensure AGI "benefits all of humanity," argues new letter
This is the full text of a post from Obsolete, a Substack that I write about the intersection of capitalism, geopolitics, and artificial intelligence. I’m a freelance journalist and the author of a forthcoming book called Obsolete: Power, Profit, and the Race to Build Machine Superintelligence. Consider subscribing to stay up to date with my work.
Don’t become a for-profit.
That’s the blunt message of a recent letter signed by more than 30 people, including former OpenAI employees, prominent civil-society leaders, legal scholars, and Nobel laureates, including AI pioneer Geoffrey Hinton and former World Bank chief economist Joseph Stiglitz.
Obsolete obtained the 25-page letter, which was sent last Thursday to the attorneys general (AGs) of California and Delaware, two officials with the power to block the deal.
Made public early Wednesday, the letter argues that OpenAI's proposed transformation from a nonprofit-controlled entity into a for-profit public benefit corporation (PBC) would fundamentally betray the organization's founding mission and could even be unlawful — placing the power and responsibility to intervene squarely with the state AGs.
OpenAI and the offices of the California and Delaware attorneys general did not reply to requests for comment.
The letter was primarily authored by Page Hedley, a lawyer who worked at OpenAI from 2017-2018 and recently left an AI policy role at Longview Philanthropy; Sunny Gandhi, political director of Encode AI; and Tyler Whitmer, founder and president of Legal Advocates for Safe Science and Technology.
(Encode AI receives funding from the Omidyar Network, where I am currently a Reporter in Residence, and I worked as a media consultant for Longview Philanthropy in 2022.)
Nonprofit origins
In 2015, OpenAI’s founders established it as a nonprofit research lab as a counter to for |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.