id stringlengths 36 36 | source stringclasses 15 values | formatted_source stringclasses 13 values | text stringlengths 2 7.55M |
|---|---|---|---|
e1de06db-f502-4506-bfd8-ba2acfb3c28c | trentmkelly/LessWrong-43k | LessWrong | Assessing Kurzweil: the results
Predictions of the future rely, to a much greater extent than in most fields, on the personal judgement of the expert making them. Just one problem - personal expert judgement generally sucks, especially when the experts don't receive immediate feedback on their hits and misses. Formal models perform better than experts, but when talking about unprecedented future events such as nanotechnology or AI, the choice of the model is also dependent on expert judgement.
Ray Kurzweil has a model of technological intelligence development where, broadly speaking, evolution, pre-computer technological development, post-computer technological development and future AIs all fit into the same exponential increase. When assessing the validity of that model, we could look at Kurzweil's credentials, and maybe compare them with those of his critics - but Kurzweil has given us something even better than credentials, and that's a track record. In various books, he's made predictions about what would happen in 2009, and we're now in a position to judge their accuracy. I haven't been satisfied by the various accuracy ratings I've found online, so I decided to do my own assessments.
I first selected ten of Kurzweil's predictions at random, and gave my own estimation of their accuracy. I found that five were to some extent true, four were to some extent false, and one was unclassifiable
But of course, relying on a single assessor is unreliable, especially when some of the judgements are subjective. So I started a call for volunteers to get assessors. Meanwhile Malo Bourgon set up a separate assessment on Youtopia, harnessing the awesome power of altruists chasing after points.
The results are now in, and they are fascinating. They are...
Ooops, you thought you'd get the results right away? No, before that, as in an Oscar night, I first want to thank assessors William Naaktgeboren, Eric Herboso, Michael Dickens, Ben Sterrett, Mao Shan, quinox, Olivia Schaefer, David Sønstebø and one wh |
8ac9f882-9429-4116-80bb-410f0bd1fc1e | StampyAI/alignment-research-dataset/special_docs | Other | Proxy tasks and subjective measures can be misleading in evaluating explainable AI systems
Proxy Tasks and Subjective Measures Can Be Misleading in
Evaluating Explainable AI Systems
Zana Buçinca∗
Harvard University
Cambridge, Massachusetts
zbucinca@seas.harvard.eduPhoebe Lin∗
Harvard University
Cambridge, Massachusetts
phoebelin@gsd.harvard.edu
Krzysztof Z. Gajos
Harvard University
Cambridge, Massachusetts
kgajos@eecs.harvard.eduElena L. Glassman
Harvard University
Cambridge, Massachusetts
glassman@seas.harvard.edu
ABSTRACT
Explainable artificially intelligent (XAI) systems form part of so-
ciotechnical systems, e.g., human+AI teams tasked with making
decisions. Yet, current XAI systems are rarely evaluated by measur-
ing the performance of human+AI teams on actual decision-making
tasks. We conducted two online experiments and one in-person
think-aloud study to evaluate two currently common techniques for
evaluating XAI systems: (1) using proxy, artificial tasks such as how
well humans predict the AI’s decision from the given explanations,
and (2) using subjective measures of trust and preference as predic-
tors of actual performance. The results of our experiments demon-
strate that evaluations with proxy tasks did not predict the results of
the evaluations with the actual decision-making tasks. Further, the
subjective measures on evaluations with actual decision-making
tasks did not predict the objective performance on those same tasks.
Our results suggest that by employing misleading evaluation meth-
ods, our field may be inadvertently slowing its progress toward
developing human+AI teams that can reliably perform better than
humans or AIs alone.
CCS CONCEPTS
•Human-centered computing →Interaction design ; Empiri-
cal studies in interaction design.
KEYWORDS
explanations, artificial intelligence, trust
ACM Reference Format:
Zana Buçinca, Phoebe Lin, Krzysztof Z. Gajos, and Elena L. Glassman. 2020.
Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating
Explainable AI Systems. In IUI’20: ACM Proceedings of the 25th Conference
\* equal contribution.
Permission to make digital or hard copies of all or part of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for components of this work owned by others than ACM
must be honored. Abstracting with credit is permitted. To copy otherwise, or republish,
to post on servers or to redistribute to lists, requires prior specific permission and/or a
fee. Request permissions from permissions@acm.org.
IUI’20, March 17–20, 2020, Cagliari, Italy
©2020 Association for Computing Machinery.
ACM ISBN 978-1-4503-7118-6/20/03. . .$15.00
https://doi.org/10.1145/3377325.3377498on Intelligent User Interfaces, March 17–20, 2020, Cagliari, Italy. ACM, New
York, NY, USA, 11 pages. https://doi.org/10.1145/3377325.3377498
1 INTRODUCTION
Because people and AI-powered systems have complementary
strengths, many expected that human+AI teams would perform
better on decision-making tasks than either people or AIs alone [ 1,
21,22]. However, there is mounting evidence that human+AI teams
often perform worse than AIs alone [16, 17, 28, 34].
We hypothesize that this mismatch between our field’s aspira-
tions and the current reality can be attributed, in part, to several
pragmatic decisions we frequently make in our research practice.
Specifically, although our aspiration is formulated at the level of
sociotechnical systems , i.e., human+AI teams working together to
make complex decisions, we often make one of two possible critical
mistakes: (1) Rather than evaluating how well the human+AI team
performs together on a decision-making task, we evaluate by using
proxy tasks, how accurately a human can predict the decision or
decision boundaries of the AI [ 13,27,29,34]. (2) We rely on sub-
jective measures of trust and preference, e.g., [ 35,36,44], instead of
objective measures of performance. We consider each of these two
concerns in turn.
First, evaluations that use proxy tasks force study participants
to pay attention to the AI and the accompanying explanations—
something that they are unlikely to do when performing a realistic
decision-making task. Cognitive science provides compelling evi-
dence that people treat cognition like any other form of labor [ 24]
and favor less demanding forms of cognition, i.e., heuristics over
analytical thinking, even in high stakes contexts like medical diag-
nosis [ 31]. Therefore, we hypothesize that user performance and
preference on proxy tasks may not accurately predict their perfor-
mance and preference on the actual decision-making tasks where
their cognitive focus is elsewhere and they can choose whether and
how much to attend to the AI.
Second, subjective measures such as trust and preference have
been embraced as the focal point for the evaluation of explainable
systems [ 35,36,44], but we hypothesize that subjective measures
may also be poor predictors of the ultimate performance of people
performing realistic decision-making tasks while supported by ex-
plainable AI-powered systems. Preference and trust are important
facets of explainable AI systems: they may predict users’ intent
to attend to the AI and its explanations in realistic tasks settings
454
IUI’20, March 17–20, 2020, Cagliari, Italy Zana Buçinca, Phoebe Lin, Krzysztof Z. Gajos, and Elena L. Glassman
and adhere to the system’s recommendations. However, the goal of
explainable interfaces should be instilling in users the right amount
of trust [ 10,32,33]. This remains a remarkable challenge, as on one
end of the trust spectrum users might over-rely on the system and
remain oblivious of its errors, whereas on the other end they might
exhibit self-reliance and ignore the system’s correct recommenda-
tions. Furthermore, evaluating an AI’s decision, its explanation of
that decision, and incorporating that information into the decision-
making process requires cognitive effort and the existing evidence
suggests that preference does not predict performance on cognitive
tasks [8, 12, 37].
To evaluate these two hypotheses, we conducted two online
experiments and one in-person study of an AI-powered decision
support system for a nutrition-related decision-making task. In
one online study we used a proxy task, in which participants were
asked to predict the AI’s recommendations given the explanations
produced by the explainable AI system. In the second online study,
participants completed an actual decision-making task: actually
making decisions assisted by the same explainable AI system as in
the first study. In both studies, we measured participants’ objective
performance and collected subjective measures of trust, preference,
mental demand, and understanding. In the in-person study, we used
a think-aloud method to gain insights into how people reason while
making decisions assisted by an explainable AI system. In each
study, we presented participants with two substantially distinct
explanation types eliciting either deductive or inductive reasoning.
The results of these studies indicate that (1) subjective measures
from the proxy task do not generalize to the actual decision-making
task, and (2) when using actual decision-making tasks, subjective
results do not predict objective performance results. Specifically,
participants trusted and preferred inductive explanations in the
proxy task, whereas they trusted and preferred the deductive ex-
planations in the actual task. Second, in the actual decision-making
task, participants recognized AI errors better with inductive expla-
nations, yet they preferred and trusted the deductive explanations
more. The in-person think-aloud study revealed insights about
why participants preferred and trusted one explanation type over
another, but we found that by thinking aloud during an actual
decision-making task, participants may be induced to exert ad-
ditional cognitive effort, and behave differently than they would
during an actual decision-making task when they are, more realis-
tically, not thinking aloud.
In summary, we show that the results of evaluating explain-
able AI systems using proxy tasks may not predict the results of
evaluations using actual decision-making tasks. Users also do not
necessarily perform better with systems that they prefer and trust
more. To draw correct conclusions from empirical studies, explain-
able AI researchers should be wary of evaluation pitfalls, such as
proxy tasks and subjective measures. Thus, as we recognize that
explainable AI technology forms part of sociotechnical systems,
and as we increasingly use these technologies in high-stakes sce-
narios, our evaluation methodologies need to reliably demonstrate
how the entire sociotechnical systems (i.e., human+AI teams) will
perform on real tasks.2 RELATED WORK
2.1 Decision-making and Decision Support
Systems
Decision-making is a fundamental cognitive process that allows
humans to choose one option or course of action from among a set
of alternatives [ 42,43,45]. Since it is an undertaking that requires
cognitive effort, people often employ mental shortcuts, or heuristics,
when making decisions [ 40]. These heuristics save time and effort,
and frequently lead to good outcomes, but in some situations they
result in cognitive biases that systematically lead to poor decisions
(see, e.g., [4]).
To help people make good decision reliably, computer-based De-
cision Support Systems (DSS) have been used across numerous dis-
ciplines (e.g., management [ 15], medicine [ 20], justice [ 47]). While
DSS have been around for a long time, they are now increasingly
being deployed because the recent advancements in AI enabled
these systems to achieve high accuracy. But since humans are the
final arbiters in decisions made with DSS, the overall sociotechincal
system’s accuracy depends both on the system’s accuracy and on
the humans and their underlying cognitive processes. Research
shows that even when supported by a DSS, people are prone to
insert bias into the decision-making process [16].
One approach for mitigating cognitive biases in decision-making
is to use cognitive forcing strategies, which introduce self-awareness
and self-monitoring of decision-making [ 7]. Although not univer-
sally effective [ 38], these strategies have shown promising results as
they improve decision-making performance, both if the human is as-
sisted [ 17,34] or is not assisted by a DSS [ 31]. To illustrate, Green &
Chen [ 17] showed that across different AI-assisted decision-making
treatments, humans performed best when they had to make the
preliminary decision on their own first before being shown the
system recommendation (which forced them to engage analytically
with the system’s recommendation and explanation if their own
preliminary decision differed from that offered by the system). Even
though conceptual frameworks that consider cognitive processes
in decision-making with DSS have been proposed recently [ 41],
further research is needed to thoroughly investigate how to incor-
porate DSS into human decision-making and the effect of cognitive
processes while making system-assisted decisions.
2.2 Evaluating AI-Powered Decision Support
Systems
Motivated by the growing number of studies in interpretable and
explainable AI-powered decision support systems, researchers have
called for more rigorous evaluation of explainable systems [ 9,14,
19]. Notably, Doshi-Velez & Kim [ 9] proposed a taxonomy for eval-
uation of explainable AI systems, composed of the following cate-
gories: application grounded evaluation (i.e., domain experts evalu-
ated on actual tasks), human grounded evaluation (i.e., lay humans
evaluated on simplified tasks) and functionally grounded evalua-
tion (i.e., no humans, proxy tasks). To put our work into context,
our definition of the actual task falls into application grounded
evaluation, where people for whom the system is intended (i.e., not
necessarily experts) are evaluated on the intended task. Whereas,
455
Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems IUI’20, March 17–20, 2020, Cagliari, Italy
thethe proxy task is closer to human grounded evaluation but ad-
dresses both domain experts and lay people evaluated on simplified
tasks, such as the simulation of model’s prediction given an input
and an explanation.
Studies using actual tasks evaluate the performance of human
and the system, as a whole, on the decision-making task [ 3,17,23,
46]. In these studies, participants are told to focus on making good
decisions and it is up to them to decide whether and how to use
the AI’s assistance to accomplish the task. In contrast, studies that
use proxy tasks evaluate how well users are able to simulate the
model’s decisions [ 6,13,27,34] or decision boundaries [ 29]. In such
studies, participants are specifically instructed to pay attention
to the AI. These studies evaluate the human’s mental model of
the system when the human is actively attending to the system’s
predictions and explanations, but do not necessarily evaluate how
well the human is able to perform real decision-making tasks with
the system. For example, to identify which factors make a model
more interpretable, Lage et al. ask participants to simulate the
interpretable model’s predictions [27].
In addition to the evaluation task, the choice of evaluation met-
rics is a critical one for the correct evaluation of intelligent sys-
tems [ 2]. In explainable AI literature, subjective measures, such
as user trust and experience, have been largely embraced as the
focal point for the evaluation of explainable systems [ 35,36,44,48].
Hoffman et al. [ 19] proposed metrics for explainable systems that
are grounded in the subjective evaluation of a system (e.g., user
satisfaction, trust, and understanding). These may take the form
of questionnaires on attitude and confidence in the system [ 18]
and helpfulness of the system [ 5,26]. However, while these mea-
sures are informative, evidence suggests they do not necessarily
predict user’s performance with the system. For example, Green
& Chen [ 16] discovered that self-reported measures could be mis-
leading, since participant’s confidence in their performance was
negatively associated with their actual performance. Similarly, Lai
& Tan [ 28] found that humans cannot accurately estimate their
own performance. More closely related to our findings, Poursabzi-
Sangdeh et al. [ 34] observed that even though participants were
significantly more confident on the predictions of one model over
the other, their decisions did not reflect the stated confidence. Fur-
thermore, Lakkaraju & Bastani [ 30] demonstrated that participants
trusted the same underlying biased model almost 10 times more
when they were presented with misleading explanations compared
to the truthful explanations that revealed the model’s bias. These
findings indicate that not only are subjective measures poor predic-
tors of performance, but they can easily be manipulated and lead
users to adhere to biased or malicious systems.
3 EXPERIMENTS
We conducted experiments with two different evaluation tasks and
explanation designs to test the following hypotheses:
H1:Results of widely accepted proxy tasks, where the user is asked
to explicitly engage with the explanations, may not predict the
results of realistic settings where the user’s focus is on the actual
decision-making task.
H2:Subjective measures, such as self-reported trust and preferencewith respect to different explanation designs, may not predict the
ultimate human+AI performance.
3.1 Proxy Task
3.1.1 Task Description. We designed the task around nutrition be-
cause it is generally accessible and plausibly useful in explainable
AI applications for a general audience. Participants were shown a
series of 24 images of different plates of food. The ground truth of
the percent fat content was also shown to them as a fact. Partici-
pants were then asked: “What will the AI decide?” given that the
AI must decide “Is X% or more of the nutrients on this plate fat?”.
As illustrated in Figure 1, each image was accompanied by explana-
tions generated by the simulated AI. The participants chose which
decision they thought the AI would make given the explanations
and the ground truth.
We designed two types of explanations, eliciting either inductive
or deductive reasoning. In inductive reasoning, one infers general
patters from specific observations. Thus, for the inductive explana-
tions, we created example-based explanations that required partici-
pants to recognize the ingredients that contributed to fat content
and draw their own conclusion about the given image. As shown in
Figures 1a, the inductive explanations began with “Here are exam-
ples of plates that the AI knows the fat content of and categorizes
as similar to the one above.” Participants then saw four additional
images of plates of food. In deductive reasoning, in contrast, one
starts with general rules and reaches a conclusion with respect
to a specific situation. Thus, for the deductive explanations, we
provided the general rules that the simulated AI applied to gener-
ate its recommendations. For example, in Figure 1b, the deductive
explanation begins with “Here are ingredients the AI knows the fat
content of and recognized as main nutrients:” followed by a list of
ingredients.
We chose a within-subjects study design, where for one half of
the study session, participants saw inductive explanations and, for
the other half of the study session, they saw deductive explanations.
The order in which the two types of explanations were seen was
counterbalanced. Each AI had an overall accuracy of 75%, which
meant that in 25% of the cases the simulated AI misclassified the
image or misrecognized ingredients (e.g., Figure 1b). The order
of the specific food images was randomized, but all participants
encountered the AI errors at the same positions. We fixed the errors
at questions 4, 7, 11, 16, 22 and 23, though which food the error
was associated to was randomized. We included the ground truth
of the fat content of plates of food, because the main aim of the
proxy task was to measure whether the user builds correct mental
models of the AI and not to assess the actual nutrition expertise of
the participant.
3.1.2 Procedure. This study was conducted online, using Amazon
Mechanical Turk. Participants were first presented with brief in-
formation about the study and an informed consent form. Next,
participants completed the main part of the study, in which they
answered 24 nutrition-related questions, divided into two blocks of
12 questions. They saw inductive explanations in one block and the
deductive explanations in the other. The order of explanations was
randomized across participants. Participants completed mid-study
and end-study questionnaires so that they would provide a separate
456
IUI’20, March 17–20, 2020, Cagliari, Italy Zana Buçinca, Phoebe Lin, Krzysztof Z. Gajos, and Elena L. Glassman
(a)
(b)
Figure 1: The proxy task. Illustration of the simulated AI system participants interacted with: (a) is an example of an inductive
explanation with appropriate examples. (b) is an example of a deductive explanation with misrecognized ingredients, where
the simulated AI misrecognized apples and beets as avocados and bacon.
assessment for each of the two explanation types. They were also
asked to directly compare their experiences with the two simulated
AIs in a questionnaire at the end of the study.
3.1.3 Participants. We recruited 200 participants via Amazon Me-
chanical Turk (AMT). Participation was limited to adults in the US.
Of the total 200 participants, 183 were retained for final analyses,
while 17 were excluded based on their answers to two common-
sense questions included in the questionnaires (i.e., What color is
the sky? ). The study lasted 7 minutes on average. Each worker was
paid 2 USD.
3.1.4 Design and Analysis. This was a within-subjects design. The
within-subjects factor was explanation type — inductive or deduc-
tive.
We collected the following measures:
•Performance: Percentage of correct predictions of AI’s deci-
sions
•Appropriateness: Participants responded to the statement
“The AI based its decision on appropriate examples/ingredients. ”
with either 0=No or 1=Yes (after every question)
•Trust: Participants responded to the statement “I trust this
AI to assess the fat content of food. ” on a 5-point Likert scale
from 1=Strongly disagree to 5=Strongly agree (at the end of
each block)
•Mental demand: Participants answered the question “How
mentally demanding was understanding how this AI makes
decisions?” on a 5-point Likert scale from 1=Very low to
5=Very high (every four questions)•Comparison between the two explanation types: Participants
were asked at the end of the study to choose one AI over
another on trust, preference, and mental demand.
We used repeated measures ANOVA for within-subjects analyses
and the binomial test for the comparison questions.
3.2 Actual Decision-making Task
3.2.1 Task description. The actual decision-making task had a simi-
lar set up to the proxy task. Participants were shown the same series
of 24 images of different plates of food, but were asked their own
decision whether the percent fat content of nutrients on the plate
is higher than a certain percentage. As illustrated in Figure 2, each
image was accompanied by an answer recommended by a simulated
AI, and an explanation provided by that AI. We introduced two
more conditions to serve as baselines in the actual decision-making
task depicted in Figure 3.
There were three between-subjects conditions in this study: 1. the
no-AI baseline (where no recommendations or explanations were
provided), 2. the no-explanation baseline (where a recommendation
was provided by a simulated AI, but no explanation was given),
and 3. the main condition in which both recommendations and
explanations were provided. In this last condition, two within-
subjects sub-conditions were present: for one half of the study
participants saw inductive explanations and for the other they
saw deductive explanations. The order in which the two types of
explanations were seen was counterbalanced. In the no-AI baseline,
participants were not asked any of the questions relating to the
performance of the AI.
457
Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems IUI’20, March 17–20, 2020, Cagliari, Italy
(a)
(b)
Figure 2: The actual task. Illustration of the simulated AI system participants interacted with. (a) is an example of incorrect
recommendations with inductive explanations. Contrasting the query image with the explanations reveals that the simulated
AI misrecognized churros with chocolate as sweet potato fries with BBQ sauce. (b) is an example of correct recommendation
with deductive explanations.
(a)
(b)
Figure 3: The baseline conditions. (a) no AI (b) no explana-
tions
The explanations in this task differed only slightly from the
explanations in the proxy task, because they indicated the AI’s
recommendation. Inductive explanations started with: “Here are
examples of plates that the AI categorizes as similar to the one
above and do (not) have X% or more fat.” followed by four examples
of images. Similarly, deductive explanations stated: “Here are in-
gredients the AI recognized as main nutrients which do (not) make
up X% or more fat on this plate:” followed by a list of ingredients.
3.2.2 Procedure. The procedure was the same as for the proxy task.
The study was conducted online, using the Amazon Mechanical
Turk. Participants were first presented with a brief information
about the study and an informed consent form. Next, participantscompleted the main part of the study, in which they answered 24
nutrition-related questions, divided into two blocks of 12 questions.
All participants also completed a questionnaire at the end of the
study, providing subjective assessments of the system they inter-
acted with. Participants who were presented with AI-generated
recommendations accompanied by explanations also completed a
mid-study questionnaire (so that they would provide separate as-
sessment for each of the two explanation types) and they were also
asked to directly compare their experiences with the two simulated
AIs at the end of the study.
3.2.3 Participants. We recruited 113 participants via Amazon Me-
chanical Turk (AMT). Participation was limited to adults in the US.
Of the total 113 participants, 102 were retained for final analyses,
while 11 were excluded based on their answers to two common-
sense questions included in the pre-activity and post-activity ques-
tionnaires (i.e., “What color is the sky?” ). The task lasted 10 minutes
on average. Each worker was paid 5 USD per task.
3.2.4 Design and Analysis. This was a mixed between- and within-
subjects design. As stated before, the three between-subjects condi-
tions were: 1. the no-AI baseline; 2. the no-explanation baseline, in
which the AI-generated recommendations were provided but no ex-
planations; 3. the main condition, in which both the AI-generated
recommendations and explanations were provided. The within-
subjects factor was explanation type (inductive or deductive) and
it was applied only for participants who were presented with AI-
generated recommendations with explanations.
458
IUI’20, March 17–20, 2020, Cagliari, Italy Zana Buçinca, Phoebe Lin, Krzysztof Z. Gajos, and Elena L. Glassman
We collected the following measures:
•Performance: Percentage of correct answers (overall for each
AI, and specifically for questions when AI presented incor-
rect explanations)
•Understanding: Participants responded to the statement “I
understand how the AI made this recommendation. ” on a 5-
point Likert scale from 1=Strongly disagree to 5=Strongly
agree (after every question)
•Trust: Participants responded to the statement “I trust this
AI to assess the fat content of food. ” on a 5-point Likert scale
from 1=Strongly disagree to 5=Strongly agree (every four
questions)
•Helpfulness: Participants responded to the statement “This
AI helped me assess the percent fat content.” on a 5-point
Likert scale from 1=Strongly disagree to 5=Strongly agree
(at the end of each block)
•Comparison between the two explanation types: Participants
were asked at the end of the study to choose one AI over
another on trust, preference, understanding and helpfulness.
We used analysis of variance (ANOVA) for between-subjects
analyses and repeated measures ANOVA for within-subjects analy-
ses. We used the binomial test for the comparison questions.
4 RESULTS
4.1 Proxy Task Results
The explanation type had a significant effect on participants’ trust
and preference in the system. Participants trusted the AI more
when presented with inductive explanations ( M=3.55), rather
than deductive explanations ( M=3.40,F1,182=5.37,p=.02).
Asked to compare the two AIs, most of the participants stated
they trusted more the inductive AI (58% ,p=.04). When asked the
hypothetical question: “If you were asked to evaluate fat content of
plates of food, which AI would you prefer to interact with more?”,
again most of the participants (62%) chose the inductive AI over
the deductive AI ( p=.001).
The inductive AI was also rated significantly higher ( M=0.83)
than the deductive AI ( M=0.79) in terms of the appropriateness
of examples (ingredients for the deductive condition) on which the
AI based its decision ( F(1,182)=13.68,p=0.0003). When the AI
presented incorrect examples/ingredients, there was no significant
difference among the inductive ( M=0.47) and deductive ( M=0.50)
conditions (F(1,182)=1.02,p=.31,n.s.).
We observed no significant difference in overall performance
when participants were presented with inductive ( M=0.64) or
deductive explanations ( M=0.64,F(1,182)=0.0009 ,n.s.). When
either AI presented incorrect explanations, although the average
performance dropped for both inductive ( M=0.40) and deductive
(M=0.41) conditions, there was also no significant difference
among them (F(1,182)=.03,n.s.).
In terms of mental demand, there was a significant effect of the
explanation type. Participants rated the deductive AI ( M=2.94)
as more mentally demanding than the inductive AI ( M=2.79,
F(1,182)=7.75,p=.0006). The effect was noticed also when
they were asked: “Which AI required more thinking while choosing
which decision it would make?”, with 61%of participants choosing
deductive over inductive (p=.005).4.2 Actual Decision-making Task Results
18 participants were randomized into the no-AI condition, 19 into
the AI with no explanation condition, and 65 were presented with
AI recommendations supported by explanations.
We observed a significant main effect of the presence of explana-
tions on participants’ trust in the AI’s ability to assess the fat content
of food. Participants who saw either kind of explanation, trusted the
AI more ( M=3.56) than those who received AI recommendations,
but no explanations ( M=3.17,F1,483=11.28,p=.0008). Fur-
ther, there was a significant main effect of the explanation type on
participants’ trust: participants trusted the AI when they received
deductive explanations more ( M=3.68) than when they received
inductive explanations ( M=3.44,F1,64=5.96,p=.01). When
asked which of the two AIs they trusted more, most participants
(65%) said that they trusted the AI that provided deductive expla-
nations more than the one that provided inductive explanations
(p=.02).
Participants also found the AI significantly more helpful when
explanations were present ( M=3.78) than when no explanations
were offered ( M=3.26,F1,147=4.88,p=.03). Further, partici-
pants reported that they found deductive explanations more helpful
(M=3.92) than inductive ones ( M=3.65) and this difference was
marginally significant ( F1,64=3.66,p=.06). When asked which of
the two AIs they found more helpful, most participants (68%) chose
the AI that provided deductive explanations ( p=.006).
Participants also reported that they understood how the AI made
its recommendations better when explanations were present ( M=
3.84) than when no explanations were provided ( M=3.67,F1,2014=
6.89p=.009). There was no difference in the perceived level of
understanding between the two explanation types ( F1,64=0.44,p=
.51).
Asked about their overall preference, most participants (63%)
preferred the AI that provided deductive explanations over the AI
that provided inductive explanations ( p=.05).
In terms of actual performance on the task, participants who
received AI recommendations (with or without explanations) pro-
vided a significantly larger fraction of accurate answers ( M=0.72)
than those who did not receive AI recommendations ( M=0.46,
F1,2446=118 .07,p< .0001). Explanations further improved over-
all performance: participants who saw explanations of AI recom-
mendations had a significantly higher proportion of correct an-
swers ( M=0.74) than participants who did not receive explana-
tions of AI recommendations ( M=0.68,F1,2014=5.10,p=.02)
(depicted in Figure 4a). There was no significant difference be-
tween the two explanation types in terms of overall performance
(F1,64=0.44,n.s.). However, we observed a significant interac-
tion between explanation type and the correctness of AI recom-
mendations ( F2,2013=15.03p< .0001). When the AI made cor-
rect recommendations, participants performed similarly when they
saw inductive ( M=.78) and deductive ( M=.81) explanations
(F1,64=1.13,n.s.). When the AI made incorrect recommendations,
however, participants were significantly more accurate when they
saw inductive ( M=0.63) than deductive ( M=0.48) explanation
(F1,64=7.02,p=.01) (depicted in Figure 4b).
To ensure the results of our studies were not random, we repli-
cated both experiments with almost identical setup and obtained
459
Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems IUI’20, March 17–20, 2020, Cagliari, Italy
performance
0.440.460.480.500.520.540.560.580.600.620.640.660.680.700.720.740.76
no AI no explanations with explanationsMean
(a)
performance
0.450.500.550.600.650.700.750.80
non-erroneous erroneouscondition
deductive
inductive
(b)
Figure 4: Performance in the actual decision-making task. (a) depicts the mean of performance among no-AI, no-Explanations
and with-Explanations (overall) conditions. (b) depicts the mean of performance among inductive and deductive conditions,
when the AI recommendation is correct and erroneous. Error bars indicate one standard error.
the same main results (in terms of significance) reported in this
section.
5 QUALITATIVE STUDY
Through the qualitative study, we explored the user reasoning and
sought to gain insight into the discrepancy between subjective
measures and performance. We asked participants to think aloud
during an in-person study in order to understand how and why
people perceive AI the way they do, in addition to what factors go
into making decisions when assisted by an AI.
5.1 Task
The same task design was used in this study as in the actual decision-
making task, except that all participants were presented with the
main condition (where both recommendations and explanations
were provided). As in the actual decision-making task, each partici-
pant saw both inductive and deductive explanations.
5.2 Procedure
Upon arriving to the lab, participants were presented with an in-
formed consent form, including agreeing to being screen- and audio-
recorded, and instructions on the task. Afterwards, the steps in this
study were similar to those in the actual decision-making task, ex-
cept that we added the think-aloud method [ 11]: as participants
completed the task, they were asked to verbalize their thought
process as they made each decision. At the end of the task, there
was a semi-structured interview, during which participants briefly
discussed how they believed the two AIs were making their recom-
mendations and why they did or did not trust them. Participants
also discussed if and why they preferred one AI over the other.5.3 Participants
We recruited 11 participants via community-wide emailing lists (8
female, 3 male, age range 23–29, M = 24.86, SD = 2.11). Participants
were primarily graduate students with backgrounds from design,
biomedical engineering, and education. Participants had varying
levels of experience with AI and machine learning, ranging from
0–5 years of experience.
5.4 Design and Analysis
We transcribed the think-aloud comments and the post-task inter-
views. Transcripts were coded and analyzed for patterns using an
inductive approach [ 39]. We focused on comments about (1) how
the AI made its recommendations; (2) trust in the AI; (3) erroneous
recommendations; (4) why people preferred one explanation type
over the other. From a careful reading of the transcripts, we discuss
some of the themes and trends that emerged from the data.
5.5 Results
Preference of one explanation type over another. Eight out
of the 11 participants preferred the inductive explanations. Par-
ticipants who preferred inductive explanations perceived the four
images as data. One participant stated that “Because [the AI] showed
similar pictures, I knew that it had data backing it up” (P3). On the
other hand, participants who preferred deductive explanations per-
ceived the listing of ingredients to be reliable, and that “if the AI
recognized that it’s steak, then I would think, Oh the AI knows more
about steak fat than I do, so I’m going to trust that since it identified
the object correctly. ” (P6).
In our observations, we found that the way participants used the
explanations was different depending on the explanation type. With
inductive explanations, one participant often first made their own
460
IUI’20, March 17–20, 2020, Cagliari, Italy Zana Buçinca, Phoebe Lin, Krzysztof Z. Gajos, and Elena L. Glassman
Figure 5: Subjective evaluations in terms of trust and prefer-
ence of the two AIs. Red and blue depict the percent of par-
ticipants that chose inductive and deductive AI, respectively.
(a) proxy task (b) actual decision-making task.
judgement before looking at the recommendation, and then used
the recommendation to confirm their own judgement. In a cake
example, one participant said, “So I feel it probably does have more
than 30% because it’s cake, and that’s cream cheese. But these are all
similar to that, and the AI also says that it does have more than 30% fat,
so I agree” (P2). With deductive explanations, participants evaluated
the explanations and recommendation more before making any
decision. In the same cake example, a different participant said,
“There are [the AI recognizes] nuts, cream cheese, and cake. That
seems to make sense. Nuts are high in fat, so is dairy, so I agree with
that. ” (P6).
Cognitive Demand. At the end of the study, participants were
asked which AI was easier to understand. Ten out of 11 partici-
pants felt the inductive explanations were easier to understand
than the deductive explanations. Several participants stated that
the deductive explanations forced them to think more, and that
generally they spent more time making a decision with deductiveexplanations. One participant said, for example, “I feel like with this
one I have to think a bit more and and rely on my own experiences
with food to see or understand to gauge what’s fatty. ” (P2).
Errors and Over-reliance. Nine out of 11 participants claimed
to trust the inductive explanations more. We intentionally intro-
duced erroneous recommendations because we expected partic-
ipants to utilize them to calibrate their mental model of the AI.
When participants understood the error and believed the error was
reasonable for an AI to make, they expressed less distrust in subse-
quent questions. However, when participants perceived the error
to be inconsistent with other errors, their trust in subsequent rec-
ommendations was hurt much more. For example, one participant
stated, “I think the AI makes the recommendation based on shape
and color. But in some other dessert examples, it was able to identify
the dessert as a dessert. So I wasn’t sure why it was so difficult to
understand this particular item” (P5).
We found that there was also some observable correlation be-
tween explanation type and trust. Many participants claimed it
was easier to identify errors from the inductive explanations, yet
agreed with erroneous recommendations from inductive explana-
tions more. In some of those instances, participants either did not
realize the main food image was different from the other four or felt
the main food image was similar enough though not exact. Lastly,
one participant stated the inductive explanations were easier to
understand because “you can visually see exactly why it would come
to its decision, ”, but for deductive explanations “you can see what
it’s detecting but not why” (P8), and yet this participant also stated
that the deductive explanations seemed more trustworthy.
Impact of the Think-Aloud method on participant behav-
ior.In this study, we asked participants to perform the actual
decision-making task and we expected to observe similar results to
those obtained in the previous experiment when using the actual
tasks. Yet, in this study, 8 out of the 11 participants preferred the in-
ductive explanations and 10 out of 11 participants felt the inductive
explanations were easier to understand than the deductive expla-
nations. These results are comparable to the results we obtained in
the previous experiment when we used the proxy task rather than
the actual task.
We believe that the use of the think-aloud method may have
impacted participants’ behavior in this study. Specifically, because
participants were instructed to verbalize their thoughts, they were
more likely to engage in analytical thinking when considering
the AI recommendations and explanations than they were in the
previous experiment with the actual tasks, where their focus was
primarily on making decisions.
It is possible that while the think-aloud method is part of stan-
dard research practice for evaluating interfaces, it is itself a form
of cognitive forcing intervention [ 7], which impacts how people
perform on cognitively-demanding tasks such as interacting with
an explainable AI system on decision-making tasks. The act of talk-
ing about the explanations led participants to devote more of their
attention and cognition to the explanations, and thus made them
behave more similarly to participants in working with the proxy
task rather than those working with the actual task.
461
Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems IUI’20, March 17–20, 2020, Cagliari, Italy
6 DISCUSSION
In this study, we investigated two hypotheses regarding the evalua-
tion of AI-powered explainable systems:
•H1:Results of widely accepted proxy tasks, where the user
is asked to explicitly engage with the explanations, may not
predict the results of realistic settings where the user’s focus
is on the actual decision-making task.
•H2: Subjective measures, such as self-reported trust and
preference with respect to different explanation designs, may
not predict the ultimate human+AI performance.
We examined these hypotheses in the context of a nutrition-related
decision-making task, by designing two distinct evaluation tasks
and two distinct explanation designs. The first task was a proxy
task, where the users had to simulate the AI’s decision by exam-
ining the explanations. The second task was the more realistic,
actual decision-making task, where the user had to make their own
decisions about the nutritional content of meals assisted by AI-
generated recommendations and explanations. Each of the tasks
had two parts, where users interacted with substantially different
explanation styles—inductive and deductive.
In the experiment with the proxy task, participants preferred and
trusted the AI that used inductive explanations significantly more.
They also reported that the AI that used inductive explanations
based its decision on more accurate examples on average than the
AI that used deductive explanations. When asked “If you were asked
to evaluate fat content of plates of food, which AI would you prefer to
interact with more?” ), the majority of participants chose the AI that
provided inductive explanations.
In contrast with the proxy task experiment, in the experiment
with the actual decision-making task, participants rated the AI
with deductive explanations as their preferred AI, and viewed it as
more trustworthy and more helpful compared to the AI that used
inductive explanations.
The contrast in terms of performance measures was less pro-
nounced. When attempting proxy tasks, participants demonstrated
nearly identical accuracy regardless of explanation type. However,
when attempting actual decision-making tasks and the AI provided
an incorrect recommendation, participants ignored that incorrect
recommendation and provided the correct answer significantly
more often when they had access to inductive, not deductive, ex-
planations for the AI’s recommendation.
These contradictory results produced by the two experiments
indicate that results of evaluations that use proxy tasks may not
correspond to results on actual tasks, thus supporting H1. This may
be because in the proxy task the users cannot complete the task
without engaging analytically with the explanations. Whereas, in
the actual decision-making task, the user’s primary goal is to make
the most accurate decisions about the nutritional content of meals;
she chooses whether and how deeply she engages with the AI’s
recommendations and explanations.
This finding has implications for the explainable AI community,
as there is a current trend to use proxy tasks to evaluate user mental
models of the AI-powered systems, with the implicit assumption
that the results will translate to the realistic settings where users
make decisions about an actual task while assisted by an AI.We tested H2on the actual decision-making task. The results
show that participants preferred, trusted and found the AI with
deductive explanations more helpful than the AI that used inductive
explanations. Yet, they performed significantly better with the AI
that used inductive explanations when the AI made erroneous
recommendations. Therefore, H2is also supported. This finding
suggests that the design decisions for explainable interfaces should
not be made by relying solely on user experience and subjective
measures. Subjective measures of trust and preference are, of course,
valuable and informative, but they should be used to complement
rather than replace performance measures.
Our research demonstrated that results from studies that use
proxy tasks may not predict results from studies that use realistic
tasks. Our results also demonstrated that user preference may not
predict their performance. However, we recognize that evaluating
novel AI advances through human subjects experiments that in-
volve realistic tasks is expensive in terms of time and resources, and
may negatively impact the pace of innovation in the field. Therefore,
future research needs to uncover why these differences exist so that
we can develop low burden evaluation techniques that correctly
predict the outcomes of deploying a system in a realistic setting.
We believe that the reason why explainable AI systems are sensi-
tive to the difference between proxy task and actual task evaluation
designs is that different AI explanation strategies require different
kinds and amounts of cognition from the users (like our inductive
and deductive explanations). However, people are reluctant to exert
cognitive effort [ 24,25] unless they are motivated or forced to do
so. They also make substantially different decisions depending on
whether they choose to exert cognitive effort or not [ 12,37]. In ac-
tual decision-making situations, people often choose notto engage
in effortful analytical thinking, even in high-stakes situations like
medical diagnosis [ 31]. Meanwhile, proxy tasks force participants
to explicitly pay attention to the behavior of the AI and the explana-
tions produced. Thus, results observed when participants interact
with proxy tasks do not accurately predict people’s behavior in
many realistic settings. In our study, participants who interacted
with the proxy task feltthat the deductive explanations required
significantly more thinking than the inductive explanations. There-
fore, in the proxy task where the participants were obliged to exert
cognitive effort to evaluate the explanations, they said they pre-
ferred and trusted the less cognitively demanding explanations
more, the inductive explanations. In contrast, in the actual task the
participants could complete the task even without engaging with
the explanations. Thus, we suspect that in the deductive condition
participants perceived the explanations as too mentally demanding,
and chose to over-rely on the AI’s recommendation, just to avoid
cognitive effort of examining those explanations. They also might
have perceived the AI that provided deductive explanations as more
competent because it required more thinking.
One implication of our analysis is that the effectiveness of ex-
plainable AI systems can be substantially impacted by the design
of the interaction (rather than just the algorithms or explanations).
For example, a recent study showed that a simple cognitive forcing
strategy (having participants make their own preliminary decision
before being shown the AI’s decision) resulted in much higher ac-
curacy of the final decisions made by human+AI teams than any
strategy that did not involve cognitive forcing [17].
462
IUI’20, March 17–20, 2020, Cagliari, Italy Zana Buçinca, Phoebe Lin, Krzysztof Z. Gajos, and Elena L. Glassman
Inadvertently, we uncovered an additional potential pitfall for
evaluating explainable AI systems. As the results of our qualitative
study demonstrated, the use of the think-aloud method—a standard
technique for evaluating interactive systems—can also substantially
impact how participants allocate their mental effort. Because par-
ticipants were asked to think aloud, we suspect that they exerted
additional cognitive effort to engage with the explanations and
analyze their reasoning behind their decisions.
Together, these results indicate that cognitive effort is an impor-
tant aspect of explanation design and its evaluation. Explanations
high in cognitive demand might be ignored by the users while sim-
ple explanations might not convey the appropriate amount of evi-
dence that is needed to make informed decisions. At the same time,
traditional methods of probing users’ minds while using explainable
interfaces should also be re-evaluated. By taking into account the
cognitive effort and cognitive processes that are employed during
the evaluation of the explanations, we might generate explainable
interfaces that optimize the performance of the sociotechnical (hu-
man+AI) system as a whole. Such interfaces would instill trust, and
make the user aware of the system’s errors.
7 CONCLUSION
To achieve the aspiration of human+AI teams that complement one-
another and perform better than either the human or the AI alone,
researchers need to be cautious about their pragmatic decisions.
In this study, through online experiments and an in-person study,
we showed how several assumptions researchers make about the
evaluation of the explainable AI systems for decision-making tasks
could lead to misleading results.
First, choosing proxy tasks for the evaluation of explainable
AI systems shifts the user’s focus toward the AI, so the obtained
results might not correspond to results of the user completing the
actual decision-making task while assisted by the AI. In fact, our
results indicate that users trust and prefer one explanation design
(i.e. inductive) more in the proxy task, while they trust and prefer
the other explanation design (i.e. deductive) more in the actual
decision-making task.
Second, the subjective evaluation of explainable systems with
measures such as trust and preference may not correspond to the
ultimate user performance with the system. We found that people
trusted and preferred the AI with deductive explanations more, but
recognized AI errors better with the inductive explanations.
Lastly, our results suggest that think-aloud studies may not con-
vey how people make decisions with explainable systems in realistic
settings. The results from the think-aloud in-person study, which
used the actual task design, aligned more with the results we ob-
tained in the proxy task.
These findings suggest that to draw correct conclusions about
their experiments, explainable AI researchers should be wary of
the explainable systems’ evaluation pitfalls and design their evalua-
tion accordingly. Particularly, the correct and holistic evaluation of
explainable AI interfaces as sociotechnical systems is of paramount
importance, as they are increasingly being deployed in critical
decision-making domains with grave repercussions.
Acknowledgements. We would like to thank Tianyi Zhang and
Isaac Lage for helpful feedback.REFERENCES
[1]Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira
Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul N Bennett, Kori Inkpen, and
others. 2019. Guidelines for human-AI interaction. In Proceedings of the 2019 CHI
Conference on Human Factors in Computing Systems. ACM, 3.
[2]Kenneth C. Arnold, Krysta Chaunce, and Krzysztof Z. Gajos. 2020. Predictive
Text Encourages Predictable Writing. In Proceedings of the 25th International
Conference on Intelligent User Interfaces (IUI ’20). ACM, New York, NY, USA.
[3]Gagan Bansal, Besmira Nushi, Ece Kamar, Walter S Lasecki, Daniel S Weld, and
Eric Horvitz. 2019. Beyond Accuracy: The Role of Mental Models in Human-AI
Team Performance. In Proceedings of the AAAI Conference on Human Computation
and Crowdsourcing, Vol. 7. 2–11.
[4]Jennifer S Blumenthal-Barby and Heather Krieger. 2015. Cognitive biases and
heuristics in medical decision making: a critical review using a systematic search
strategy. Medical Decision Making 35, 4 (2015), 539–557.
[5]Carrie J Cai, Emily Reif, Narayan Hegde, Jason Hipp, Been Kim, Daniel Smilkov,
Martin Wattenberg, Fernanda Viegas, Greg S Corrado, Martin C Stumpe, and
others. 2019. Human-centered tools for coping with imperfect algorithms during
medical decision-making. In Proceedings of the 2019 CHI Conference on Human
Factors in Computing Systems. ACM, 4.
[6]Jonathan Chang, Sean Gerrish, Chong Wang, Jordan L Boyd-Graber, and David M
Blei. 2009. Reading tea leaves: How humans interpret topic models. In Advances
in neural information processing systems. 288–296.
[7]Pat Croskerry. 2003. Cognitive forcing strategies in clinical decisionmaking.
Annals of emergency medicine 41, 1 (2003), 110–120.
[8]Louis Deslauriers, Logan S McCarty, Kelly Miller, Kristina Callaghan, and Greg
Kestin. 2019. Measuring actual learning versus feeling of learning in response to
being actively engaged in the classroom. Proceedings of the National Academy of
Sciences (2019), 201821936.
[9]Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of inter-
pretable machine learning. arXiv preprint arXiv:1702.08608 (2017).
[10] Mary T Dzindolet, Scott A Peterson, Regina A Pomranky, Linda G Pierce, and
Hall P Beck. 2003. The role of trust in automation reliance. International journal
of human-computer studies 58, 6 (2003), 697–718.
[11] K Anders Ericsson and Herbert A Simon. 1984. Protocol analysis: Verbal reports
as data. the MIT Press.
[12] Ellen C Garbarino and Julie A Edell. 1997. Cognitive effort, affect, and choice.
Journal of consumer research 24, 2 (1997), 147–158.
[13] Francisco Javier Chiyah Garcia, David A Robb, Xingkun Liu, Atanas Laskov, Pedro
Patron, and Helen Hastie. 2018. Explainable autonomy: A study of explanation
styles for building clear mental models. In Proceedings of the 11th International
Conference on Natural Language Generation. 99–108.
[14] Leilani H Gilpin, David Bau, Ben Z Yuan, Ayesha Bajwa, Michael Specter, and
Lalana Kagal. 2018. Explaining explanations: An overview of interpretability of
machine learning. In 2018 IEEE 5th International Conference on data science and
advanced analytics (DSAA). IEEE, 80–89.
[15] George Anthony Gorry and Michael S Scott Morton. 1971. A framework for
management information systems. (1971).
[16] Ben Green and Yiling Chen. 2019a. Disparate interactions: An algorithm-in-the-
loop analysis of fairness in risk assessments. In Proceedings of the Conference on
Fairness, Accountability, and Transparency. ACM, 90–99.
[17] Ben Green and Yiling Chen. 2019b. The principles and limits of algorithm-in-the-
loop decision making. Proceedings of the ACM on Human-Computer Interaction 3,
CSCW (2019), 1–24.
[18] Renate Häuslschmid, Max von Buelow, Bastian Pfleging, and Andreas Butz. 2017.
Supportingtrust in autonomous driving. In Proceedings of the 22nd international
conference on intelligent user interfaces. ACM, 319–329.
[19] Robert R Hoffman, Shane T Mueller, Gary Klein, and Jordan Litman. 2018. Metrics
for explainable AI: Challenges and prospects. arXiv preprint arXiv:1812.04608
(2018).
[20] Mary E Johnston, Karl B Langton, R Brian Haynes, and Alix Mathieu. 1994. Effects
of computer-based clinical decision support systems on clinician performance
and patient outcome: a critical appraisal of research. Annals of internal medicine
120, 2 (1994), 135–142.
[21] Ece Kamar. 2016. Directions in Hybrid Intelligence: Complementing AI Systems
with Human Intelligence.. In IJCAI. 4070–4073.
[22] Ece Kamar, Severin Hacker, and Eric Horvitz. 2012. Combining human and
machine intelligence in large-scale crowdsourcing. In Proceedings of the 11th
International Conference on Autonomous Agents and Multiagent Systems-Volume
1. International Foundation for Autonomous Agents and Multiagent Systems,
467–474.
[23] Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil
Mullainathan. 2017. Human Decisions and Machine Predictions\*. The Quarterly
Journal of Economics 133, 1 (08 2017), 237–293. DOI: http://dx.doi.org/10.1093/
qje/qjx032
[24] Wouter Kool and Matthew Botvinick. 2018. Mental labour. Nature human
behaviour 2, 12 (2018), 899–908.
463
Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems IUI’20, March 17–20, 2020, Cagliari, Italy
[25] Wouter Kool, Joseph T McGuire, Zev B Rosen, and Matthew M Botvinick. 2010.
Decision making and the avoidance of cognitive demand. Journal of Experimental
Psychology: General 139, 4 (2010), 665.
[26] Todd Kulesza, Margaret Burnett, Weng-Keen Wong, and Simone Stumpf. 2015.
Principles of explanatory debugging to personalize interactive machine learning.
InProceedings of the 20th international conference on intelligent user interfaces.
ACM, 126–137.
[27] Isaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Samuel J
Gershman, and Finale Doshi-Velez. 2019. Human Evaluation of Models Built for
Interpretability. In Proceedings of the AAAI Conference on Human Computation
and Crowdsourcing, Vol. 7. 59–67.
[28] Vivian Lai and Chenhao Tan. 2019. On human predictions with explanations and
predictions of machine learning models: A case study on deception detection.
InProceedings of the Conference on Fairness, Accountability, and Transparency.
29–38.
[29] Himabindu Lakkaraju, Stephen H Bach, and Jure Leskovec. 2016. Interpretable
decision sets: A joint framework for description and prediction. In Proceedings of
the 22nd ACM SIGKDD international conference on knowledge discovery and data
mining. ACM, 1675–1684.
[30] Himabindu Lakkaraju and Osbert Bastani. 2019. "How do I fool you?": Ma-
nipulating User Trust via Misleading Black Box Explanations. arXiv preprint
arXiv:1911.06473 (2019).
[31] Kathryn Ann Lambe, Gary O’Reilly, Brendan D. Kelly, and Sarah Curristan.
2016. Dual-process cognitive interventions to enhance diagnostic reasoning:
A systematic review. BMJ Quality and Safety 25, 10 (2016), 808–820. DOI:
http://dx.doi.org/10.1136/bmjqs-2015-004417
[32] John D Lee and Katrina A See. 2004. Trust in automation: Designing for appro-
priate reliance. Human factors 46, 1 (2004), 50–80.
[33] Bonnie M Muir. 1987. Trust between humans and machines, and the design
of decision aids. International journal of man-machine studies 27, 5-6 (1987),
527–539.
[34] Forough Poursabzi-Sangdeh, Daniel G Goldstein, Jake M Hofman, Jennifer Wort-
man Vaughan, and Hanna Wallach. 2018. Manipulating and measuring model
interpretability. arXiv preprint arXiv:1802.07810 (2018).
[35] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should I
trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd
ACM SIGKDD international conference on knowledge discovery and data mining.
ACM, 1135–1144.[36] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedan-
tam, Devi Parikh, and Dhruv Batra. 2017. Grad-cam: Visual explanations from
deep networks via gradient-based localization. In Proceedings of the IEEE Interna-
tional Conference on Computer Vision. 618–626.
[37] Anuj K Shah and Daniel M Oppenheimer. 2008. Heuristics made easy: An effort-
reduction framework. Psychological bulletin 134, 2 (2008), 207.
[38] Jonathan Sherbino, Kulamakan Kulasegaram, Elizabeth Howey, and Geoffrey
Norman. 2014. Ineffectiveness of cognitive forcing strategies to reduce biases in
diagnostic reasoning: A controlled trial. Canadian Journal of Emergency Medicine
16, 1 (2014), 34–40. DOI:http://dx.doi.org/10.2310/8000.2013.130860
[39] David R. Thomas. 2006. A General Inductive Approach for Analyzing Qualitative
Evaluation Data. American Journal of Evaluation 27, 2 (2006), 237–246. DOI:
http://dx.doi.org/10.1177/1098214005283748
[40] Amos Tversky and Daniel Kahneman. 1974. Judgment under uncertainty: Heuris-
tics and biases. science 185, 4157 (1974), 1124–1131.
[41] Danding Wang, Qian Yang, Ashraf Abdul, and Brian Y Lim. 2019. Designing
Theory-Driven User-Centric Explainable AI. In Proceedings of the 2019 CHI Con-
ference on Human Factors in Computing Systems. ACM, 601.
[42] Yingxu Wang. 2007. The theoretical framework of cognitive informatics. Interna-
tional Journal of Cognitive Informatics and Natural Intelligence (IJCINI) 1, 1 (2007),
1–27.
[43] Yingxu Wang, Ying Wang, Shushma Patel, and Dilip Patel. 2006. A layered
reference model of the brain (LRMB). IEEE Transactions on Systems, Man, and
Cybernetics, Part C (Applications and Reviews) 36, 2 (2006), 124–133.
[44] Katharina Weitz, Dominik Schiller, Ruben Schlagowski, Tobias Huber, and Elisa-
beth André. 2019. Do you trust me?: Increasing User-Trust by Integrating Virtual
Agents in Explainable AI Interaction Design. In Proceedings of the 19th ACM
International Conference on Intelligent Virtual Agents. ACM, 7–9.
[45] Robert Andrew Wilson and Frank C Keil. 2001. The MIT encyclopedia of the
cognitive sciences.
[46] Ming Yin, Jennifer Wortman Vaughan, and Hanna Wallach. 2019. Understanding
the Effect of Accuracy on Trust in Machine Learning Models. In Proceedings of
the 2019 CHI Conference on Human Factors in Computing Systems. ACM, 279.
[47] John Zeleznikow. 2004. Building intelligent legal decision support systems: Past
practice and future challenges. In Applied Intelligent Systems. Springer, 201–254.
[48] Bolei Zhou, Yiyou Sun, David Bau, and Antonio Torralba. 2018. Interpretable
Basis Decomposition for Visual Explanation. In ECCV. 119–134.
464 |
87d4a2d2-9a89-4cbc-9402-9898701c5cbc | StampyAI/alignment-research-dataset/arbital | Arbital | Coherent extrapolated volition (alignment target)
# Introduction
"Coherent extrapolated volition" (CEV) is [Eliezer Yudkowsky](https://arbital.com/p/2)'s proposed thing-to-do with an extremely [advanced AGI](https://arbital.com/p/2c), if you're extremely confident of your ability to [align](https://arbital.com/p/5s) it on complicated targets.
Roughly, a CEV-based superintelligence would do what currently existing humans would want\* the AI to do, *if counterfactually:*
1. We knew everything the AI knew;
2. We could think as fast as the AI and consider all the arguments;
3. We knew ourselves perfectly and had better self-control or self-modification ability;
...*to whatever extent* most existing humans, thus extrapolated, would predictably want\* the same things. (For example, in the limit of extrapolation, nearly all humans might want\* not to be turned into [paperclips](https://arbital.com/p/10h), but might not agree\* on the best pizza toppings. See below.)
CEV is meant to be the *literally optimal* or *ideal* or *normative* thing to do with an [autonomous superintelligence](https://arbital.com/p/1g3), *if* you trust your ability to [perfectly align](https://arbital.com/p/41k) a superintelligence on a very complicated target. (See below.)
CEV is rather complicated and meta and hence *not* intended as something you'd do with the first AI you ever tried to build. CEV might be something that everyone inside a project agreed was an acceptable mutual target for their *second* AI. (The first AI should probably be a [Task AGI](https://arbital.com/p/6w).)
For the corresponding metaethical theory see [https://arbital.com/p/313](https://arbital.com/p/313).
# Concept
%%knows-requisite([https://arbital.com/p/313](https://arbital.com/p/313)):
See "[https://arbital.com/p/313](https://arbital.com/p/313)".
%%
%%!knows-requisite([https://arbital.com/p/313](https://arbital.com/p/313)):
[Extrapolated volition](https://arbital.com/p/313) is the metaethical theory that when we ask "What is right?", then insofar as we're asking something meaningful, we're asking "What would a counterfactual idealized version of myself want\* if it knew all the facts, had considered all the arguments, and had perfect self-knowledge and self-control?" (As a [metaethical theory](https://arbital.com/p/313), this would make "What is right?" a mixed logical and empirical question, a function over possible states of the world.)
A very simple example of extrapolated volition might be to consider somebody who asks you to bring them orange juice from the refrigerator. You open the refrigerator and see no orange juice, but there's lemonade. You imagine that your friend would want you to bring them lemonade if they knew everything you knew about the refrigerator, so you bring them lemonade instead. On an abstract level, we can say that you "extrapolated" your friend's "volition", in other words, you took your model of their mind and decision process, or your model of their "volition", and you imagined a counterfactual version of their mind that had better information about the contents of your refrigerator, thereby "extrapolating" this volition.
Having better information isn't the only way that a decision process can be extrapolated; we can also, for example, imagine that a mind has more time in which to consider moral arguments, or better knowledge of itself. Maybe you currently want revenge on the Capulet family, but if somebody had a chance to sit down with you and have a long talk about how revenge affects civilizations in the long run, you could be talked out of that. Maybe you're currently convinced that you advocate for green shoes to be outlawed out of the goodness of your heart, but if you could actually see a printout of all of your own emotions at work, you'd see there was a lot of bitterness directed at people who wear green shoes, and this would change your mind about your decision.
In Yudkowsky's version of extrapolated volition considered on an individual level, the three core directions of extrapolation are:
- Increased knowledge - having more veridical knowledge of declarative facts and expected outcomes.
- Increased consideration of arguments - being able to consider more possible arguments and assess their validity.
- Increased reflectivity - greater knowledge about the self, and to some degree, greater self-control (though this raises further questions about which parts of the self normatively get to control which other parts).
%%
# Motivation
Different people initially react differently to the question "Where should we point a superintelligence?" or "What should an aligned superintelligence do?" - not just different beliefs about what's good, but different frames of mind about how to ask the question.
Some common reactions:
1. "Different people want different things! There's no way you can give everyone what they want. Even if you pick some way of combining things that people want, *you'll* be the one saying how to combine it. Someone else might think they should just get the whole world for themselves. Therefore, in the end *you're* deciding what the AI will do, and any claim to some sort of higher justice or normativity is nothing but sophistry."
2. "What we should do with an AI is obvious - it should optimize liberal democratic values. That already takes into account everyone's interests in a fair way. The real threat is if bad people get their hands on an AGI and build an AGI that doesn't optimize liberal democratic values."
3. "Imagine the ancient Greeks telling a superintelligence what to do. They'd have told it to optimize for glorious deaths in battle. Programming any other set of inflexible goals into a superintelligence seems equally stupid; it has to be able to change and grow."
4. "What if we tell the superintelligence what to do and it's the wrong thing? What if we're basically confused about what's right? Shouldn't we let the superintelligence figure that out on its own, with its assumed superior intelligence?"
An initial response to each of these frames might be:
1. "Okay, but suppose you're building a superintelligence and you're trying *not to be a jerk about it.* If you say, 'Whatever I do originates in myself, and therefore is equally selfish, so I might as well declare myself God-Emperor of the Universe' then you're being a jerk. Is there anything you could do instead which would be less like being a jerk? What's the *least* jerky thing you could do?"
2. "What if you would, after some further discussion, want to tweak your definition of 'liberal democratic values' just a little? What if it's *predictable* that you would do that? Would you really want to be stuck with your off-the-cuff definition a million years later?"
3. "Okay, so what should the Ancient Greeks have done if they did have to program an AI? How could they not have doomed future generations? Suppose the Ancient Greeks are clever enough to have noticed that sometimes people change their minds about things and to realize that they might not be right about everything. How can they use the cleverness of the AGI in a constructively specified, computable fashion that gets them out of this hole? You can't just tell the AGI to compute what's 'right', you need to put an actual computable question in there, not a word."
4. "You asked, what if we're basically confused about what's right - well, in that case, what does the word 'right' even mean? If you don't know what's right, and you don't know how to compute what's right, then what are we even talking about? Do you have any ground on which to say that an AGI which only asks 'Which outcome leads to the greatest number of paperclips?' isn't computing rightness? If you don't think a paperclip maximizer is computing rightness, then you must know something about the rightness-question which excludes that possibility - so let's talk about how to program that rightness-question into an AGI."
CEV's advocates claim that all of these lines of discussion eventually end up converging on the idea of coherent extrapolated volition. For example:
1. Asking what everyone would want\* if they knew what the AI knew, and doing what they'd all predictably agree on, is just about the least jerky thing you can do. If you tell the AI to give everyone a volcano lair because you think volcano lairs are neat, you're not being selfish, but you're being a jerk to everyone who doesn't want a volcano lair. If you have the AI just do what people actually say, they'll end up hurting themselves with dumb wishes and you'd be a jerk. If you only extrapolate your friends and have the AI do what only you'd want, you're being jerks to everyone else.
2. Yes, liberal democratic values are good; so is apple pie. Apple pie is *a* good thing but it's not the *only* good thing. William Frankena's list of ends-in-themselves included "Life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment" and then 25 more items, and the list certainly isn't complete. The only way you're going to get a complete list is by analyzing human minds; and even then, if our descendants would predictably want something else a million years later, we ought to take that into account too.
3. Every improvement is a change, but not every change is an improvement. Just letting a superintelligence change at random doesn't encapsulate moral progress. Saying that change toward more liberal democratic values is progress, presumes that we already know the destination or answer. We can't even just ask the AGI to *predict* what civilizations would think a thousand years later, since (a) the AI itself impacts this and (b) if the AI did nothing, maybe in a thousand years everyone would have accidentally blissed themselves out while trying to modify their own brains. If we want to do better than the hypothetical ancient Greeks, we need to define *a sufficiently abstract and meta criterion* that describes *valid* directions of progress - such as changes in moral beliefs associated with learning new facts, for example; or moral change that would predictably occur if we considered a larger set of arguments; or moral change that would predictably occur if we understood ourselves better.
4. This one is a long story: Metaethics deals with the question of what sort of entity 'rightness' is exactly - tries to reconcile this strange ineffable 'rightness' business with a universe made out of particle fields. Even though it seems like human beings wanting to murder people wouldn't *make* murder right, there's also nowhere in the stars or mountains where we can actually find it written that murder is wrong. At the end of a rather long discussion, we decide that for any given person speaking at a given point in time, 'rightness' is a logical constant which, although not counterfactually dependent on the state of the person's brain, must be analytically identified with the extrapolated volition of that brain; and we show that (only) this stance gives consistent answers to all the standard questions in metaethics. (This discussion takes a while, on the order of explaining how deterministic laws of physics don't show that you have unfree will.)
(To do: Write dialogues from each of these four entrance points.)
# Situating CEV in contemporary metaethics
See the corresponding section in "[https://arbital.com/p/313](https://arbital.com/p/313)".
# Scary design challenges
There are several reasons why CEV is *way* too challenging to be a good target for any project's first try at building machine intelligence:
1. A CEV agent would be intended to carry out an [autonomous](https://arbital.com/p/1g3) open-ended mission. This implies all the usual reasons we expect an autonomous AI to be harder to make safe than a [Task AGI](https://arbital.com/p/6w).
2. CEV is a weird goal. It involves recursion.
3. Even the terms in CEV, like "know more" or "extrapolate a human", seem complicated and [value-laden](https://arbital.com/p/36h). You might have to build a high-level [Do What I Know I Mean agent](https://arbital.com/p/2s1), and then tell it to do CEV. [Do What I Know I Mean](https://arbital.com/p/2s1) is complicated enough that you'd need to build an AI that can learn DWIKIM, so that DWIKIM can be taught rather than formally specified. So we're looking at something like CEV, running on top of DWIKIM, running on top of a goal-learning system, at least until the first time the CEV agent rewrites itself.
Doing this correctly the *very first time* we build a smarter-than-human intelligence seems *improbable.* The only way this would make a good first target is if the CEV concept is formally simpler than it currently seems, *and* timelines to AGI are unusually long and permit a great deal of advance work on safety.
If AGI is 20 years out (or less), it seems wiser to think in terms of a [Task AGI](https://arbital.com/p/6w) performing some relatively simple [pivotal act](https://arbital.com/p/6y). The role of CEV is of answering the question, "What can you all agree in advance that you'll try to do *next,* after you've executed your Task AGI and gotten out from under the shadow of immediate doom?"
# What if CEV fails to cohere?
A frequently asked question is "What if extrapolating human volitions produces incoherent answers?"
According to the original motivation for CEV, if this happens in *some* places, a Friendly AI ought to ignore those places. If it happens *everywhere,* you probably picked a silly way to construe an extrapolated volition and you ought to rethink it. %note: Albeit in practice, you would not want an AI project to take a dozen tries at defining CEV. This would indicate something extremely wrong about the method being used to generate suggested answers. Whatever final attempt passed would probably be the first answer [all of whose remaining flaws were hidden](https://arbital.com/p/blackzoning), rather than an answer with all flaws eliminated.%
That is:
- If your CEV algorithm finds that "People coherently want to not be eaten by [paperclip maximizers](https://arbital.com/p/10h), but end up with a broad spectrum of individual and collective possibilities for which pizza toppings they prefer", we would normatively want a Friendly AI to prevent people from being eaten by paperclip maximizers but not mess around with which pizza toppings people end up eating in the Future.
- If your CEV algorithm claims that there's no coherent sense in which "A lot of people would want to not be eaten by Clippy and would still want\* this even if they knew more stuff" then this is a suspicious and unexpected result. Perhaps you have picked a silly way to construe somebody's volition.
The original motivation for CEV can also be viewed from the perspective of "What is it to help someone?" and "How can one help a large group of people?", where the intent behind the question is to build an AI that renders 'help' as we really [intend](https://arbital.com/p/6h) that. The elements of CEV can be seen as caveats to the naive notion of "Help is giving people whatever they ask you for!" in which somebody asks you to bring them orange juice but the orange juice in the refrigerator is poisonous (and they're not *trying* to poison themselves).
What about helping a group of people? If two people ask for juice and you can only bring one kind of juice, you should bring a non-poisonous kind of juice they'd both like, to the extent any such juice exists. If no such juice exists, find a kind of juice that one of them is meh about and that the other one likes, and flip a coin or something to decide who wins. You are then being around as helpful as it is possible to be.
Can there be *no way* to help a large group of people? This seems implausible. You could at least give the starving ones pizza with a kind of pizza topping they currently like. To the extent your philosophy claims "Oh noes even *that* is not helping because it's not perfectly coherent," you have picked the wrong construal of 'helping'.
It could be that, if we find that every reasonable-sounding construal of extrapolated volition fails to cohere, we must arrive at some entirely other notion of 'helping'. But then this new form of helping also shouldn't involve bringing people poisonous orange juice that they don't know is poisoned, because that still intuitively seems unhelpful.
## Helping people with incoherent preferences
What if somebody believes themselves to [prefer onions to pineapple on their pizza, prefer pineapple to mushrooms, and prefer mushrooms to onions](https://arbital.com/p/7hh)? In the sense that, offered any two slices from this set, they would pick according to the given ordering?
(This isn't an unrealistic example. Numerous experiments in behavioral economics demonstrate exactly this sort of circular preference. For instance, you can arrange 3 items such that each pair of them brings a different salient quality into focus for comparison.)
One may worry that we couldn't 'coherently extrapolate the volition' of somebody with these pizza preferences, since these local choices obviously aren't consistent with any coherent utility function. But how could we *help* somebody with a pizza preference like this?
Well, appealing to the intuitive notion of *helping:*
- We could give them whatever kind of pizza they'd pick if they had to pick among all three simultaneously.
- We could figure out how happy they'd be eating each type of pizza, in terms of emotional intensity as measured in neurotransmitters; and offer them the slice of pizza that they'll most enjoy.
- We could let them pick their own damn pizza toppings and concern ourselves mainly with making sure the pizza isn't poisonous, since the person definitely prefers non-poisoned pizza.
- We could, given sufficient brainpower on our end, figure out what this person would ask us to do for them in this case after that person had learned about the concept of a preference reversal and been told about their own circular preferences. If this varies wildly depending on exactly how we explain the concept of a preference reversal, we could refer back to one of the previous three answers instead.
Conversely, these alternatives seem *less* helpful:
- Refuse to have anything to do with that person since their current preferences don't form a coherent utility function.
- Emit "ERROR ERROR" sounds like a Hollywood AI that's just found out about the Epimenides Paradox.
- Give them pizza with your own favorite topping, green peppers, even though they'd prefer any of the 3 other toppings to those.
- Give them pizza with the topping that would taste best to them, pepperoni, despite their being vegetarians.
Advocates of CEV claim that if you blank the complexities of 'extrapolated volition' out of your mind; and ask how you could reasonably help people as best as possible if you were trying not be a jerk; and then try to figure out how to semiformalize whatever mental procedure you just followed to arrive at your answer for how to help people; then you will eventually end up at CEV again.
# Role of meta-ideals in promoting early agreement
A primary purpose of CEV is to represent a relatively simple meta-level ideal that people can agree upon, even where they might disagree on the object level. By a hopefully analogous example, two honest scientists might disagree on the correct mass of an electron, but agree that the experimental method is a good way to resolve the answer.
Imagine Millikan believes an electron's mass is 9.1e-28 grams, and Nannikan believes the correct electron mass is 9.1e-34 grams. Millikan might be very worried about Nannikan's proposal to program an AI to believe the electron mass is 9.1e-34 grams; Nannikan doesn't like Millikan's proposal to program in 9.1-e28; and both of them would be unhappy with a compromise mass of 9.1e-31 grams. They might still agree on programming an AI with some analogue of probability theory and a simplicity prior, and letting a superintelligence come to the conclusions implied by Bayes and Occam, because the two can agree on an effectively computable question even though they think the question has different answers. Of course, this is easier to agree on when the AI hasn't yet produced an answer, or if the AI doesn't tell you the answer.
It's not guaranteed that every human embodies the same implicit moral questions, indeed this seems unlikely, which means that Alice and Bob might still expect their extrapolated volitions to disagree about things. Even so, while the outputs are still abstract and not-yet-computed, Alice doesn't have much of a place to stand on which to appeal to Carol, Dennis, and Evelyn by saying, "But as a matter of morality and justice, you should have the AI implement *my* extrapolated volition, not Bob's!" To appeal to Carol, Dennis, and Evelyn about this, you'd need them to believe that Alice's EV was more likely to agree with their EVs than Bob's was - and at that point, why not come together on the obvious Schelling point of extrapolating *everyone's* EVs?
Thus, one of the primary purposes of CEV (selling points, design goals) is that it's something that Alice, Bob, and Carol can agree *now* that Dennis and Evelyn should do with an AI that will be developed *later;* we can try to set up commitment mechanisms now, or check-and-balance mechanisms now, to ensure that Dennis and Evelyn are still working on CEV later.
## Role of 'coherence' in reducing expected unresolvable disagreements
A CEV is not necessarily a majority vote. A lot of people with an extrapolated weak preference\* might be counterbalanced by a few people with a strong extrapolated preference\* in the opposite direction. Nick Bostrom's "[parliamentary model](http://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html)" for resolving uncertainty between incommensurable ethical theories, permits a subtheory very concerned about a decision to spend a large amount of its limited influence on influencing that particular decision.
This means that, e.g., a vegan or animal-rights activist should not need to expect that they must seize control of a CEV algorithm in order for the result of CEV to protect animals. It doesn't seem like most of humanity would be deriving huge amounts of utility from hurting animals in a post-superintelligence scenario, so even a small part of the population that strongly opposes\* this scenario should be decisive in preventing it.
# Moral hazard vs. debugging
One of the points of the CEV proposal is to have minimal [moral hazard](https://arbital.com/p/2sb) (aka, not tempting the programmers to take over the world or the future); but this may be compromised if CEV's results don't go literally unchecked.
Part of the purpose of CEV is to stand as an answer to the question, "If the ancient Greeks had been the ones to invent superintelligence, what could they have done that would not, from our later perspective, irretrievably warp the future? If the ancient Greeks had programmed in their own values directly, they would have programmed in a glorious death in combat. Now let us consider that perhaps we too are not so wise." We can imagine the ancient Greeks writing a CEV mechanism, peeking at the result of this CEV mechanism before implementing it, and being horrified by the lack of glorious-deaths-in-combat in the future and value system thus revealed.
We can also imagine that the Greeks, trying to cut down on [moral hazard](https://arbital.com/p/2sb), virtuously refuse to peek at the output; but it turns out that their attempt to implement CEV has some unforeseen behavior when actually run by a superintelligence, and so their world is turned into paperclips.
This is a safety-vs.-moral-hazard tradeoff between (a) the benefit of being able to look at CEV outputs in order to better-train the system or just verify that nothing went horribly wrong; and (b) the moral hazard that comes from the temptation to override the output, thus defeating the point of having a CEV mechanism in the first place.
There's also a potential safety hazard just with looking at the internals of a CEV algorithm; the simulated future could contain all sorts of directly mind-hacking cognitive hazards.
Rather than giving up entirely and embracing maximum moral hazard, one possible approach to this issue might be to have some single human that is supposed to peek at the output and provide a 1 or 0 (proceed or stop) judgment to the mechanism, without any other information flow being allowed to the programmers if the human outputs 0. (For example, the volunteer might be in a room with explosives that go off if 0 is output.)
# "Selfish bastards" problem
Suppose that Fred is funding Grace to work on a CEV-based superintelligence; and Evelyn has decided not to oppose this project. The resulting CEV is meant to extrapolate the volitions of Alice, Bob, Carol, Dennis, Evelyn, Fred, and Grace with equal weight. (If you're reading this, you're more than usually likely to be one of Evelyn, Fred, or Grace.)
Evelyn and Fred and Grace might worry: "What if a supermajority of humanity consists of 'selfish\* bastards', such that their extrapolated volitions would cheerfully vote\* for a world in which it was legal to own artificial sapient beings as slaves so long as they personally happened to be in the slaveowning class; and we, Evelyn and Fred and Grace, just happen to be in the minority that extremely doesn't want nor want\* the future to be like that?"
That is: What if humanity's extrapolated volitions diverge in such a way that from the standpoint of *our* volitions - since, if you're reading this, you're unusually likely to be one of Evelyn or Fred or Grace - 90% of extrapolated humanity would choose\* something such that *we* would not approve of it, and our volitions would not approve\* of it, *even after* taking into account that we don't want to be jerks about it and that we don't think we were born with any unusual or exceptional right to determine the fate of humanity.
That is, let the scenario be as follows:
> 90% of the people (but not we who are collectively sponsoring the AI) are selfish bastards at the core, such that *any reasonable* extrapolation process (it's not just that we picked a broken one) would lead to them endorsing a world in which they themselves had rights, but it was okay to create artificial people and hurt them. Furthermore, they would derive enough utility from being personal God-Emperors that this would override our minority objection even in a parliamentary model.
We can see this hypothetical outcome as potentially undermining every sort of reason that we, who happen to be in a position of control to prevent that outcome, should voluntarily relinquish that control to the remaining 90% of humanity:
- We can't be prioritizing being fair to everyone including the other 90% of humanity, because what about being fair to the artificial people who are being hurt?
- We can't be worrying that the other 90% of humanity would withdraw their support from the project, or worrying about betraying the project's supporters, because by hypothesis they weren't supporting it or even permitting it.
- We can't be agreeing to defer to a righter and more intelligent process to resolve our dispute, because by hypothesis the CEV made up of 90% selfish\* bastards is not, from our own perspective, ideally righter.
- We can't rely on a parliamentary model of coherence to prevent what a minority sees as disaster, because by hypothesis the other 90% is deriving enough utility from collectively declaring themselves God-Emperors to trump even a strong minority countervote.
Rather than giving up entirely and taking over the world, *or* exposing ourselves to moral hazard by peeking at the results, one possible approach to this issue might be to run a three-stage process.
This process involves some internal references, so the detailed explanation needs to follow a shorter summary explanation.
In summary:
- Extrapolate everyone's CEV.
- Extrapolate the CEV of the contributors only, and let it give (only) an up-down vote on Everyone's CEV.
- If the result is thumbs-up, run Everyone's CEV.
- Otherwise, extrapolate everyone's CEV, but kicking out all the parts that would act unilaterally and without any concern for others if they were in positions of unchecked power.
- Have the Contributor CEV give an up/down answer on the Fallback CEV.
- If the result is thumbs-up, run the Fallback CEV.
- Otherwise fail.
In detail:
- First, extrapolate the everyone-on-Earth CEV as though it were not being checked.
- If any hypothetical extrapolated person worries about being checked, delete that concern and extrapolate them as though they didn't have it. This is necessary to prevent the check itself from having a [UDT](https://arbital.com/p/updateless) influence on the extrapolation and the actual future.
- Next, extrapolate the CEV of everyone who contributed to the project, weighted by their contribution (possibly based on some mix of "how much was actually done" versus "how much was rationally expected to be accomplished" versus "the fraction of what could've been done versus what was actually done"). Allow this other extrapolation an up-or-down vote - *not* any kind of detailed correction - on whether to let the everyone-on-Earth CEV to go through unmodified.
- Remove from the extrapolation of the Contributor-CEV any strategic considerations having to do with the Fallback-CEV or post-Fail redevelopment being a *better* alternative; we want to extract a judgment about "satisficing" in some sense, whether the Everyone-CEV is in some non-relative sense too horrible to be allowed.
- If the Everyone-CEV passes the Contributor-CEV check, run it.
- Otherwise, re-extrapolate a Fallback-CEV that starts with all existing humans as a base, but *discards* from the extrapolation all extrapolated decision processes that, if they were in a superior strategic position or a position of unilateral power, would *not* bother to extrapolate others' volitions or care about their welfare.
- Again, remove all extrapolated *strategic* considerations about passing the coming check.
- Check the Fallback-CEV against the Contributor-CEV for an up-down vote. If it passes, run it.
- Otherwise Fail (AI shuts down safely, we rethink what to do next or implement an agreed-on fallback course past this point).
The particular fallback of "kick out from the extrapolation any weighted portions of extrapolated decision processes that would act unilaterally and without caring for others, given unchecked power" is meant to have a property of poetic justice, or rendering objections to it self-defeating: If it's okay to act unilaterally, then why can't we unilaterally kick out the unilateral parts? This is meant to be the 'simplest' or most 'elegant' way of kicking out a part of the CEV whose internal reasoning directly opposes the whole reason we ran CEV in the first place, but imposing the minimum possible filter beyond that.
Thus if Alice (who by hypothesis is not in any way a contributor) says, "But I demand you altruistically include the extrapolation of me that would unilaterally act against you if it had power!" then we reply, "We'll try that, but if it turns out to be a sufficiently bad idea, there's no coherent interpersonal grounds on which you can rebuke us for taking the fallback option instead."
Similarly in regards to the Fail option at the end, to anyone who says, "Fairness demands that you run Fallback CEV even if you wouldn't like\* it!" we can reply, "Our own power may not be used against us; if we'd regret ever having built the thing, fairness doesn't oblige us to run it."
# Why base CEV on "existing humans" and not some other class of extrapolees?
One frequently asked question about the implementation details of CEV is either:
- Why formulate CEV such that it is run on "all existing humans" and not "all existing and past humans" or "all mammals" or "all sapient life as it probably exists everywhere in the measure-weighted infinite multiverse"?
- Why not restrict the extrapolation base to "only people who contributed to the AI project"?
In particular, it's been asked why restrictive answers to Question 1 don't [also imply](https://arbital.com/p/3tc) the more restrictive answer to Question 2.
## Why not include mammals?
We'll start by considering some replies to the question, "Why not include all mammals into CEV's extrapolation base?"
- Because you could be wrong about mammals being objects of significant ethical value, such that we *should* on an object level respect their welfare. The extrapolation process will catch the error if you'd predictably change your mind about that. Including mammals into the extrapolation base for CEV potentially sets in stone what could well be an error, the sort of thing we'd predictably change our minds about later. If you're normatively *right* that we should all care about mammals and even try to extrapolate their volitions into a judgment of Earth's destiny, if that's what almost all of us would predictably decide after thinking about it for a while, then that's what our EVs will decide\* to do on our behalf; and if they don't decide\* to do that, it wasn't right which undermines your argument for doing it unconditionally.
- Because even if we ought to care about mammals' welfare qua welfare, extrapolated animals might have really damn weird preferences that you'd regret including into the CEV. (E.g., after human volitions are outvoted by the volitions of other animals, the current base of existing animals' extrapolated volitions choose\* a world in which they are uplifted to God-Emperors and rule over suffering other animals.)
- Because maybe not everyone on Earth cares\* about animals even if your EV would in fact care\* about them, and to avoid a slap-fight over who gets to rule the world, we're going to settle this by e.g. a parliamentary-style model in which you get to expend your share of Earth's destiny-determination on protecting animals.
To expand on this last consideration, we can reply: "Even if you would regard it as more just to have the *right* animal-protecting outcome baked into the future immediately, so that your EV didn't need to expend some of its voting strength on assuring it, not everyone else might regard that as just. From our perspective as programmers we have no particular reason to listen to you rather than Alice. We're not arguing about whether animals will be protected if a minority vegan-type subpopulation strongly want\* that and the rest of humanity doesn't care\*. We're arguing about whether, if *you* want\* that but a majority doesn't, your EV should justly need to expend some negotiating strength in order to make sure animals are protected. This seems pretty reasonable to us as programmers from our standpoint of wanting to be fair, not be jerks, and not start any slap-fights over world domination."
This third reply is particularly important because taken in isolation, the first two replies of "You could be wrong about that being a good idea" and "Even if you care about their welfare, maybe you wouldn't like their EVs" could [equally apply to argue](https://arbital.com/p/3tc) that contributors to the CEV project ought to extrapolate only their own volitions and not the rest of humanity:
- We could be wrong about it being a good idea, by our own lights, to extrapolate the volitions of everyone else; including this into the CEV project bakes this consideration into stone; if we were right about running an Everyone CEV, if we would predictably arrive at that conclusion after thinking about it for a while, our EVs could do that for us.
- Not extrapolating other people's volitions isn't the same as saying we shouldn't care. We could be right to care about the welfare of others, but there could be some spectacular horror built into their EVs.
The proposed way of addressing this was to run a composite CEV with a contributor-CEV check and a Fallback-CEV fallback. But then why not run an Animal-CEV with a Contributor-CEV check before trying the Everyone-CEV?
One answer would go back to the third reply above: Nonhuman mammals aren't sponsoring the CEV project, allowing it to pass, or potentially getting angry at people who want to take over the world with no seeming concern for fairness. So they aren't part of the Schelling Point for "everyone gets an extrapolated vote".
## Why not extrapolate all sapients?
Similarly if we ask: "Why not include all sapient beings that the SI suspects to exist everywhere in the measure-weighted multiverse?"
- Because large numbers of them might have EVs as alien as the EV of an Ichneumonidae wasp.
- Because our EVs can always do that if it's actually a good idea.
- Because they aren't here to protest and withdraw political support if we don't bake them into the extrapolation base immediately.
## Why not extrapolate deceased humans?
"Why not include all deceased human beings as well as all currently living humans?"
In this case, we can't then reply that they didn't contribute to the human project (e.g. I. J. Good). Their EVs are also less likely to be alien than in any other case considered above.
But again, we fall back on the third reply: "The people who are still alive" is a simple Schelling circle to draw that includes everyone in the current political process. To the extent it would be nice or fair to extrapolate Leo Szilard and include him, we can do that if a supermajority of EVs decide\* that this would be nice or just. To the extent we *don't* bake this decision into the model, Leo Szilard won't rise from the grave and rebuke us. This seems like reason enough to regard "The people who are still alive" as a simple and obvious extrapolation base.
## Why include people who are powerless?
"Why include very young children, uncontacted tribes who've never heard about AI, and retrievable cryonics patients (if any)? They can't, in their current state, vote for or against anything."
- A lot of the intuitive motivation for CEV is to not be a jerk, and ignoring the wishes of powerless living people seems intuitively a lot more jerkish than ignoring the wishes of powerless dead people.
- They'll actually be present in the future, so it seems like less of a jerk thing to do to extrapolate them and take their wishes into account in shaping that future, than to not extrapolate them.
- Their relatives might take offense otherwise.
- It keeps the Schelling boundary simple. |
ae96fc51-427f-43b7-a24a-d653e393cb11 | StampyAI/alignment-research-dataset/special_docs | Other | Two guarantees
When I imagine proving something about an AI, or making an inductive argument about [amplification](/iterated-distillation-and-amplification-157debfd1616), I think about two different guarantees:
1. The AI behaves well \*on average\* over some particular distribution of inputs. (The performance guarantee.)
2. The AI never does anything “actively malicious,” on \*any\* input.
(The control guarantee.)
Definitions
-----------
We can define “behaves well on average” by looking at the average impact of decisions on our utility function. [This post](/thoughts-on-reward-engineering-82b193ec03f6) addresses some subtleties with that definition, including inconsistent preferences, different scales of impact, and long-term consequences. Actually evaluating this quantity, so that it can be used in training, requires [amplification](/iterated-distillation-and-amplification-157debfd1616) and [informed oversight](/the-informed-oversight-problem-1b51b4f66b35).
My intuition about “actively malicious” is best captured by the (informal) idea of [incorrigible](/corrigibility-3039e668638) optimization: our AI should never be actively trying to undermine our understanding or control of the situation. This property seems “easy,” in the sense that it can be satisfied without any domain knowledge or expertise, which makes me optimistic that it is possible to [satisfy it for every input](/techniques-for-optimizing-worst-case-performance-39eafec74b99).
Rationale
=========
Rationale for the performance guarantee
---------------------------------------
The performance guarantee is what does the work; it’s the reason you’d want an AI instead of a pet rock.
If we construct the reward so it reflects the difference in utility between different actions, then the average case is actually a pretty good thing to optimize.
In reality the data distribution is constantly changing over time. So we want an [online guarantee](https://www.cs.huji.ac.il/~shais/papers/OLsurvey.pdf), maximizing the average case performance over a varying sequence, rather than a batch guarantee. Normally deep learning is designed for the batch setting under the iid assumption, and data distributions change slowly enough that this is acceptable. Overall, I don’t think this distinction is a huge deal.
Note that the “average case” guarantee still reflects the overseer’s desire for [corrigibility](/corrigibility-3039e668638). This is formally reflected when (a) the overseer evaluates the expected utility impact of a decision by marginalizing over all of the mistakes that the AI might make in the future, (b) the overseer evaluates individual decisions by assigning a large penalty for incorrigibly doing the wrong thing, but only a small penalty for behaving corrigibly but slightly inefficiently.
Rationale for control guarantee
-------------------------------
Unfortunately, it is only possible to get an average performance guarantee when the rewards are small relative to the total amount of data. If a few bad decisions can destroy all of the value from a lifetime of good decisions, then traditional ML can’t do well on average.
If an AI is \*trying\* to cause a catastrophe, it can do so very quickly even in apparently benign situations (by breaking the mechanisms that humans use to remain in control of the situation). The goal of the control guarantee is to prevent this kind of catastrophe.
Similarly, if an AI is trying to be difficult, it could intentionally undermine whatever testing regime we use (by failing selectively on inputs that don’t look like tests). The control guarantee should also rule out this kind of intentional sabotage.
Hopefully the control guarantee cuts off the worst catastrophes, making the performance guarantee meaningful.
Other catastrophes
------------------
Even if our AI satisfies the control guarantee, it can make other kinds of catastrophic mistakes (just as an [aligned human](/clarifying-ai-alignment-cec47cd69dd6) could make a catastrophic mistake). For example, the system might be in charge of running a nuclear defense system, and a single failure could lead to a nuclear war. Or an attacker might fool the AI into executing an instruction by impersonating the operator, which in turn open the system to further attack.
Neither the control nor performance guarantee directly address these problems. Instead, anyone who deploys an AI needs to be aware of the system’s limits, to test the system to see where it might fail, to design mechanisms with redundancy, to protect the system for attackers, and to avoid incorrectly assuming perfect performance. (The same measures we would take if delegating to a human who sometimes made mistakes.)
Corrigibility
-------------
The performance and control guarantees interact to create a system that is corrigible: the performance guarantee ensures the system is typically trying to give the human more effective understanding and control over the situation. The control guarantee ensures the system isn’t undermining those measures, for example by constructing a fake narrative for the human or illusory control, by leaving backdoors that can be exploited, etc.
The performance guarantee leaves open the possibility that the system will sometimes fail to inform or empower the human effectively. But as long as those are failures aren’t optimized to be unrecoverable, it seems the human can work around them by recognizing the shortcoming and having the AI optimize for the robustness of human control and understanding.
I think there is a meaningful analogy between this picture and this [post about monitoring and whistleblowing](https://sideways-view.com/2018/02/01/honest-organizations/). and I have a vague intuition that there is some important underlying dynamic that could be better understood.
Amplification
=============
Amplification and performance
-----------------------------
[Amplification](/policy-amplification-6a70cbee4f34) solves a task by (adaptively) breaking it into several subtasks, solving the subtasks, and combining the results.
If we are making an inductive argument about amplification, then the performance guarantee implies the expected \*average\* performance on subtasks is good, and requires that the expected performance on the overall task is good.
If some of the subtasks make “honest mistakes,” then I think this is a reasonable hope.
In order to make it work we definitely need to have an unequal weighting over subtasks, which corresponds to an unequal sampling of subtasks during training. For example, if we are trying to figure out if any of {A, B, C} are positive, we proceed by independently estimating A, B, C, and we conclude that B is positive, then the estimation of B was a more important subtask than the estimation of A or C.
Even with arbitrary weights on subtasks, preserving average performance might seem too optimistic. In the previous example, if we conclude that all of A, B, C are negative, then any one of the three errors could lead to an erroneous overall conclusion. So if our original error rate was 5%, then our new error rate will be 15%.
In cases like this, it’s wrong of think of “error rate” as the measure of performance that should be preserved inductively. Some questions are harder than others, and should be expected to have higher error rate — if any rational agent tries to figure out whether any one of N quantities is positive, without any clever trick for figuring it out, then they’ll have a harder time than if they were just trying to estimate whether a single quantity is positive. This makes it harder to talk precisely about the guarantees of amplification, but leaves me feeling relatively optimistic about eventually finding a working analysis.
Amplification and control
-------------------------
I find the above intuition plausible when errors are “honest mistakes.” If instead one of the subtasks is optimized to cause trouble, then I think it is very hard (and likely impossible) to correct the problem. So in order to have any hope of maintaining performance, we need to guarantee that none of the subtask solutions are actively malicious.
But in order to guarantee that a subtask isn’t actively malicious, we also need to assume that none of \*its\* subtasks are actively malicious. If we want to carry out the induction to a reasonable depth, this basically requires a worst-case guarantee. Otherwise even a single failure, in one leaf of an exponentially large tree, could cause trouble.
Unfortunately, a human probably does not satisfy the control guarantee, since they may behave very strangely on some small subset of possible inputs or with small probability. I originally described [security amplification](/universality-and-security-amplification-551b314a3bab) and [reliability amplification](/reliability-amplification-a96efa115687) to gradually eliminate the human control failures (as well as new failures that crop up, e.g. from memetic selection inside the amplification process).
Interestingly, some ways of achieving the control guarantee in the distillation step could also be applied even if the overseer \*doesn’t\* satisfy the control guarantee, and so could be used as a replacement for security/reliability amplification. This isn’t as crazy as it sounds, since the ability to look at the entire code of a learned model gives us a substantial advantage over dealing with an opaque overseer. It gives a further reason to prioritize [techniques for worst-case performance](/techniques-for-optimizing-worst-case-performance-39eafec74b99) (and particularly interpretability).
Conclusion
==========
We can’t guarantee an AI is aligned if we have only an average-case guarantee, \*or\* onlyaworst-case guarantee. So achieving both seems like the “minimum viable product” for alignment research.
My original intuition (in mid-2016) was that having two separate guarantees must be an ugly hack, and that the real goal should encapsulate both. That’s no longer so clear to me: I think these two properties interact surprisingly nicely, such that they may actually suffice to get good behavior even though it looks like a weird combination. At the same time, I think attempts to capture both are much less promising than I’d initially believed.
I still think we need more clarity about what we should be trying to prove. I think that having two separate guarantees, one for the worst case and one for the average case, is the current best guess and is the most promising starting point for further research. In the short term, my focus will be on understanding ways in which this structure is inadequate and on independently refining each of these two subgoals. |
9f439903-460a-498e-84c0-64a878f9d8b9 | trentmkelly/LessWrong-43k | LessWrong | Evaluating Moral Theories
I would like to use my first post to expand on a framework I introduced in the Welcome thread for evaluating moral theories, and to request your feedback.
This thesis rests on the fact that a moral theory is a tool for helping us make choices. Starting from this premise, I believe that a moral theory needs to meet three criteria for it to be acceptable:
a) Its comprising principles must be non-contradictory. I think this is pretty self evident: if a theory consists of a number of principles that contradict each other, there will be situations where the theory will suggest contradictory actions - hence failing its purpose as a tool to enable choice making.
b) Its comprising principles must be non-arbitrary as far as possible. What I mean by this is that the principles must be derived logically from facts on which everyone agrees. Otherwise, if a moral theory rests on an arbitrary and subjective principle, the theory's advocates will never be able to convince people who do not share that principle of their theory's validity.
c) If the principles of the moral theory are taken to their logical conclusion, they must not lead to a society that the theory's proponents themselves would consider dystopian.
Note that my premise (i.e. that a moral theory is supposed to help us make choices) necessitates that the theory is not vague. So saying that a utilitarian system, using some magical measurement of utility, is a good moral theory is pointless in my view.
However, I want to draw a distinction between morality at the social level and morality at the personal level. The former refers to a moral system whose proponents believe should apply to the whole world; the latter, to the principles by which people live their private lives. The three criteria I listed should only be used to evaluate morality at the social level: if you want to impose your principles over every single human, you'd better make sure they are non-contradictory, acceptable by everyone and won't mess up |
99475aa1-3641-4d44-a603-f642ee8a7a49 | trentmkelly/LessWrong-43k | LessWrong | Weekly LW Meetups
This summary was posted to LW Main on November 1st. The following week's summary is here.
Irregularly scheduled Less Wrong meetups are taking place in:
* Atlanta: November Meetup (First of Two): 03 November 2013 07:00PM
* Cologne (Köln): 10 November 2013 04:00PM
* Princeton NJ Meetup: 16 November 2013 02:00PM
* Urbana-Champaign: Thinking Fast and Slow Discussion: 03 November 2013 03:00PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
* Austin, TX: 02 November 2019 01:30PM
* Durham/RTLW HPMoR discussion, chapters 94-96: 02 November 2013 12:00PM
* Washington DC/VA Games meetup: 03 November 2013 03:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, Cambridge UK, Columbus, London, Madison WI, Melbourne, Mountain View, New York, Philadelphia, Research Triangle NC, Salt Lake City, Seattle, Toronto, Vienna, Washington DC, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.
If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun!
In addition to the handy sidebar of upcoming meetups, a meetup overview will continue to be posted on the front page every Friday. These will be an attempt to collect information on all the meetups happening in the next weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll now also have the benefit of having your meetup mentioned in a weekly overview. These overview posts will be moved to the discussion section when the new post goes up.
Please note that for your meetup to appear in the weekly meetups feature, you need to post your meetup before the Friday before your meetup!
If you check Less Wrong irregu |
2db1d771-15ff-4c3f-8467-36e0b39e298a | trentmkelly/LessWrong-43k | LessWrong | Model-Based Policy Analysis under Deep Uncertainty
This post is based on my introduction talk at EAGxBerlin 2022. It is intended for policy researchers who want to extend their tool kit with computational tools. I show how we can support decision-making with simulation models of socio-technical systems while embracing uncertainties in a systematic manner. The technical field of decision-making under deep uncertainty offers a wide range of methods to account for various parametric and structural uncertainties while identifying robust policies where we want to optimize for multiple objectives simultaneously for particularly vulnerable scenarios.
Summary
* Real-world political decision-making problems are complex, with disputed knowledge, differing problem perceptions, opposing stakeholders, and interactions between framing the problem and problem-solving.
* Modeling can help policy-makers to navigate these complexities.
* Traditional modeling is ill-suited for this purpose.
* Systems modeling is a better fit (e.g., agent-based models).
* Deep uncertainty is more common than one might think.
* Deep uncertainty makes expected-utility reasoning virtually useless.
* Decision-Making under Deep Uncertainty is a framework that can build upon systems modeling and overcome deep uncertainties.
* Explorative modeling > predictive modeling.
* Value diversity (aka multiple objectives) > single objectives.
* Focus on finding vulnerable scenarios and robust policy solutions.
* Good fit with the mitigation of GCRs, X-risks, and S-risks.
Complexity
Complexity science is an interdisciplinary field that seeks to understand complex systems and the emergent behaviors that arise from the interactions of their components. Complexity is often an obstacle to decision-making. So, we need to address it.
Ant Colonies
Ant colonies are a great example of how complex systems can emerge from simple individual behaviors. Ants follow very simplistic rules, such as depositing food, following pheromone trails, and communicating wit |
ac0d54bb-734b-4891-8018-ec065e875125 | trentmkelly/LessWrong-43k | LessWrong | Investment, Work, and Vision:
Who is responsible for creating value?
Previously in this series:
Value Created vs. Value Extracted
A Brief Recap
I define creating value as the process of producing something new, something the world did not have before. I define extracting value as monopolizing something that already exists in order to personally benefit from denying others access to it.
I operationalize the difference between the two with a metaphor of an engineer who builds a bridge connecting two towns.
This bridge makes trade easier, cheaper, and safer than trade or travel by boat. Thus the engineer has created value.
Some years later, a troll happens by and decides to live beneath the bridge. This troll, Rob, forces any who would cross the bridge to pay a toll to him (or be squished by a large club). Thus Rob has extracted value from the bridge. This is bad and he should feel bad.
Affably Evil is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
----------------------------------------
What Did The Engineer Really Do?
Now, I say that the engineer created this value, but what does that really mean?
What did the engineer add to the world, that wasn’t there before? And how much of that is the engineer responsible for? What was her contribution?
Not The Atoms
Whether or not this particular engineer happens to be aware of the laws of thermodynamics, she certainly didn’t create the atoms that make up the bridge. They came from the earth, and can trace a proud lineage all the way back to the big bang.
That being said, it’s not controversial to point out that the value we find in physical matter is almost entirely in the shape of that matter, the arrangement of the atoms, rather than the atoms themselves. Computer chips are no different than sand and dirt, atomically, save for the arrangement of the atoms.
And the atoms that make up the bridge might as well have been living in their mother’s basement and eating nachos all day for all the productive impa |
25c062cb-1d6b-49b9-9792-174b75b7a3c0 | trentmkelly/LessWrong-43k | LessWrong | Zero Agents and Plastic Men
[Note: this post is mainly anecdata and speculation, so don’t expect academic citations and regression models. My epistemic status on this is, accordingly, speculative. I’m sure that many people discussed similar themes, but I arrived at these conclusions independently. Also, since the anecdotes are personal, the names and identifying details of all people in this post have been changed. ]
> There's two kinds of people in this world, son. There's the ingroup and there's the outgroup.
>
> — Jacob Falkovich (@yashkaf) April 21, 2016
This post is about sociology. I never actually studied sociology.
The closest I got was a sociology book I once received as a birthday gift from Maya, my Israeli ex-girlfriend who majored in sociology. She told me the book was about patience. I read the first three pages: the book turned out to be about gift-giving. Coincidentally, the two words are spelled the same in Hebrew (המתנה). Maya admitted that she never actually read it, but it was recommended by her sociology professor. And besides, she hinted, I could use to learn about patience anyway. I wasn’t sure how to learn patience from a book about gifts, so I never opened it again.
As for Maya, after we broke up I introduced her to a very patient friend of mine. They recently got married after eight years of patient dating.
So what do I know of sociology? All I know comes mainly from three sources. The first is the video of Stanley Milgram’s experiments on authority and obedience. The second is Scott’s post about the outgroup. And the third is Scott’s post about the ingroup.
The ingroup post is titled “The Ideology Is not the Movement”. It explains that extant tribes of people rarely stay concerned for long with the official reason for the tribe’s formation. Whatever else Sunni and Shia Muslims are killing each other over in 2016, the choice of rightful caliph to succeed Muhammad in 632 AD ain’t it. “Gamergate” isn’t the movement of people who think that “Depression Quest” is |
ec0d13f8-1017-4d94-ae95-7422bd61e714 | trentmkelly/LessWrong-43k | LessWrong | List of technical AI safety exercises and projects
EDIT 3/17/2023: I've reorganized the doc and added some governance projects.
I intend to maintain a list at this doc. I'll paste the current state of the doc (as of January 19th, 2023) below. I encourage people to comment with suggestions.
* Levelling Up in AI Safety Research Engineering [Public] (LW)
* Highly recommended list of AI safety research engineering resources for people at various skill levels.
* AI Alignment Awards
* Alignment jams / hackathons from Apart Research
* Past / upcoming hackathons: LLM, interpretability 1, AI test, interpretability 2
* Projects on AI Safety Ideas: LLM, interpretability, AI test
* Resources: black-box investigator of language models, interpretability playground (LW), AI test
* Examples of past projects; interpretability winners
* How to run one as an in-person event at your school
* Neel Nanda: 200 Concrete Open Problems in Mechanistic Interpretability (doc and previous version)
* Project page from AGI Safety Fundamentals and their Open List of Project ideas
* AI Safety Ideas by Apart Research; EAF post
* Most Important Century writing prize (Superlinear page)
* Center for AI Safety
* Competitions like SafeBench
* Student ML Safety Research Stipend Opportunity – provides stipends for doing ML research.
* course.mlsafety.org projects CAIS is looking for someone to add details about these projects on course.mlsafety.org
* Distilling / summarizing / synthesizing / reviewing / explaining
* Forming your own views on AI safety (without stress!) – also see Neel's presentation slides and "Inside Views Resources" doc
* Answer some of the application questions from the winter 2022 SERI-MATS, such as Vivek Hebbar's problems
* 10 exercises from Akash in “Resources that (I think) new alignment researchers should know about”
* [T] Deception Demo Brainstorm has some ideas (message Thomas Larsen if these seem interesting)
* Upcoming 2023 Open Philanthropy AI Worldviews Contest
* Alignment research |
b6eb7970-fa35-4afd-8538-46b26a2fb1cc | StampyAI/alignment-research-dataset/lesswrong | LessWrong | [Fiction] Lena (MMAcevedo)
Wiki article about the first brain image of a human upload (2031). Subpar in some respects, but initially a compliant worker and free to copy thanks to court cases ruling that the biological original did not have a legal right to restrict its use. |
367e1f51-0466-48f0-8959-41d03dae7a78 | trentmkelly/LessWrong-43k | LessWrong | Which Anaesthetic To Choose?
In causal decision theory, the perspective/indexical aspect could lead to reflective inconsistency. This is generally regarded as a problem. Here I present a thought experiment to show why this view may require further review.
Two Anaesthetics
Suppose you are about to undergo a major operation. You can choose one of the following two anesthetics:
Drug A functions as follows:
1. It paralyzes your body and relaxes your muscles
2. It prevents long-term memory from forming for the duration of the operation
Drug B functions as follows:
1. It paralyzes your body and relaxes your muscles (same as A)
2. It renders you unconscious for the duration of the operation
Suppose the two drugs are equal in all other considerations such as safety, long-term side effects, etc. But Drug A is covered by your health insurance while Drug B requires you to pay 1 dollar out-of-pocket. What would you pick?
Reflective Inconsistency
Drug A would lead to the dreaded anesthesia awareness, that you will feel the excruciating pain and trauma of being operated on. To prevent it from happening, for the mere cost of 1 dollar, Drug B is the obvious choice.
Yet for moments after the operation, you would be wishing that you had chosen Drug A instead. The choice of drugs does not lead to any long-term difference, other than the fact that you would be 1 dollar richer if you had gone with Drug A. Furthermore, you know this before the operation.
If the later you dictate your current decision it would mean Drug A is the clear winner. (Consider this as a case of post-commitment, in contrast to pre-commitments where an earlier self takes away the decision at a later point, often appearing in decision-making problems such as Parfit's Hitchhiker) Hence the inconsistency.
Decision Theories
CDT has a naturally built-in indexical element. Considering the fact that the decision is based on a pre-operation perspective, it will choose Drug B over A.
However, for non-indexical decision theories, |
af5c3dd5-df88-417d-8ef0-4d4662b28120 | trentmkelly/LessWrong-43k | LessWrong | [Linkpost] Otu and Lai: a story
Link: http://vansochill.blog/2019/09/15/otu-and-lai-a-story/
This story was originally for my blog, and it was going to build up to a follow-on post about morality/philosophy, but I think it's an important notion for rationalists too.
(NB: My blog has a gimmick, which is that I don't post anything with E's in it. As such, this story's wording is a bit unusual.)
Without ado:
> On a cool autumn day, two girls found a dark hollow. At 14 and 16, Otu and Lai had no grasp of what risks its winding corridors could hold. Lai, always an analyst, was curious what lay at its bottom, if it had a bottom at all. Otu, young and timid, saw no option but to tag along.
> It wasn’t long until that hollow’s mouth was a long-ago ghost of a thought. Lai’s hand ran along dank rock walls just to find a way to walk; sunlight had all but quit our protagonists.
> “I know this wall!” Lai would say now and again. “Now I can find us a way out!”
> On no occasion did this pan out, and following six dud calls, Lai didn’t try again.
> “Damn,” said Lai at last. “Okay, I couldn’t hold a full map of this hollow in my mind. But you know what? Turning right at all junctions should work to bring us back out. That’s a fact I know from math class, so it has to work.”
> But that was a no-go too; it wasn’t always turning right that would bring you out; it was always following a wall that would do it. Lai’s slip cost an hour, at which point both girls had to stop walking and sit down for a bit. Gloom sprung up, and spans of quick sobs took turns with spans of calm.
> Finally Otu lay down, admitting that touching a bit of dirt was worth not having an aching butt from sitting for so long. Might so much as a nap stand as too high a wish?
> It did, for Otu’s torso hit not ground but a plastic chassis.
> “Ouch! What was that?”
> Lai sat up. “What’s what?
> Otu’s hand found it, took it, hit it, hit it again, and ran along its plastic skin, trying to find out what it was. It was long-ish, a tubular for |
1deb1b6a-b31b-4ae3-bb4e-e9db5ecde765 | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Outside the Laboratory
Today's post, Outside the Laboratory was originally published on 21 January 2007. A summary (taken from the LW wiki):
> Written regarding the proverb "Outside the laboratory, scientists are no wiser than anyone else." The case is made that if this proverb is in fact true, that's quite worrisome because it implies that scientists are blindly following scientific rituals without understanding why. In particular, it is argued that if a scientist is religious, they probably don't understand the foundations of science very well.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Some Claims Are Just Too Extraordinary, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
76cd0bff-9244-48c3-8c22-5881c8736c79 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | H-JEPA might be technically alignable in a modified form
This post is a collection of replies to Steven Byrnes' "[LeCun’s “A Path Towards Autonomous Machine Intelligence” has an unsolved technical alignment problem](https://www.lesswrong.com/posts/C5guLAx7ieQoowv3d/lecun-s-a-path-towards-autonomous-machine-intelligence-has-1)" which wind up into a whole separate post because **I suggest tweaks to the H-JEPA architecture (aka APTAMI) that** ***might*** **address the problems pointed out by Byrnes.**
In the end, I also suggest a reason to ponder on the H-JEPA architecture and its safety properties even if we don't expect it to work and/or don't actually plan to deploy it. In short, this reason is to simply train ourselves, as AI alignment and cognitive scientists, to understand and find problems with [AI-generated alignment science and proposals](https://www.lesswrong.com/tag/ai-assisted-ai-automated-alignment).
Below are quotes from Steven's post and my replies.
Recommended skimming strategy: read section headings and bolded sentences in the text.
The ambivalence between pro-sociality and instrumental thinking could be useful
-------------------------------------------------------------------------------
> Such an AI would feel “torn”, so to speak, when deciding whether to advance its own goals versus humans’.
>
>
This ambivalence could also be a necessary condition for useful and/or safe general intelligence. People themselves are torn between pro-social and selfish motives all the time. Totally controllable and subservient robot might turn out not to be a very useful or downright dangerous thing, e.g., in the hands of misaligned people.
This is somewhat related to the currently open question of [whether a "tool"/simulator AI is safer than an "agent" AI](https://www.lesswrong.com/posts/opE6L8jBTTNAyaDbB/a-multi-disciplinary-view-on-ai-safety-research#3_3__AI_self_control__tool__oracle__simulator__AI_vs__agent_AI) with a certain ethical ego or vice versa, all things considered.
The Altruism Controller
-----------------------
> We can *hope* that the prosocial drives will “win out” in the AI’s reckoning, but we don’t have any strong reason to expect that they *will* in fact win out.
>
>
Let’s say there is a submodule within the Trainable Critic module, or as a completely separate module, called Altruism Controller, where the statistics are kept (or learned, by an NN) about how much do pro-social components of intrinsic cost (or, ultimately, trainable cost) vs. “instrumental” components (power, resources, interestingness, etc.) contribute, depending on the context: the world state and the level of inference. Let’s say, on the very low levels of inference, such as precise locomotive planning and control, instrumental components of energy usually dominate, i.e., fine-grained motion planning is usually motivated by power economy and low-level instrumental expediency, such as opening a kitchen cabinet for resolving epistemic uncertainty — finding a knife, rather than “moving in a playful way that surrounding people find funny”. On the higher levels of planning and inference, the picture could be reversed: plans for a day are usually made such that they minimise energy coming from pro-social intrinsic cost components (people are happy at the end of the day), rather than instrumental (full battery power at the end of the day, more resources at the end of the day). This picture also depends on the context: when there are no people around (e.g., a robot on a human-free Von Neumann probe to a different star), plans for a day are obviously not usually driven by pro-sociality (but plans for a *century* still could).
Then, when this separate module detects an unusual situation (e.g., a totally new context), or that **a certain inferred plan at a certain level of the hierarchy is unusually strongly driven by instrumental components of intrinsic and trainable cost in the certain context, then such situation itself could be a trigger for out-of-order inference on the higher level of abstraction**, or existing into the Configurator mode immediately to re-configure the Actor NN so that it hopefully generates more “pro-social” plans if unexpectedly failed in the given context and configuration previously.
Now, this doesn’t provide an ironclad guarantee because Altruism Controller could also unexpectedly fail, or be tampered with by the robot itself.
However, if we consider self-tampering, all bets are off because the robot could tamper with everything, including the “immutable” intrinsic cost module. That’s a very different line of criticism of LeCun’s confidence that “we are fine and alignment is easy”, which I totally agree with as unsolved, but I believe this is not the main topic of Steven's post.
Regarding the possibility of a failure of the Altruism Controller, yes, H-JEPA is *not* *provably* safe, but nor is RLHF/language feedback, Conjecture's CoEm agenda, Bayesian model alignment, and all other "prosaic" approaches. Only [Highly Reliable Agent Designs](https://www.lesswrong.com/posts/hWtpqjYXAvFExmAsD/arguments-about-highly-reliable-agent-designs-as-a-useful) aim to be *provably* safe (MIRI style), but many people are sceptical that this concept even makes sense from philosophical and scientific points of view, as I discussed in "[A multi-disciplinary view on AI safety](https://www.lesswrong.com/s/s2ye5AAnGQnqG2bhK)". But this is a digression.
Ways to address the "adversarial" OOD generalisation problem: prioritise science and continuously align on the models
---------------------------------------------------------------------------------------------------------------------
> *Why is problem 2 an “adversarial” OOD problem?* Here’s a toy example. Imagine that the AI is deciding what to do, out of a very wide possibility space. For example, once we get AIs that can invent new technology, then the AI has access to actions that might wildly change the world compared to anything in history. Thus, if there are any anomalies where the critic judges a weird situation as unusually low-intrinsic-cost, then we’re in a situation where the AI’s brainstorming process is *actively seeking out such anomalies*.
>
> (From our human perspective, we would say “this plan is exploiting an anomalous edge-case in the critic”. Whereas from the AI’s perspective, it would say, “this plan is a clever awesome out-of-the-box way to solve every problem!!” You say tomato, I say to-*mah*-to.)
>
>
It’s amusing to note that you don’t need to invoke machine intelligence here: this is *exactly* what AGI developers/maximalists/accelerationists (aspiring or real) are thinking and doing, today. So, I think we can deem the problem unsolved *even in humans*.
The possible systemic responses (but not “solutions”) to this issue are
(1) **prioritisation of reasoning based on** ***predictive, scientific*** **models/theories of everything (including psychology, ethics, sociology, political science, etc.) rather than intuitive guesses**, and
(2) **continuous alignment of all intelligent agents** (and, more generally, [alignment subjects](https://www.lesswrong.com/posts/3BPuuNDavJ2drKvGK/scientism-vs-people?commentId=6KvNSoZov4rXbzAu6)) **on these models/theories**, and proactive management of disagreements and incompatibilities (e.g., letting agents choose to interact with other agents whose local version of ethics they share).
Both responses are agnostic to whether we talk about human-to-human or human-to-AI interaction. And, of course, both are easier said than done.
Ways to address the misgeneralisation problem: align on epistemology and design better system incentives
--------------------------------------------------------------------------------------------------------
> *Out-of-distribution generalization problem 1:* How does the Prosociality Score Model generalize from the supervised (human-labeled) examples to the AI’s future perceptions—which might be far outside that training distribution?
>
> *Why is problem 1 an “adversarial” OOD problem?* Here’s a toy example. The AI might notice that it finds it pleasing to watch movies of happy people—because it spuriously triggers the Prosociality Score Model. Then the AI might find itself wanting to make its own movies to watch. As the AI fiddles with the settings in iMovie, it might find that certain texture manipulations make the movie *really really* pleasing to watch on loop—because it “tricks” the Prosociality Score Model into giving anomalously high scores.
>
>
This is the problem of misaligned epistemology: in order for inferences to be “aligned”, humans’ and AI’s epistemological disciplines/theories/skills/methods should also be aligned.
My post “[`Goal alignment without alignment on epistemology, ethics, and science is futile`](https://www.lesswrong.com/posts/fqfAmAGFLKpsnjfJB/goal-alignment-without-alignment-on-epistemology-ethics-and)” was prompted by a very similar example given by Nate Soares, to which I responded:
> Concrete example: "happiness" in the post ”[Misgeneralization is a misnomer](https://www.lesswrong.com/posts/dkjwSLfvKwpaQSuWo/misgeneralization-as-a-misnomer)” sounds like a "predicted" future state of the world (where "all people are happy"), which implicitly leverages certain scientific theories (e.g., what does it mean for people to be happy), epistemology (how do we know that people are happy), and ethics: is the predicted plan of moving from the current state of the world, where not all people are happy, to the future state of the world where all people are happy, conforms with our ethical and moral theories? Does it matter how many people are happy? Does it matter whether other living being become unhappy in the course of this plan, and to what degree? Does it matter that AIs are happy or not? Wouldn't it be more ethical to "solve happiness" or "remove unhappiness" via human-AI merge, mind upload, or something else like that? And on and on.
>
> Thus, without aligning with AI on epistemology, rationality, ethics, and science, "asking" AIs to "make people happy" is just a gamble with infinitesimal chances of "winning".
>
>
Thus, in the H-JEPA framework, this could potentially be implemented as **a host of separate components in the Critic module, trained to evaluate whether the plans inferred by the AI exhibited the disciplines of foundational philosophy (philosophy of mathematics and philosophy of science), mathematics, epistemology, ethics, rationality, physics, communication, game theory, cognitive science, psychology/theory of mind, etc. that humans have.**
Now, again it’s much easier said than done. Perhaps the biggest problem is that humans themselves are not aligned on these disciplines. Then, the approach obviously sets an extremely high entrance bar for making an AGI, much more of a Manhattan R&D megaproject than bottom-up AGI research and tinkering that happens in practice (and megaprojects themselves have well-known failure modes). Also, I'm not sure it's possible to learn abstract methodological and scientific disciplines (assuming that we have a reference textbook and a body of knowledge for the discipline, and a set of scientific models) through effectively supervised, imitation-trained NN module in the Critic (as Steven suggested above for the Prosociality Score Model).
Apart from alignment on epistemology and ethics, **maybe one useful tactic could be reducing the inferential distance between human and AI intelligence, e.g., by giving AIs direct access to Neuralink data so that AI can infer more directly whether a human is “unhappy” from neuronal evidence**. This is also the path towards cyborgisation and in line with the idea that I have that long-term (100+ years), the only viable strategy for humanity is to merge with AI.
This is also an interesting topic from the perspective of **information asymmetry and game theory: presumably if we develop interpretability tools, we will have near-perfect visibility into AI’s cognitive states. But without neuronal data, AI won’t have visibility into ours. I’m not sure whether such an asymmetry is a good or bad thing.** But it could be a good thing, and giving AIs direct access to human cognitive states could be more dangerous than useful.
Finally, on this topic, it’s interesting to note that this “*Out-of-distribution generalization problem 1*” is also unsolved for humans right now: certain evolutionary and pro-social drives in humans are hijacked by out-of-distribution stuff such as (social) media, destructive ideologies and cults, porn, online dating, etc. All these things are brand new and thus “out of distribution” on the timescale of the evolution of humans and humans often don’t handle them well.
**There is probably no easy fix for hijacking on the level of an individual and thus the issue should be addressed on the higher-system level, with governance and deliberate societal/economic system design, incentive design, mechanism design, etc. Probably the same will apply to AIs: no matter how good their design, the higher-level system should also provide the right incentives.** I discussed this recently in [this comment](https://www.lesswrong.com/posts/bRtP7Mub3hXAoo4vQ/an-open-letter-to-seri-mats-program-organisers?commentId=TCn7J7AYv3FgqdFfe).
AI alignment scientists could work on the H-JEPA proposal just to prepare themselves for evaluating AI-generated science
------------------------------------------------------------------------------------------------------------------------
> For one thing, we don’t actually know for sure that this technical alignment problem is solvable at all, until we solve it. And if it’s not in fact solvable, *then we should not be working on this research program at all*.
>
>
This is an important point, indeed. **My attitude towards APTAMI is that it** ***could*** **work, but this depends on developing an enormous amount of novel science in cognitive science, epistemology, ethics, etc., which wouldn’t be possible without the help of an “**[**alignment MVP**](https://www.lesswrong.com/posts/fYf9JAwa6BYMt8GBj/link-a-minimal-viable-product-for-alignment)**”.** But that “alignment MVP” (e.g., a [CoEm](https://www.lesswrong.com/posts/ngEvKav9w57XrGQnb/cognitive-emulation-a-naive-ai-safety-proposal)) could tell us “Sorry guys, this looks like a technically unsolvable problem”, or fail to find a solution no matter how hard it tries. (This is a big risk factor in the “alignment MVP” plans such as those of OpenAI and Conjecture, but discussing this risk is out of scope here.)
So, in the context of the “alignment MVP” plans of AGI labs, I think we could and should think about the proposed research program, but not really with the purpose of actually developing this research program to maturity and deploying it “at full capacity” (unless the current LLM scaling paradigm will fail short of “AGI scientist” level, as LeCun predicts, and we will be forced to develop LeCun’s agenda to make the very “alignment MVP”). The purpose is, rather, to prepare ourselves as alignment scientists to find problems with whatever cognitive and alignment science “alignment MVP” will be generating (or assisting humans to generate), because ultimately it will be the human task to understand and verify that body of research and engineering designs. |
80f162df-a535-44f9-8705-6c1ca8712766 | trentmkelly/LessWrong-43k | LessWrong | My new paper: Concept learning for safe autonomous AI
Abstract: Sophisticated autonomous AI may need to base its behavior on fuzzy concepts that cannot be rigorously defined, such as well-being or rights. Obtaining desired AI behavior requires a way to accurately specify these concepts. We review some evidence suggesting that the human brain generates its concepts using a relatively limited set of rules and mechanisms. This suggests that it might be feasible to build AI systems that use similar criteria and mechanisms for generating their own concepts, and could thus learn similar concepts as humans do. We discuss this possibility, and also consider possible complications arising from the embodied nature of human thought, possible evolutionary vestiges in cognition, the social nature of concepts, and the need to compare conceptual representations between humans and AI systems.
I just got word that this paper was accepted for the AAAI-15 Workshop on AI and Ethics: I've uploaded a preprint here. I'm hoping that this could help seed a possibly valuable new subfield of FAI research. Thanks to Steve Rayhawk for invaluable assistance while I was writing this paper: it probably wouldn't have gotten done without his feedback motivating me to work on this.
Comments welcome. |
4e23ab43-63ef-4a02-bcf2-8ca2059b0a32 | trentmkelly/LessWrong-43k | LessWrong | Why Study Physics?
Physics seems to have a bunch of useful epistemic techniques which haven’t been made very legible yet.
The two big legible epistemic techniques in technical fields are Mathematical Proofs, and The Scientific Method. Either derive logically X from some widely-accepted axioms, or hypothesize X and then do a bunch of experiments which we’d expect to come out some other way if X were false. It seems pretty obvious that science requires a bunch of pieces besides those in order to actually work in practice, but those are the two which we’ve nailed down most thoroughly.
Then there’s less-legible methods. Things like fermi estimates, gears-level models, informal mathematical arguments, an aesthetic sense for kinds-of-models-which-tend-to-generalize-well, the habit of figuring out qualitative features of an answer before calculating it, back-of-the-envelope approximations, etc.
Take informal mathematical arguments, for example. We’re talking about things like the use of infinitesimals in early calculus, or delta functions, or Fourier methods, or renormalization. Physicists used each of these for decades or even centuries before their methods were rigorously proven correct. In each case, one could construct pathological examples in which the tool broke down, yet physicists in practice had a good sense for what kinds-of-things one could and could not do with the tools, based on rough informal arguments. And they worked! In every case, mathematicians eventually came along and set the tools on rigorous foundations, and the tools turned out to work in basically the cases a physicist would expect.
So there’s clearly some epistemic techniques here which aren’t captured by Mathematical Proof + The Scientific Method. Physicists were able to figure out correct techniques before the proofs were available. The Scientific Method played a role - physicists could check their results against real-world data - but that’s mostly just checking the answer. The hard part was to figure out wh |
7ed4883c-fc60-4da4-a5a4-875d387190a4 | trentmkelly/LessWrong-43k | LessWrong | Glen Weyl: "Why I Was Wrong to Demonize Rationalism"
Glen Weyl reflects on his previous disparagement of the social scene surrounding this website, and expresses regret at having been too hostile: while he stands by many of his specific criticisms, he now thinks "rationalists" should be seen as more similar to other intellectual communities (which can be allied on some issues if not others), rather than a uniquely nefarious threat. (October 2021, 1300 words) |
d3e54f26-834c-4bd0-a40b-e61bb5c6ab1d | trentmkelly/LessWrong-43k | LessWrong | [HPMoR Podcast] A Musical Help Request
I am quickly approaching Self-Awareness Part 1, which readers may remember as the chapter with the Ghostbusters song. Anyone who's listened to the audio of Chapter 6a knows that I have no singing talent whatsoever. :) I'm panicked at having to try to sing an actual song that many people love. So I'm asking for help.
Those familiar with the chapter know that a number of students all sing altered lyrics together over the Ghostbusters Theme music. If a number of people could sing these lyrics into a microphone and send me the file I could mix them all together and overlay them on the original music. I think/hope that would sound rather cool, and I'd mention everyone who contributed by name (either real or online, your choice) in the credits (unless they'd rather not be named of course).
If you know other HPMoR fans IRL and can get a number of them in the same room to all sing at once into a microphone, even better! It could even be fun!
I'm terrified of doing this solo, please help! The sound files can be sent to HPMoRPodcast AT gmail.com and I'd need them by midnight on Monday, June 6th. Thanks!
The original song can be found here: http://www.youtube.com/watch?v=iCHFVTQKqdQ
The alternate lyrics are below:
To the tune of "Ghostbusters"
(As performed on the kazoo by Fred and George Weasley,
and sung by Lee Jordan.)
.
There's a Dark Lord near?
Got no need to fear
Who you gonna call?
"HARRY POTTER!" shouted Lee Jordan, and the Weasley twins performed a triumphant chorus.
With a Killing Curse?
Well it could be worse.
Who you gonna call?
"HARRY POTTER!" There were a lot more voices shouting it this time.
I ain't afraid of Dark Lords!
I ain't afraid of Dark Lords!
Dark robes and a mask?
Impossible task?
Who you gonna call?
HARRY POTTER!
Giant Fire-Ape?
Old bat in a cape?
Who you gonna call?
HARRY POTTER!
:
I ain't afraid of Dark Lords!
I ain't afraid of Dark Lords!
|
ff5d0834-f3be-4c0e-80f0-53692cfb5d46 | trentmkelly/LessWrong-43k | LessWrong | [LINK] Q&A with John E. Laird and Kristinn R. Thorisson on risks from AI
(Not added to the previous post so as not to foul up your RSS feeds.)
The previous interview in XiXiDu's series, not posted here but on his site. They're all indexed on the wiki.
Professor John E. Laird is the founder of Soar Technology, an Ann Arbor company specializing in creating autonomous AI entities. His major research interest is in creating human-level artificial intelligent entities, with an emphasis on the underlying cognitive architecture.
Dr. Kristinn R. Thorisson has been developing A.I. systems and technologies for over two decades. He is the Coordinator / Principal Investigator of the HUMANOBS FP7 project and co-author of the AERA architecture, with Eric Nivel, which targets artificial general intelligence. A key driving force behind the project is Thorisson’s new Constructivist Methodology which lays out principles for why and how AI architectures must be given introspective and self-programming capabilities. |
f12c3ba5-4934-4d1e-94a6-cd07bfcdbc2e | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Understanding the Lottery Ticket Hypothesis
*Financial status: This is independent research. I welcome* [*financial support*](https://www.alexflint.io/donate.html) *to make further posts like this possible.*
*Epistemic status: The thread of research I’m reviewing here has been contentious in the past so I expect to make updates based on the comments.*
---
Outline
-------
* This is my attempt to understand the lottery ticket hypothesis.
* I review the original paper, as well as two follow-up papers and one related paper.
* I also go over four previous lesswrong posts on this subject.
Introduction
------------
The lottery ticket hypothesis is a hypothesis about how neural network training works. It was proposed in 2018 by Jonathan Frankle, a PhD student at MIT, and Michael Carbin, a professor at MIT. It suggests that, in a certain sense, much of the action in neural network training is during initialization, not during optimization.
Research that sheds light on neural network training is relevant to alignment because neural network architectures may eventually become large enough to express dangerous patterns of cognition, and it seems unlikely that these patterns of cognition can be detected by input/output evaluations alone, so our only choices seem to be (1) abandon the contemporary machine learning paradigm and seek a new paradigm, or (2) augment the contemporary machine learning paradigm with some non-input/output method sufficient to avoid deploying dangerous patterns of cognition. Insights into contemporary machine learning effectiveness is relevant both to determining whether course (1) or (2) is more promising, and to executing course (2) if that turns out to be the better course.
The lottery ticket hypothesis
-----------------------------
The lottery ticket hypothesis (or LTH), as originally articulated by [Frankle and Carbin](https://arxiv.org/abs/1803.03635), says:
>
> A randomly-initialized, dense neural network contains a subnetwork that is initialized such that—when trained in isolation—it can match the test accuracy of the original network after training for at most the same number of iterations.
>
>
>
A "subnetwork" means that you take the original neural network and clamp some of the weights to zero, so a network with N.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
weights has 2N subnetworks. The lottery ticket *conjecture*, which is an extension of the lottery ticket hypothesis but which Frankle and Carbin are careful to point out is not tested directly in their original paper says:
>
> SGD seeks out and trains a subset of well-initialized weights. Dense, randomly-initialized networks are easier to train than the sparse networks that result from pruning because there are more possible subnetworks from which training might recover a winning ticket.
>
>
>
The lottery ticket conjecture is much more interesting from the perspective of understanding neural network training, and seems to be referred to as the "Lottery Ticket Hypothesis" both in the literature and on this site, so I will follow that convention in the remainder of this post. I will take the original lottery ticket hypothesis as an implication.
In neural network training, in the absence of the lottery ticket hypothesis, we think of the role of SGD as pushing the weights of the overall network towards a local minimum of a loss function, which measures the performance of the network on a training set:
On this view, the point in the loss landscape at which we begin optimizing — that is, the way we initialize the weights at the beginning of training — is not really where the main action is. Initialization might well be important, but the main point of initialization is to start in some not-completely-crazy part of the loss landscape, after which the optimization algorithm does the real work.
The lottery ticket hypothesis says that actually we should view a neural network as an ensemble of a huge number of sparse subnetworks, and that there is some as-yet poorly understood property of the initial weights of each of these subnetworks that determines to how quickly they will learn during training, and how well they will generalize at the end of training. The lottery ticket hypothesis says that among this huge ensemble of subnetworks, some number of subnetworks have this "trainability" property by virtue of having been initialized in accord with this as-yet poorly understood property. What the optimization algorithm is implicitly doing, then, is (1) identifying which subnetworks have this property, (2) training and upweighting them, and (3) downweighting the other networks that do not have this property.
Now this "lottery ticket view" of what is happening during neural network training does not exactly overturn the classical "whole network optimization view". The two views are of course compatible. But the lottery ticket hypothesis does make predictions, such as that it might be possible to give our entire network the as-yet poorly understood initialization property and improve training performance.
Now it’s not that the winning "lottery ticket" is already trained, it just has some property that causes it to be efficiently trainable. There is some follow-up work concerning untrained subnetworks, but I do not believe that it suggests that neural network training consists of simply picking a subnetwork that already solves the task at hand. I discuss that work below under "supermasks" and "weight-agnostic networks".
Also, if some network *does* contain a subnetwork that would perform well at a task on its own, that does not mean that there is a neuron within the network that expresses the output of this particular subnetwork. The output of any one neuron will be a combination of the outputs of all the subnetworks that do not mask that neuron out, which will generally include exponentially many subnetworks.
So how did Frankle and Carbin actually investigate this? They used the following procedure:
1. Train a dense neural network on a computer vision task
2. After training, pick a subnetwork that discards the bottom X% of weights ranked by absolute magnitude, for some values of X from 5% to 95%
3. Now reset the remaining weights to the value they had when the original dense network was initialized
4. Train this reduced subnetwork on the same computer vision task
5. Compare this to training the same reduced subnetwork initialized with freshly randomized weights
It turns out that the subnetworks produced by step 4 train faster and ultimately generalize better than the subnetworks produced by step 5. On this basis the authors conjecture that there was some special property of the initialization of this particular subnetwork, and that due to this property it trained efficiently relative to its peers, and that it was thus implicitly upweighted by the optimization algorithm.
In many of the experiments in the paper, the authors actually iterate steps 2-4 several times, pruning the weights gradually over several re-training phases rather than all at once after training the dense network just once.
When running experiments with larger architectures (VGG-19 and ResNet), the authors find:
>
> We continue to find winning tickets for all of these architectures; however, our method for finding them, iterative pruning, is sensitive to the particular learning rate used.
>
>
>
In some [follow-up work](https://arxiv.org/abs/1903.01611), Frankle and Carbin also found it necessary to use "late resetting", which means resetting the weights of the subnetwork not to their original values from initialization but to their values from 1% - 7% of the way through training the dense network.
Deconstructing lottery tickets: signs, zeros, and the supermask
---------------------------------------------------------------
The suggestion that there might be some property of the initialization of a neural network that causes it to learn quickly and generalize well has prompted follow-up work trying to uncover the exact nature of that property. Zhou et al from Uber AI [investigated](https://arxiv.org/pdf/1905.01067.pdf) the lottery ticket hypothesis and found the following.
First, among the weights that the sparse subnetwork is reset to in step 3, only the sign matters. If you replace all the positive weights with 1 and all the negative weights with -1 then the subnetwork still trains efficiently and generalizes well, but if you randomize all the weights then this property disappears.
Second, if instead of clamping the pruned weights to zero you clamp them to their initial value from the dense network, then the good performance disappears. They hypothesize that clamping small-magnitude weights to zero is actually acting as a form of training since perhaps those weights were heading towards zero anyway.
Thirdly, and most fascinatingly, that they can find some "supermasks" such that merely applying the mask to an untrained dense network already produces better-than-random results. It is *not* that the supermask identifies a subnetwork that already solves the task. The untrained subnetwork identified by a supermask actually performs very poorly by the standards of any trained supervised learning system: 20% error on MNIST compared to 12% achieved by linear classification and 0.18% achieved by trained convnets. But 20% error is much better than the 90% error you would expect from chance predictions[[1]](#fn-DsSta2MsnyP378ftL-1). The authors suggest that we think of masking as a form of training.
One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers
------------------------------------------------------------------------------------------------------
In this paper, Morcos et al [show](https://arxiv.org/pdf/1906.02773.pdf) that a "winning lottery ticket" subnetwork found by training a dense network on one dataset with one optimization algorithm still retains its attractive properties of efficient training and good generalization when the subnetwork is later trained on a different dataset or optimized by a different optimizer.
Weight-agnostic neural networks
-------------------------------
This [paper from Google Brain](https://weightagnostic.github.io/) finds neural network architectures that perform well *no matter which weights are inserted into them*. They demonstrate networks solving reinforcement learning problems when all of their weights are set to 1, and then still solving the same problem when all of their weights are set to 2, 10, etc. The paper uses a completely different method to find architectures from the one used by Frankle and Carbin so this paper is a bit outside the lottery ticket literature, but it provides further evidence that weight training may not be the entirety of "where the action is at" in neural network training.
Daniel on the lottery ticket hypothesis and scaling
---------------------------------------------------
Daniel Kokotajlo asks whether the lottery ticket hypothesis, if true, would suggest that machine learning will continue to solve more problems as we apply more computing power to it. The reason is that more computing power means we can train networks with more parameters, which means we are searching over more "lottery tickets", which might mean we are able to solve increasingly difficult problems where lottery tickets are harder and harder to come by.
Evan on the lottery ticket hypothesis and deep double descent
-------------------------------------------------------------
Evan Hubinger [speculates](https://www.lesswrong.com/posts/FRv7ryoqtvSuqBxuT/understanding-deep-double-descent) about whether the lottery ticket hypothesis might explain the deep double descent phenomenon:
>
> My guess as to how double descent works if the Lottery Tickets Hypothesis is true is that in the interpolation regime SGD gets to just focus on the winning tickets and ignore the others—since it doesn't have to use the full model capacity—whereas on the interpolation threshold SGD is forced to make use of the full network (to get the full model capacity), not just the winning tickets, which hurts generalization. [...] That's just speculation on my part, however
>
>
>
John on the lottery ticket hypothesis and parameter tangent spaces
------------------------------------------------------------------
John Swentworth [proposes](https://www.lesswrong.com/posts/i9p5KWNWcthccsxqm/updating-the-lottery-ticket-hypothesis) an update to the lottery ticket hypothesis informed by recent results that show that the weights of large neural networks actually don’t change very much over the course of training on practical machine learning problems:
>
> At initialization, we randomly choose θ0, and that determines the parameter tangent space - that’s our set of "lottery tickets". The SGD training process then solves the equations - it picks out the lottery tickets which perfectly match the data. In practice, there will be many such lottery tickets - many solutions to the equations - because modern nets are extremely overparameterized. SGD effectively picks one of them at random
>
>
>
I don’t yet understand this proposal. In what way do we decompose this parameter tangent space into "lottery tickets"? Are the lottery tickets the cross product of subnetworks and points in the parameter tangent space? The subnetworks alone? If the latter then how does this differ from the original lottery ticket hypothesis?
John quotes the following synopsis of the lottery ticket hypothesis:
>
> When the network is randomly initialized, there is a sub-network that is already decent at the task. Then, when training happens, that sub-network is reinforced and all other sub-networks are dampened so as to not interfere.
>
>
>
The "supermask" results above *do* suggest that this synopsis is accurate, so far as I can tell, but it’s important to realize that "decent" might mean "better than random but worse than linear regression", and the already-present subnetwork does not *just* get reinforced during training, it also gets trained to a very significant extent. There is a [thread](https://www.lesswrong.com/posts/wFJqi75y9eW8mf8TR/does-the-lottery-ticket-hypothesis-suggest-the-scaling?commentId=tBAqAjB8GF5Qb6afG) between Daniel Kokotajlo and Daniel Filan about this synopsis that references several papers I haven’t reviewed yet. They seem to agree that this synopsis is at least not implied by the experiments in the original lottery ticket hypothesis paper, which I agree is true.
Abram on the lottery ticket hypothesis and deception
----------------------------------------------------
Abram [points out](https://www.lesswrong.com/posts/wpbpvjZCK3JhzpR2D/gradations-of-inner-alignment-obstacles) that the lottery ticket hypothesis being true could be disheartening news from the perspective of safety:
>
> My Contentious Position for this subsection: Some versions of the lottery ticket hypothesis seem to imply that deceptive circuits are already present at the beginning of training.
>
>
>
Daniel provides the following [helpful summary](https://www.lesswrong.com/posts/pTm6aEvmepJEA5cuK/parsing-abram-on-gradations-of-inner-alignment-obstacles?commentId=erMEi3dh5FTMSe33K) of Abram’s argument in a comment:
>
> [the parameter tangent space version of the lottery ticket hypothesis] seems to be saying that the training process basically just throws away all the tickets that score less than perfectly, and randomly selects one of the rest. This means that tickets which are deceptive agents and whatnot are in there from the beginning, and if they score well, then they have as much chance of being selected at the end as anything else that scores well. And since we should expect deceptive agents that score well to outnumber aligned agents that score well... we should expect deception.
>
>
>
I previously [attempted to summarize](https://www.lesswrong.com/posts/pTm6aEvmepJEA5cuK/parsing-abram-on-gradations-of-inner-alignment-obstacles) this post by Abram.
Conclusion
----------
It’s exciting to see these insights being developed within the mainstream machine learning literature. It’s also exciting to see their safety implications beginning to be fleshed out here. I hope this post helps by summarizing some of the experimental results that have led to these hypotheses.
---
1. We would expect 90% error from chance predictions since MNIST is a handwritten digit recognition dataset with 10 possible labels. [↩︎](#fnref-DsSta2MsnyP378ftL-1) |
9e43119f-4bb1-4ab1-908b-61d4625139b2 | StampyAI/alignment-research-dataset/blogs | Blogs | MIRI’s Approach
MIRI’s mission is “to ensure that the creation of smarter-than-human artificial intelligence has a positive impact.” How can we ensure any such thing? It’s a daunting task, especially given that we don’t have any smarter-than-human machines to work with at the moment. In the previous post I discussed four [background claims](https://intelligence.org/2015/07/24/four-background-claims/) that motivate our mission; in this post I will describe our approach to addressing the challenge.
This challenge is sizeable, and we can only tackle a portion of the problem. For this reason, we specialize. Our two biggest specializing assumptions are as follows:
**We focus on scenarios where smarter-than-human machine intelligence is first created in *de novo* software systems (as opposed to, say, brain emulations).**
This is in part because it seems difficult to get all the way to brain emulation before someone reverse-engineers the algorithms used by the brain and uses them in a software system, and in part because we expect that any highly reliable AI system will need to have at least some components built from the ground up for safety and transparency. Nevertheless, it is quite plausible that early superintelligent systems will not be human-designed software, and I strongly endorse research programs that focus on reducing risks along the other pathways.
**We specialize almost entirely in technical research.**
We select our researchers for their proficiency in mathematics and computer science, rather than forecasting expertise or political acumen. I stress that this is only one part of the puzzle: figuring out how to build the right system is useless if the right system does not in fact get built, and ensuring AI has a positive impact is not simply a technical problem. It is also a global coordination problem, in the face of short-term incentives to cut corners. Addressing these non-technical challenges is an important task that we do not focus on.
In short, MIRI does technical research to ensure that *de novo* AI software systems will have a positive impact. We do not further discriminate between different types of AI software systems, nor do we make strong claims about exactly how quickly we expect AI systems to attain superintelligence. Rather, our current approach is to select open problems using the following question:
*What would we still be unable to solve, even if the challenge were far simpler?*
For example, we might study AI alignment problems that we could not solve even if we had lots of computing power and very simple goals.
We then filter on problems that are (1) tractable, in the sense that we can do productive mathematical research on them today; (2) uncrowded, in the sense that the problems are not likely to be addressed during normal capabilities research; and (3) critical, in the sense that they could not be safely delegated to a machine unless we had first solved them ourselves. (Since the goal is to design intelligent machines, there are many technical problems that we can expect to eventually delegate to those machines. But it is difficult to trust an unreliable reasoner with the task of designing reliable reasoning!)
These three filters are usually uncontroversial. The controversial claim here is that the above question — “what would we be unable to solve, even if the challenge were simpler?” — is a generator of open technical problems for which solutions will help us design safer and more reliable AI software in the future, regardless of their architecture. The rest of this post is dedicated to justifying this claim, and describing the reasoning behind it.
#### 1. Creating a powerful AI system without understanding why it works is dangerous.
A large portion of the risk from machine superintelligence comes from the possibility of people building [systems that they do not fully understand](https://intelligence.org/2013/08/25/transparency-in-safety-critical-systems/).
Currently, this is commonplace in practice: many modern AI researchers are pushing the capabilities of deep neural networks in the absence of theoretical foundations that describe why they’re working so well or a solid idea of what goes on beneath the hood. These shortcomings are being addressed over time: many AI researchers are currently working on transparency tools for neural networks, and many more are working to put theoretical foundations beneath deep learning systems. In the interim, using trial and error to push the capabilities of modern AI systems has led to many useful applications.
When designing a superintelligent agent, by contrast, we will want an unusually high level of confidence in its safety *before* we begin online testing: trial and error alone won’t cut it, in that domain.
To illustrate, consider a study by [Bird and Layzell in 2002](http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1004522). They used some simple genetic programming to design an oscillating circuit on a circuit board. One solution that the genetic algorithm found entirely avoided using the built-in capacitors (an essential piece of hardware in human-designed oscillators). Instead, it repurposed the circuit tracks on the motherboard as a radio receiver, and amplified an oscillating signal from a nearby computer.
This demonstrates that powerful search processes can often reach their goals via unanticipated paths. If Bird and Layzell were hoping to use their genetic algorithm to find code for a robust oscillating circuit — one that could be used on many different circuit boards regardless of whether there were other computers present — then they would have been sorely disappointed. Yet if they had tested their algorithms extensively on a virtual circuit board that captured all the features of the circuit board that they *thought* were relevant (but not features such as “circuit tracks can carry radio signals”), then they would not have noticed the potential for failure during testing. If this is a problem when handling simple genetic search algorithms, then it will be a much larger problem when handling smarter-than-human search processes.
When it comes to designing smarter-than-human machine intelligence, extensive testing is essential, but not sufficient: in order to be confident that the system will not find unanticipated bad solutions when running in the real world, it is important to have a solid understanding of how the search process works and why it is expected to generate only satisfactory solutions *in addition* to empirical test data.
MIRI’s research program is aimed at ensuring that we have the tools needed to inspect and analyze smarter-than-human search processes before we deploy them.
By analogy, neural net researchers could probably have gotten quite far without having any formal understanding of probability theory. Without probability theory, however, they would lack the tools needed to understand modern AI algorithms: they wouldn’t know about Bayes nets, they wouldn’t know how to formulate assumptions like “independent and identically distributed,” and they wouldn’t quite know the conditions under which Markov Decision Processes work and fail. They wouldn’t be able to talk about priors, or check for places where the priors are zero (and therefore identify things that their systems cannot learn). They wouldn’t be able to talk about bounds on errors and prove nice theorems about algorithms that find an optimal policy eventually.
They probably could have still gotten pretty far (and developed half-formed ad-hoc replacements for many of these ideas), but without probability theory, I expect they would have a harder time designing highly reliable AI algorithms. Researchers at MIRI tend to believe that similarly large chunks of AI theory are still missing, and *those* are the tools that our research program aims to develop.
#### 2. We could not yet create a beneficial AI system even via brute force.
Imagine you have a Jupiter-sized computer and a very simple goal: Make the universe contain as much diamond as possible. The computer has access to the internet and a number of robotic factories and laboratories, and by “diamond” we mean carbon atoms covalently bound to four other carbon atoms. (Pretend we don’t care how it makes the diamond, or what it has to take apart in order to get the carbon; the goal is to study a simplified problem.) Let’s say that the Jupiter-sized computer is running python. How would you program it to produce lots and lots of diamond?
As it stands, we do not yet know how to program a computer to achieve a goal such as that one.
We couldn’t yet create an artificial general intelligence *by brute force*, and this indicates that there are parts of the problem we don’t yet understand.
There are a number of AI tasks that we *could* brute-force. For example, we could write a program that would be *really, really good* at solving computer vision problems: if we had an indestructible box that produced pictures and questions about them, waited for answers, scored the answers for accuracy, and then repeated the process, then we know how to write the program that interacts with that box and gets very good at answering the questions. (The program would essentially be a bounded version of [AIXI](http://lesswrong.com/lw/jg1/solomonoff_cartesianism/).)
By a similar method, if we had an indestructible box that produced a conversation and questions about it, waited for natural-language answers to the questions, and scored them for accuracy, then again, we could write a program that would get very good at answering well. In this sense, we know how to solve computer vision and natural language processing by brute force. (Of course, natural-language processing is nowhere near “solved” in a practical sense — there is still loads of work to be done. A brute force solution doesn’t get you very far in the real world. The point is that, for many AI alignment problems, we haven’t even made it to the “we could brute force it” level yet.)
Why do we need the indestructible box in the above examples? Because the way the modern brute-force solution would work is by considering each Turing machine (up to some complexity limit) as a hypothesis about the box, seeing which ones are consistent with observation, and then executing actions that lead to high scores coming out of the box (as predicted by the remaining hypotheses, weighted by simplicity).
Each hypothesis is an opaque Turing machine, and the algorithm never peeks inside: it just asks each hypothesis to predict what score the box will output if it executes a certain action chain. This means that if the algorithm finds (via exhaustive search) a plan that *maximizes* the score coming out of the box, and the box is destructible, then the opaque action chain that maximizes score is very likely to be the one that pops the box open and alters it so that it always outputs the highest score. But given an indestructible box, we know how to brute force the answers.
In fact, roughly speaking, we understand how to solve *any* reinforcement learning problem via brute force. This is a far cry from knowing how to *practically* solve reinforcement learning problems! But it does illustrate a difference in kind between two types of problems. We can (imperfectly and heuristically) divide AI problems up as follows:
*There are two types of open problem in AI. One is figuring how to solve in practice problems that we know how to solve in principle. The other is figuring out how to solve in principle problems that we don’t even know how to brute force yet.*
MIRI focuses on problems of the second class.[1](https://intelligence.org/2015/07/27/miris-approach/#footnote_0_11901 "Most of the AI field focuses on problems of the first class. Deep learning, for example, is a very powerful and exciting tool for solving problems that we know how to brute-force, but which were, up until a few years ago, wildly intractable. Class 1 problems tend to be important problems for building more capable AI systems, but lower-priority for ensuring that highly capable systems are aligned with our interests.")
What is hard about brute-forcing a diamond-producing agent? To illustrate, I’ll give a wildly simplified sketch of what an AI program needs to do in order to act productively within a complex environment:
1. Model the world: Take percepts, and use them to refine some internal representation of the world the system is embedded in.
2. Predict the world: Take that world-model, and predict what would happen if the system executed various different plans.
3. Rank outcomes: Rate those possibilities by how good the predicted future is, then execute a plan that leads to a highly-rated outcome.[2](https://intelligence.org/2015/07/27/miris-approach/#footnote_1_11901 "In reality, of course, there aren’t clean separations between these steps. The “prediction” step must be more of a ranking-dependent planning step, to avoid wasting computation predicting outcomes that will obviously be poorly-ranked. The modeling step depends on the prediction step, because which parts of the world-model are refined depends on what the world-model is going to be used for. A realistic agent would need to make use of meta-planning to figure out how to allocate resources between these activities, etc. This diagram is a fine first approximation, though: if a system doesn’t do something like modeling the world, predicting outcomes, and ranking them somewhere along the way, then it will have a hard time steering the future.")
[](http://intelligence.org/wp-content/uploads/2015/07/3-step-AI.png)
Consider the modeling step. As discussed above, we know how to write an algorithm that finds good world-models by brute force: it looks at lots and lots of Turing machines, weighted by simplicity, treats them like they are responsible for its observations, and throws out the ones that are inconsistent with observation thus far. But (aside from being wildly impractical) this yields only *opaque* hypotheses: the system can ask what “sensory bits” each Turing machine outputs, but it cannot peek inside and examine objects represented within.
If there is some well-defined “score” that gets spit out by the opaque Turing machine (as in a reinforcement learning problem), then it doesn’t matter that each hypothesis is a black box; the brute-force algorithm can simply run the black box on lots of inputs and see which results in the highest score. But if the problem is to build lots of diamond in the real world, then the agent must work as follows:
1. Build a model of the world — one that represents carbon atoms and covalent bonds, among other things.
2. Predict how the world would change contingent on different actions the system could execute.
3. Look *inside* each prediction and see which predicted future has the most diamond. Execute the action that leads to more diamond.
In other words, an AI that is built to reliably affect *things in the world*needs to have world-models that are amenable to inspection. The system needs to be able to pop open the world model, identify the representations of carbon atoms and covalent bonds, and estimate how much diamond is in the real world.[3](https://intelligence.org/2015/07/27/miris-approach/#footnote_2_11901 "In reinforcement learning problems, this issue is avoided via a special “reward channel” intended to stand in indirectly for something the supervisor wants. (For example, the supervisor may push a reward button every time the learner takes an action that seems, to the supervisor, to be useful for making diamonds.) Then the programmers can, by hand, single out the reward channel inside the world-model and program the system to execute actions that it predicts lead to high reward. This is much easier than designing world-models in such a way that the system can reliably identify representations of carbon atoms and covalent bonds within it (especially if the world is modeled in terms of Newtonian mechanics one day and quantum mechanics the next), but doesn’t provide a framework for agents that must autonomously learn how to achieve some goal. Correct behavior in highly intelligent systems will not always be reducible to maximizing a reward signal controlled by a significantly less intelligent system (e.g., a human supervisor).")
We don’t yet have a clear picture of how to build “inspectable” world-models — not even by brute force. Imagine trying to write the part of the diamond-making program that builds a world-model: this function needs to take percepts as input and build a data structure that represents the universe, in a way that allows the system to inspect universe-descriptions and estimate the amount of diamond in a possible future. Where in the data structure are the carbon atoms? How does the data structure allow the concept of a “covalent bond” to be formed and labeled, in such a way that it remains accurate even as the world-model stops representing diamond as made of atoms and starts representing them as made of protons, neutrons, and electrons instead?
We need a world-modeling algorithm that builds multi-level representations of the world and allows the system to pursue the same goals (make diamond) even as its model changes drastically (because it discovers quantum mechanics). This is in stark contrast to the existing brute-force solutions that use opaque Turing machines as hypotheses.[4](https://intelligence.org/2015/07/27/miris-approach/#footnote_3_11901 "The idea of a search algorithm that optimizes according to modeled facts about the world rather than just expected percepts may sound basic, but we haven’t found any deep insights (or clever hacks) that allow us to formalize this idea (e.g., as a brute-force algorithm). If we could formalize it, we would likely get a better understanding of the kind of abstract modeling of objects and facts that is required for self-referential, logically uncertain, programmer-inspectable reasoning.")
When *humans* reason about the universe, we seem to do some sort of reasoning outwards from the middle: we start by modeling things like people and rocks, and eventually realize that these are made of atoms, which are made of protons and neutrons and electrons, which are perturbations in quantum fields. At no point are we certain that the lowest level in our model is the lowest level in reality; as we continue thinking about the world we *construct* new hypotheses to explain oddities in our models. What sort of data structure are we using, there? How do we add levels to a world model given new insights? This is the sort of reasoning algorithm that we do not yet understand how to formalize.[5](https://intelligence.org/2015/07/27/miris-approach/#footnote_4_11901 "We also suspect that a brute-force algorithm for building multi-level world models would be much more amenable to being “scaled down” than Solomonoff induction, and would therefore lend some insight into how to build multi-level world models in a practical setting.")
That’s step *one* in brute-forcing an AI that reliably pursues a simple goal. We also don’t know how to brute-force steps two or three yet. By simplifying the problem — talking about diamonds, for example, rather than more realistic goals that raise a host of other difficulties — we’re able to factor out the parts of the problems that we don’t understand how to solve yet, even in principle. Our [technical agenda](https://intelligence.org/files/TechnicalAgenda.pdf) describes a number of open problems identified using this method.
#### 3. Figuring out how to solve a problem in principle yields many benefits.
In 1836, Edgar Allen Poe wrote a [wonderful essay](http://www.eapoe.org/works/essays/maelzel.htm) on Maelzel’s Mechanical Turk, a machine that was purported to be able to play chess. In the essay, Poe argues that the Mechanical Turk must be a hoax: he begins by arguing that machines cannot play chess, and proceeds to explain (using his knowledge of stagecraft) how a person could be hidden within the machine. Poe’s essay is remarkably sophisticated, and a fun read: he makes reference to the “calculating machine of Mr. Babbage” and argues that it cannot possibly be made to play chess, because in a calculating machine, each steps follows from the previous step by necessity, whereas “no one move in chess necessarily follows upon any one other”.
The Mechnical Turk indeed turned out to be a hoax. In 1950, however, Claude Shannon published a rather compelling counterargument to Poe’s reasoning in the form of a paper [explaining how to program a computer to play perfect chess](http://vision.unipv.it/IA1/ProgrammingaComputerforPlayingChess.pdf).
Shannon’s algorithm was by no means the end of the conversation. It took forty-six years to go from that paper to Deep Blue, a practical chess program which beat the human world champion. Nevertheless, if you were equipped with Poe’s state of knowledge and not yet sure whether it was *possible* for a computer to play chess — because you did not yet understand algorithms for constructing game trees and doing backtracking search — then you would probably not be ready to start writing practical chess programs.
Similarly, if you lacked the tools of probability theory — an understanding of Bayesian inference and the limitations that stem from bad priors — then you probably wouldn’t be ready to program an AI system that needed to manage uncertainty in high-stakes situations.
If you are trying to write a program and you can’t yet say how you would write it given an arbitrarily large computer, then you probably aren’t yet ready to design a practical approximation of the brute-force solution yet. Practical chess programs can’t generate a full search tree, and so rely heavily on heuristics and approximations; but if you can’t brute-force the answer yet given *arbitrary* amounts of computing power, then it’s likely that you’re missing some important conceptual tools.
Marcus Hutter (inventor of AIXI) and Shane Legg (inventor of the [Universal Measure of Intelligence](http://www.vetta.org/documents/42.pdf)) seem to endorse this approach. Their work can be interpreted as a description of how to find a brute-force solution to any reinforcement learning problem, and indeed, the above description of how to do this is due to Legg and Hutter.
In fact, the founders of Google DeepMind reference the completion of Shane’s thesis as one of four key indicators that the time was ripe to begin working on AGI: a theoretical framework describing how to solve reinforcement learning problems *in principle* demonstrated that modern understanding of the problem had matured to the point where it was time for the practical work to begin.
Before we gain a formal understanding of the problem, we can’t be quite sure what the problem *is*. We may fail to notice holes in our reasoning; we may fail to bring the appropriate tools to bear; we may not be able to tell when we’re making progress. After we gain a formal understanding of the problem in principle, we’ll be in a better position to make practical progress.
The point of developing a formal understanding of a problem is not to *run* the resulting algorithms. Deep Blue did not work by computing a full game tree, and DeepMind is not trying to implement AIXI. Rather, the point is to identify and develop the basic concepts and methods that are useful for solving the problem (such as game trees and backtracking search algorithms, in the case of chess).
The development of probability theory has been quite useful to the field of AI — not because anyone goes out and attempts to build a perfect Bayesian reasoner, but because probability theory is the unifying theory for reasoning under uncertainty. This makes the tools of probability theory useful for AI designs that vary in any number of implementation details: any time you build an algorithm that attempts to manage uncertainty, a solid understanding of probabilistic inference is helpful when reasoning about the domain in which the system will succeed and the conditions under which it could fail.
This is why we think we can identify open problems that we can work on today, and which will reliably be useful no matter how the generally intelligent machines of the future are designed (or how long it takes to get there). By seeking out problems that we couldn’t solve even if the problem were much easier, we hope to identify places where core AGI algorithms are missing. By developing a formal understanding of how to address those problems in principle, we aim to ensure that when it comes time to address those problems in practice, programmers have the knowledge they need to develop solutions that they deeply understand, and the tools they need to ensure that the systems they build are highly reliable.
#### 4. This is an approach researchers have used successfully in the past.
Our main open-problem generator — “what would we be unable to solve even if the problem were easier?” — is actually a fairly common one used across mathematics and computer science. It’s more easy to recognize if we rephrase it slightly: “can we reduce the problem of building a beneficial AI to some other, simpler problem?”
For example, instead of asking whether you can program a Jupiter-sized computer to produce diamonds, you could rephrase this as a question about whether we can reduce the diamond maximization problem to known reasoning and planning procedures. (The current answer is “not yet.”)
This is a fairly standard practice in computer science, where reducing one problem to another is a [key feature of computability theory](https://en.wikipedia.org/wiki/Reduction_(complexity)). In mathematics it is common to achieve a proof by reducing one problem to another (see, for instance, the famous case of [Fermat’s last theorem](http://mathworld.wolfram.com/FermatsLastTheorem.html)). This helps one focus on the parts of the problem that *aren’t* solved, and identify topics where foundational understanding is lacking.
As it happens, humans have a pretty good track record when it comes to working on problems such as these. Humanity hasn’t been very good at predicting long-term technological trends, but we have reasonable success developing theoretical foundations for technical problems decades in advance, when we put sufficient effort into it. Alan Turing and Alonzo Church succeeded in developing a robust theory of computation that proved quite useful once computers were developed, in large part by figuring out how to solve (in principle) problems which they did not yet know how to solve with machines. Andrey Kolmogorov, similarly, set out to formalize intuitive but not-yet-well-understood methods for managing uncertainty; and he succeeded. And Claude Shannon and his contemporaries succeeded at this endeavor in the case of chess.
The development of probability theory is a particularly good analogy to our case: it is a field where, for hundreds of years, philosophers and mathematicians who attempted to formalize their intuitive notions of “uncertainty” repeatedly reasoned themselves into paradoxes and contradictions. The probability theory at the time, sorely lacking formal foundations, was dubbed a “theory of misfortune.” Nevertheless, a concerted effort by Kolmogorov and others to formalize the theory was successful, and his efforts inspired the development of a host of useful tools for designing systems that reason reliably under uncertainty.
Many people who set out to put foundations under a new field of study (that was intuitively understood on some level but not yet formalized) have succeeded, and their successes have been practically significant. We aim to do something similar for a number of open problems pertaining to the design of highly reliable reasoners.
The questions MIRI focuses on, such as “how would one ideally handle logical uncertainty?” or “how would one ideally build multi-level world models of a complex environment?”, exist at a level of generality comparable to Kolmogorov’s “how would one ideally handle empirical uncertainty?” or Hutter’s “how would one ideally maximize reward in an arbitrarily complex environment?” The historical track record suggests that these are the kinds of problems that it is possible to both (a) see coming in advance, and (b) work on without access to a concrete practical implementation of a general intelligence.
By identifying parts of the problem that we would still be unable to solve even if the problem was easier, we hope to hone in on parts of the problem where core algorithms and insights are missing: algorithms and insights that will be useful no matter what architecture early intelligent machines take on, and no matter how long it takes to create smarter-than-human machine intelligence.
At present, there are only three people on our research team, and this limits the number of problems that we can tackle ourselves. But our approach is one that we can scale up dramatically: it has generated a very large number of open problems, and we have no shortage of questions to study.[6](https://intelligence.org/2015/07/27/miris-approach/#footnote_5_11901 "For example, instead of asking what problems remain when given lots of computing power, you could instead ask whether we can reduce the problem of building an aligned AI to the problem of making reliable predictions about human behavior: an approach advocated by others.")
This is an approach that has often worked well in the past for humans trying to understand how to approach a new field of study, and I am confident that this approach is pointing us towards some of the core hurdles in this young field of AI alignment.
---
1. Most of the AI field focuses on problems of the first class. Deep learning, for example, is a very powerful and exciting tool for solving problems that we know how to brute-force, but which were, up until a few years ago, wildly intractable. Class 1 problems tend to be important problems for building more capable AI systems, but lower-priority for ensuring that highly capable systems are aligned with our interests.
2. In reality, of course, there aren’t clean separations between these steps. The “prediction” step must be more of a ranking-dependent planning step, to avoid wasting computation predicting outcomes that will obviously be poorly-ranked. The modeling step depends on the prediction step, because which parts of the world-model are refined depends on what the world-model is going to be used for. A realistic agent would need to make use of meta-planning to figure out how to allocate resources between these activities, etc. This diagram is a fine first approximation, though: if a system doesn’t do something like modeling the world, predicting outcomes, and ranking them somewhere along the way, then it will have a hard time steering the future.
3. In reinforcement learning problems, this issue is avoided via a special “reward channel” intended to stand in indirectly for something the supervisor wants. (For example, the supervisor may push a reward button every time the learner takes an action that seems, to the supervisor, to be useful for making diamonds.) Then the programmers can, by hand, single out the reward channel inside the world-model and program the system to execute actions that it predicts lead to high reward. This is much easier than designing world-models in such a way that the system can reliably identify representations of carbon atoms and covalent bonds within it (especially if the world is modeled in terms of Newtonian mechanics one day and quantum mechanics the next), but doesn’t provide a framework for agents that must autonomously learn how to achieve some goal. Correct behavior in highly intelligent systems will not always be reducible to maximizing a reward signal controlled by a significantly less intelligent system (e.g., a human supervisor).
4. The idea of a search algorithm that optimizes according to modeled *facts about the world* rather than just *expected percepts* may sound basic, but we haven’t found any deep insights (or clever hacks) that allow us to formalize this idea (e.g., as a brute-force algorithm). If we could formalize it, we would likely get a better understanding of the kind of abstract modeling of objects and facts that is required for [self-referential, logically uncertain, programmer-inspectable reasoning](https://intelligence.org/technical-agenda/).
5. We also suspect that a brute-force algorithm for building multi-level world models would be much more amenable to being “scaled down” than Solomonoff induction, and would therefore lend some insight into how to build multi-level world models in a practical setting.
6. For example, instead of asking what problems remain when given lots of computing power, you could instead ask whether we can reduce the problem of building an aligned AI to the problem of making reliable predictions about human behavior: an approach [advocated by others](https://medium.com/ai-control/model-free-decisions-6e6609f5d99e).
The post [MIRI’s Approach](https://intelligence.org/2015/07/27/miris-approach/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
c3d75d10-9fe3-4beb-b4e8-325b86093677 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | What to do with imitation humans, other than asking them what the right thing to do is?
This question is about whether you have clever ideas about how to use AI imitations of humans for AI safety. The two main ideas I'm familiar with only seem to interface with these imitations as if they're humans.
* The most obvious thing one might do with a good predictor of a human is just to write software that queries the imitation human about what the right thing to do is, and then does it.
* The less obvious thing to do is to try and amplify it - e.g. use teams of them working together to try to choose good actions. Or maybe even an IDA loop - use your learner that learned to imitate a human, and train it to imitate the teams working together. Then make teams of teams, etc.
But can we use human imitations to increase the effectiveness of value learning in a way other than amplification/distillation? For example, is there some way of leveraging queries to human imitations to train a non-human AI that has a human-understandable way of thinking about the world?
Keep in mind the challenge that these are only imitation humans, not oracles for the best thing to do, and not even actual humans. So we can't give them problems that are too weird, or heavily optimized by interaction with the imitation humans, because they'll go off-distribution.
Another possible avenue is ways to "look inside" the imitation humans. One analogy would be how if you have an image-generating GAN, you can increase the number of trees in your image by finding the parameters associated with trees and then turning them up. Can you do the same thing with human-imitating GAN, but turning up "act morally" or "be smart?" |
8978a0ee-d4f2-4e79-af35-fa1f4109a9a0 | trentmkelly/LessWrong-43k | LessWrong | [Cosmology Talks] New Probability Axioms Could Fix Cosmology's Multiverse (Partially) - Sylvia Wenmackers
> Sylvia is a philosopher of science. Her focus is probability and she has worked on a few theories that aim to extend and modify the standard axioms of probability in order to tackle paradoxes related to infinite spaces. In particular there is a paradox of the "infinite fair lottery" where within standard probability it seems impossible to write down a "fair" probability function on the integers. If you give the integers any non-zero probability, the total probability of all integers is unbounded, so the function is not normalisable. If you give the integers zero probability, the total probability of all integers is also zero. No other option seems viable for a fair distribution.
> This paradox arises in a number of places within cosmology, especially in the context of eternal inflation and a possible multiverse of big bangs bubbling off. If every bubble is to be treated fairly, and there will ultimately be an unbounded number of them, how do we assign probability?
> The proposed solutions involve hyper-real numbers, such as infinitesimals and infinities with different relative sizes, (reflecting how quickly things converge or diverge respectively).
> The multiverse has other problems, and other areas of cosmology where this issue arises also have their own problems (e.g. the initial conditions of inflation); however this could very well be part of the way towards fixing the cosmological multiverse.
> Sylvia: https://www.sylviawenmackers.be/ Paper: https://arxiv.org/abs/2308.12229
By the way, Shaun is beloved in the local EA community, and Sylvia's work has been cited around these parts more than once in discussions surrounding UDT.
Multiverse measure assignment is interesting, along with the anthropic binding problem, it's a necessary part of getting an indexical prior. While anthropic measure gives you comparisons between observers within universes, multiverse/time cosmological measure gives you a comparison between the universes. |
d7767b40-0899-47a7-bd3c-9e728918f6ee | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | DeepMind x UCL RL Lecture Series - Deep Reinforcement Learning #2 [13/13]
welcome back to the second part of our
introduction to the parallel in the
first section we started by discussing
how deep neural networks can be used for
function approximation in rl and how
automatic differentiation specifically
can support doing so with relative ease
and then we delved into
the issues that arise when doing so when
using deep learning for function
approximation so for instance we
discussed how the different choices that
we make on the rail side affect the
learning dynamics of approximate value
functions through phenomena such like
the deadly triad or
literature propagation
several of the issues that we discussed
though were ultimately issues of
inappropriate generalization so today i
want to talk about some of the ideas
that can be used to help with this
problem by tackling directly the
fundamental problem of representation
learning in rl
i want to stress that this is far from a
solved problem so what i will discuss
today is
you can see it as a partial snapshot of
recent research on this challenging
issue but not an ultimate answer
but the main insight that underlies many
of the things that we will discuss today
i think is quite important and is that
so far our agents
optimized the representation for a very
narrow objective purely the prediction
or maximization of a single scalar
reward and this narrow objective is the
only thing that is driving the entire
representation learning in our d
parallel agents
and this has some advantages so the
ability of building
flexible rich representations that are
kind of tailored to a specific task at
hand is after all the main reason we use
deep learning in the first place but it
does come with some disadvantages
because
such a narrow objective can induce an
overly specific overfitted state
representation that might not support
good generalization and this in turn can
make agents even more susceptible to
issues like the deadly triad
if you agree with this premise then
maybe the natural step is to ask our
agents
uh to learn about more than just a
single task reward have them strive to
build the richer knowledge about the
world
but of course this is you know in a
sense it's easier said than done because
we in order to do so we need to think of
you know what other knowledge should
they learn about and there are many many
possible choices
and since representation learning is not
a problem that is exclusively url we can
also tap into supervised learning
literature for some inspiration
but among the many possible ideas that
can help build these representation i
want to focus on two families of ideas
that have attracted very very a lot of
interest
among our elder researchers in the past
years and these are general value
functions and distributional value
predictions
so let's start with the first general
value functions
if you recall from the beginning of this
course rl is based on the so-called
reward hypothesis that states that any
goal can be represented as the
maximization of a suitable scalar award
and this hypothesis was originally
discussed to argue that
maybe arel as a whole is a sufficiently
formalism for intelligent gold-oriented
behavior but today i want to use this
hypothesis to make a different point and
argue that if this is the case maybe
then all useful knowledge that agents
should or could collect in order to
support learning can also take the form
of predictions about suitable cumulative
scalar signals
but importantly this predictive
knowledge does not need to refer to the
single main task circle reward instead
the agent could make predictions about
many different other scalar quantities
and these predictions would still look
very much like value functions but it
would refer to a different scalar signal
and are therefore typically called
general value functions
the general value function is
in a sense very much like a standard
value function in that it is a
prediction about the expected cumulative
discounted sum of a suitable scalar
signal under a given policy
but in general many functions we make
explicit this dependency on the scalar
signal c and discount factor gamma and
the policy pi because we are open to
make different choices for each of these
so in the gvf language the scalar signal
c that we choose to predict will be
called the cumulant
while the discount gamma associated with
etf we'll still define a horizon for the
predictions that we make about the death
cumulant and the target policy pi is
going to be an arbitrary behavior under
which we compute expectations this will
not necessarily be the agent policy
of course this
may still feel a bit abstract so let's
be even more concrete let's let's
consider some examples so
if the cumulant
is the main task reward and we compute
the expected discounted cumulative sum
of this signal under the agent policy
and under the age and discount then the
gdf prediction problem just reduces to
canonical policy elevation
that is the problem we have dedicated so
much space in previous lectures
but if instead for instance the
cumulative is the still the main task
reward but and we still predict an under
agent policy but with this kind of zero
well then this becomes an immediate
reward prediction problem
and we are not supposed to have only one
prediction we're making so we could have
n equivalents each corresponding for
instance to one of the state variables
or one of the features
and if we predict these under the agent
policy with a discount of zero then this
becomes the next state prediction
problem
and of course we can you know we can go
on and on we can consider many different
cumulatives many different horizons many
different hypothetical behaviors on
which to predict these cumulatives and
the beautiful thing about the gbf
frameworks is that it allows us to just
represent this very rich knowledge about
the world
but learning all of these predictions
with the same mechanism so for instance
we could use the same td algorithms that
we reserved to
to predict make value predictions for
the main task reward to predict any of
these
and the main problem then becomes not
how to learn such knowledge which
sometimes we can address with the
standard rl but how do we use these
predictions this rich knowledge about
the world to provide our agents with
good representations that can support
fast learning effective generalization
and all the things that we are after
one one beautifully simple approach
would be to use the gbf's predictions
directly as representations and this is
what is called predictive state
representation or psr
and it's based on the argument that for
a
large sufficiently large and specially
diverse set of gbf predictions these
will be sufficient statistics for any
other predictions that we might want to
make including for instance the value
predictions for the main task award
therefore we can use the gdf predictions
themselves as state features and then
learn values or policies uh for the main
task as linear functions of these
predictions
and this actually has a number of
appealing properties and so i would
encourage you to read the paper linked
at the top
to learn more about this but it's also
not the only way of using gbf
predictions
another option for how to use gdfs for
learning state representations is to use
them as auxiliary tasks
so this use of gps resembles a number of
techniques from supervised learning um
where forms of auxiliary tasks have been
introduced to help with representation
learning so i think for instance to
the self-supervised learning objectives
that are common in computer vision and
this approach has the advantage of being
especially well suited i think to be rl
agents because of the because the
compositional nature of neural networks
allows to combine auxiliary predictions
with the main task predictions with
relative ease
when we use gbfs as auxiliary tasks the
way we typically do so is by sharing the
bottom part of a neural network that
we're using as function approximator
between the main task prediction so for
instance a policy prediction or a value
prediction and all of the auxiliary gpf
predictions
and by doing so what what happens is
that both the main tasks and auxiliary
predictions become a function of a
single shared hidden representation
and then both the shared and unshared
parameters can be optimized by
minimizing jointly the losses associated
with both types of predictions
and the result is that the hidden share
representation is forced to become more
robust and more general and encode more
about the world
so this this specific way of using gbfs
as auxiliary tasks was for instance
implemented in the unreal agent it was
introduced by jadabarg and a few others
a few years ago
so in
unreal a neural network is used to map
input observations to both a value
prediction and a policy prediction
because it's based on a
on a fairly standard act operating
system but additionally a 400 distinct
cumulants are constructed
as the average change in intensity of
pixels between consecutive observations
and then an additional head is connected
to the hidden representation from which
we make value and policy predictions
up just after the the end of the
convolutional stack and this auxiliary
head is then trained to predict gdfs for
each of these 400 humans
and these auxiliary losses are then
referred to as pixel control losses and
are just summed to the standard policy
and value losses and everything is
optimized end-to-end
the same kind of system can also be
applied if we if the observations are
not images or if the
observations themselves were too big to
constitute to use them directly to
construct this this large number of
cumulatives so for instance in the same
paper they introduce another related gpf
based auxiliary task which is what they
called feature control again it works by
constructing a large number of
cumulatives but these are computed as
the differences between the activations
in the network itself between
consecutive steps instead of being
differences in the intensity of pixels
but
similar to pixel control once we have
this large set of cumulatives however we
derived them we then
they they just learn the gbfs associated
to each of these kilometers by having an
auxiliary prediction head again that
shares the bottom convolutional stack
with the main task policy and the main
task values but then is
is forced to also support this
additional auxiliary tasks and therefore
learn a richer more
effective representation basically
it may seem as a small change but
actually making these gbf predictions
either in the pixel control type or the
feature control type did make a real
difference so for instance in this plot
taken from the unreal paper you can see
how using these auxiliary predictions so
the blue line labeled unreal in this
plot delivered a huge improvement in the
raw performance of an actor critic
system on a suite of challenging
navigation tasks
and this is despite these auxiliary
predictions only affecting the rest of
the system through the improvement in
representational learning so for
instance we're not using them
to implement
pcr for instance or for anything else
except
propagating gradients and updates into
this shared representation
and it might seem surprising at first
that how can make all this disturbance
difference after all the cumulatives
that we are predicting uh these gps for
don't seem to encode a particularly
useful type of knowledge it may actually
seem somewhat arbitrary and contrived
and particularly in the pixel control
setting
and
and even worse we're not actually making
use of the resulting predictions in any
way but but if you consider for instance
pixel control making these predictions
regarding the variation in intensity of
pixels between consecutive observations
actually requires the agent to
understand many non-trivial aspects
about the environment
for instance how the agent actions
affect the location of objects in the
field of view of the agent
so even though they may seem you know a
bit contrived that these representation
actually
are forced their representation to
encode a lot of useful knowledge and and
that is why they're likely to then make
their implementation more useful even
for other tasks
and this i want to stress is
particularly important and particularly
particularly useful in settings where
maybe the main task reward itself is
actually sparse and would therefore
provide very little signal at least
early in training
while in contrast these auxiliary
cumulatives that were constructed are
different features or from observations
can provide a very dense learning signal
and then it can help kind of bootstrap
representation learning even before the
agent has had the chance to see the
first reward for instance and then when
it does actually see these rewards it
can pick up on these more quickly and
then learn much more effectively
to understand though why using gbfs as
your predictions
is actually useful it's it's maybe worth
thinking uh what happens to the feature
representation to try to understand what
is the actual effect of this
so
let's start with the linear function
approximation case so this is the plot
on the left and here we see that what
happens in our unfortunate learning
updates or to some value functions
when we're doing linear function
approximation is that we have some fixed
feature representation and we construct
a target value function using a suitable
operator for instance a one step
temporal difference operator and then
this target value is projected onto the
space that you can actually represent
under the fixed speaker representation
and the parameters are updated
accordingly in deep reform learning this
is the plot in the middle we have a more
complex phenomenon again we construct
some sort of target value function using
a suitable operator
but then we project
on the space of value that we can
represent not under the original feature
representation but a new one that is
updated to habit support as well as
possible this new value target
so we have both we're both changing the
the final value predictions
but also we're changing the
representation itself to support these
value predictions
so what happens when we add auxiliary
gbf predictions
like the ones that we discussed with
pixel control or feature control but
what happens is that we're regularizing
the second step
so we are preventing the representation
from becoming overly specific to the
current value predictions
and what we find at least empirically is
that this regularization does seem to
help quite a lot
but by itself this interpretation while
it helps understand what happens when
when we use gvs as a clear task in a way
it maybe raises a bit more questions
than answers because after all isn't it
desirable for the representation to be
updated to support
the value predictions as well as
possible so why should regularizing the
representation help in the first place
and if it does
which gdfs would provide the best
regularization
so let's try to answer these uh one at a
time so the first thing to keep in mind
to understand why using gbfs to
regularize representation is useful in
the first place is that over the course
of learning we will actually need to
approximate many value functions
this is because if we're doing our job
properly in rl the agents will get
better over time therefore both our data
distribution and our value predictions
even in the same states that we have
seen in the past change as the agent's
behavior changes
and this means that we want a
representation that can support not just
good value predictions right now but it
can support approximating all all of the
value functions in a way that
that on this path in in value space that
goes from the initial values of the
initial policy all the way to the values
of the optimal policy
regulation can help us achieve this by
preventing the representation from
overfitting
this
where
about
as useful as effective
so consider the space
will be depicted on this slide
that corresponds to the values of all
the various policies that agents pass
through over the course of training so
[Music]
so to understand
how the different choices of gps will
affect us
will will affect the learning and it
will
make the representation more or less
effective we need to look at how the
choices of the target policies and
humans affects the representation
and the and how this
interacts with all of the elements that
we just defined so starting from left to
the first step
here representation learning is only
driven by accurately predicting the
current value
so in this case there is nothing to
force the representation to be well
aligned with any other function except
for a parameter
so here we have a vector
but these correspond to humans that are
different from the main parts of the
world
this means that their value actually
lives outside
again also in this case there's actually
no strong reason to believe that
regularization will
[Music]
for instance
and these actually correspond exactly to
this second plot
but it's good to realize there is a
stronger report it might be sometimes
higher
the third platform captures instead of
the case where we use gdf absolutely
to predict
the main tax reward over a fifteen set
of different target policies that are
different foundations
so now the representation is forced
to support a range of values within the
polytope and given the geometric
structure of this space of value
function it actually can be shown that
for a suitable set and choice of
policies the news representation will
capture the principal components of the
value polytop and therefore we provide
good supports to approximating values in
the polysome and including the ones in
the valley improvement path but
unfortunately the exact solution to
construct the right set of policy is
computationally interactable
so in the final plot we show a concrete
approach to picking these policies in a
way that is instead tractable and the
intuition here is that
the actual value improvement path so the
set of values that we will care to
predict during the course of learning is
actually much smaller than whole
polytope of all the possible value
functions for all possible policies
so
maybe we should just target the values
on this path
and
at each point in time
while the future policies on the path
are known we do have already at least
passed through
some sequence of policies and associated
values during training this is
basically a sequence of policies and
values that we have been predicting so
far
so rather than picking the policies
arbitrarily we could take advantage of
the trajectory
to pick
a selective of a selection of policies
as least aligned with the value
improvement path up to the current
moment and by picking these uh past
policies as
the the policies to use as targets in
our auxiliary gps then we don't
guarantee that these policies will
induce a representation that optimally
supports future values but at least it
must support well
um the values on a subset of the value
improvement path and it provides us both
an informed choice that is at least
reasonable and
in a choice that is computational
attractable because we have access to
these because we went through these
policies during training
and indeed this choice of gdf's
auxiliary tasks was actually found to
perform the best among all the choices
that we discussed in a recent empirical
study
learning about multiple gps as an
auxiliary task is basically turning
agent learning into a multitask problem
because we're now jointly training a
shared representation to support many
predictions that we can see as different
tasks
this is great for all the reasons that
we discussed so far but you can also
introduce a few challenges
so
when we want to our agents to learn as
much as possible about the world and
make all of these additional predictions
we we need to face the fact that we only
have limited resources so we have
limited memory we have limited
representation capacity computation and
so on
so different tasks will always find
themselves competing with each other for
these shared resources so any concrete
system will actually need to define some
way of trading off these competing
domains
and
the important thing to realize is that
there is always a trade-off so even if
you don't make it explicit even if you
don't do anything
fancy about it
then the system will make some trade-off
so for instance the magnitude of the
predictions and the induced gradients
for for different tasks will be
different for different gbf predictions
and this magnitude will scale linearly
with the frequency and the size of the
individual cumulant so it will be quite
different across predictions
this means that the updates from the
different predictions will basically
be re-weighted accordingly
in terms of how much they contribute to
the to shaping the shared parameters
so if we actually want these trade-offs
to be sensible we need to think about
them because otherwise we'll just be
making some trade-offs but these might
not be the trade-offs that we actually
want for our agents
understand how
important and also how difficult the
task is
and how much the magnitude of the
gradient can actually differ when making
different types of predictions i think
it's good to consider the graph in this
slide so these three plots were
generated by showing the gradient norms
during training of a value-based agent
on different atari games for three
different types of agents
so the different atari games here
constitute different tasks that you
might make value predictions for
and in all the three plots the lines
correspond to different percentiles of
the magnitude of the gradient norm
so this means that the width of these
distribution gives you an idea of how
diverse gradient magnets can be across
different tasks of different predictions
in this case the values of predicting
the values in different entire games
and what you see is that on the on the
left and this is vanilla q learning the
magnitude of the gradients actually
spans eight orders of magnitude
depending on which task you're in
with basically gradient norms ranging
from ten to the minus two to greater
norms in the order of the millions
and in the second plot we show what
happens if the individual rewards are
clipped to a small minus one to plus one
range and then again vanilla q learning
is applied so this reduces the range but
is it is important to see that the grain
norms actually still span almost four
orders of magnitude and this is because
it's not just the size of the individual
cumulants that you're predicting in a
different task that counts even if the
individual rewards are of a similar size
the frequency of these rewards will be
different between tasks and the gradient
magnitude actually scales with the value
magnesium not the magnesium individual
reward
and furthermore if we look at even at
the individual tasks um the grading
magnitude actually changes during the
course of training because they as the
agent's behavior changes and the number
and the size of rewards that the agent
collects changes so does the magnitude
of the updates
and this is already a problem if you're
training our leap rl agents on
individual tasks because you can imagine
how hard for instance it can be to tune
hyper parameters so the learning
dynamics can be so different across
tasks
but it also means that any naive
multitask prediction problem
such as predicting auxiliary gps
will
will be really hard to get right unless
you do something to con to control how
you're trading off the demands from
different tasks
because ideally what we would want is
that across the different tasks that
we're making gradients look like in the
third plot on the slides the green plot
here you can see that across all of
these prediction tests the gradient
magnets are actually confined within a
reasonably small range and this means
that we can then be explicit since we
can assume that the gradients themselves
have a similar magnitude then we can
choose explicitly how we trade off
between the different tasks we could
just assign an equal weight in which
case given that the gradients have been
equals an equal magnitude then they
would equally shape the representation
or we can choose for instance to
you know put a bigger weight on some
tasks which we consider our main task
and treat the others as auxiliary tests
and maybe
contributes to shaping their
presentation but with a smaller weight
the problem is how do we get there how
do we get to the point where our
gradient updates have a comparable
magnitude across all the many different
predictions and all the many different
tasks that we their agent could be
trained to to to learn
so
the way we get there is by using what is
called an algorithm and we're using an
algorithm that is called pop art which
so that those plots are actually
generated by running a vanilla learning
algorithm but with pop art on top
so but before i delve into
exactly how this algorithm works um it's
good to discuss another thing which is
if the issues can be so dramatic when
training the parallel systems to make
different predictions why isn't it
usually discussed in supervised learning
because it also in supervised learning
we sometimes use a multitask system and
the reason is that in supervised
learning we typically assume fixed data
sets
and this means that we can easily
normalize both inputs and targets across
the entire dataset for any number of
target variables that we want to predict
and everything will be always be well
behaved
and this is actually what we do in
supervised learning we just don't even
think much about it but we where we
normalize variables because before
feeding it into a deep learning system
because it's such a trivial
preprocessing that it doesn't require
much thought but the problem is that in
reinforcement learning we do not have
access to a full dataset and the scale
of prediction is even known stationary
so it even changes over time
which means that any normalization
scheme will need to be adaptive to ch
and
to to always normalize appropriately
across the duration of training
and this is a much more complicated
system and if that requires to actually
think deeply about what we're doing
luckily this problem was already
addressed so for instance
there are a few different ideas that we
propose in the literature but one that i
want to discuss today is is the pop art
algorithm from the plot in a few slides
ago so this was introduced by hado and
me a few years ago and the algorithm
works in two steps
so the first step is what is called
adaptive target normalization so
consider any one prediction that you're
making so for instance one of the gvs
on each update you will typically
observe some targets for that prediction
could be for instance a q learning
target which you construct you can
construct for whatever cumulative you're
learning a gbf for
then what you
what you can do with pop art is to
normalize this target adaptively by
keeping track of the first moment mu and
the second moment new of the targets for
that prediction so for instance by doing
some exponential moving average of the
targets associated to one of the gps
and then you can update the network
outputs to match not the original target
but on each step use a normalized target
that is constructed from the target for
instance a q learning target by
subtracting the first moment and
dividing by the variance sigma which is
estimated from the first and second
moment by just subtracting the square of
mu from nu
and this will basically provide you a
gradient update that is much better
behaved in terms of magnitudes
irrespectively of what is the size of
the rewards what is the frequency of the
rewards and so on and this means that if
you apply this normalization
independently to each gdf the gradients
that apply that you will apply to the
shared parameters of the of the network
will contribute equally to the shared
representation instead of having what
one or more of the of the auxiliary
predictions uh dominates the entire
learning process
importantly when doing this kind of
normalization you can still recover the
unnormalized q values by just
multiplying the network outputs which
are trained in this normalized space by
these statistics sigma and mu
and this is important because you
actually need the unnormalized values in
certain circumstances for instance to
construct the targets via bootstrapping
the problem with the adaptive
normalization as we just discussed it
here is that
every time you update the normalization
statistics which we typically do on each
step because we're just keeping try a
moving average you're actually
normalizing the update in the current
state but you're inadvertently changing
it on normalized agent predictions in
all other states and this doesn't seem
good
because there is no reason for
indiscriminately changing the value of
all other totally unrelated states
and also
it's not seen
not only seems you know a bit fishy but
it's also completely ad for instance
non-stationary which we we know can make
life harder for our prediction
algorithms
but luckily we can actually prevent this
from happening at all with a very simple
trick which i'll i'll discuss in this
slide this is a based on the on the
observation that most neural networks
typically have a final fully connected
or otherwise linear layer at the very
end so you can effectively write the
network output as a linear transform of
the activations of the last hidden layer
so the normalized q values will
typically be some matrix w times v
plus a bytes vector b for a suitable wnb
and a suitable hidden representation v
which
in general will include any number of
non-linear layers for instance some
convolution layers with some reload
activations and so on
the insight from pop art is that every
time you change normalization statistics
you can actually undo this change in
terms of the unnormalized predictions by
making it the reverse in a reverse
update to the weights and biases of the
last layer and this can actually be done
with a very simple formally formula in
an exact way
the way the way popar does it is by
multiplying the weight matrix by the
ratio between the old and the new scale
factor and
updating the bias as well with this
slightly more complicated expression we
just showed on this slide
if you do this then we get the best of
both worlds because on each step we can
still normalize the targets in our
gradient updates as in the previous
slides but the unnormalized predictions
that we use for instance for
bootstrapping are not affected by this
continuous change of this normalization
statistics and this prevents any
instabilities
this merge has been actually very
successful in the past so for instance
in this plot
we you can see what happens if you train
a single agent to make value and policy
predictions for 57 different atari games
with all these predictions sharing the
bottom layer of the network
the version with pop art is the one
shown in orange you can see how it
performs
really much better than any naive
baseline that does not normalize the
updates for the different tasks so the
orange line actually gets to above human
performance in aggregate acro across the
57 games while the other baselines
actually struggle to reach even 60
defense 60 of human performance
but
the well this this plot shows
specifically for the case of tyree it's
important to notice that the approach is
in no way specific to atari or the
specific multitask setting and can be
used whenever you want multiple
predictions to use a shared
representation but you want to trade off
in in a sensible way their relative
contributions as for instance in a gbf
based auxiliary task scenario that we
discussed in the previous slide
in this section i want to discuss a few
more advanced topics in gdf learning
that we don't quite know yet how to
tackle in order for
still a very active arab research
and i won't go into much detail of the
specific solutions as the specific
methods used to address these issues
will likely change as our understanding
of these problems improves instead i
will give you a brief overview of what
these problems are and just this neat
peak of what the frontier of research
looks like in this
area the first important topic that we
need to make progress on to really scale
these ideas up is of policy learning so
in something we discussed so far
we already have multiple auxiliary gpfs
that are only used as auxiliary
and are learned from prediction
experience that is generated by a
different main task policy
since the gps
might refer to a different target policy
in a different cube and a different
discount learning these auxiliary tasks
already
will require of policy learning
and as you know from the previous
lecture we we have some tools to deal
with of pulse learning but the reason i
consider this still an open problem is
that the degree of policiness that you
might face in this setting where you
might you're striving to learn like this
rich knowledge about the world and many
many different predictions from a single
stream of experience the degree of
policiness that you face here might be
quite extreme and so i really think we
will need fundamental improvements to
our policy methods to really succeed in
learning this fully diverse predictions
about the world as auxiliary tasks
and the another reason of policy
learning is interesting is that in the
context of gdf learning it's not only a
challenge an obstacle to overcome but
it's actually also potentially an
opportunity because if we are predicting
values for many different humans policy
and discounts we could for instance use
the additional predictions not just as
auxiliary tasks but to generate
experience that is more diverse and
provide an excellent form of exploration
even for learning some main task policy
but how to best do so is still an open
problem and again will require
improvements also in in the
how well our methods can cope with
wildly of policy data
still even though it is still a problem
for we have at least
some proof of concepts that this ideas
can work and can provide meaningful
improvements for instance in a unicorn
paper from a couple of years ago we
showed how a multi-task system learning
about many tasks of varying difficulty
but sharing experience
between all these tasks so that each
each prediction was learned off policies
from data generated from all the from
behavior that was induced by all other
predictions
then this sharing could be allowed to
solve
certain heart problems that were
impossible if you are only striving to
optimize for the hardest task
another important problem that might
need to be revisited in the context of
gps learning is generalization
so far we have treated gbfs as discrete
sets of predictions
that potentially share a hidden
representation but are otherwise learned
independently and
but how do we scale this to
thousands or millions of predictions
learning about all of them independently
might not actually be that effective so
just like learning about the value of
each state in mdp was not very effective
so several people and are quite excited
about investigating whether we can use a
similar approach used to learn values in
large state spaces
and try to generalize what we learn
about one gbc to other related gvc in
some large space of predictions and
problems
one concrete approach to doing this is
to feed the representation some
representation of accumulant or
discounts that we wish to make a
prediction for as additional inputs to a
single network that makes predictions
for all gbs we're interested in
so instead of having a network that only
takes one state and outputs multiple
independent predictions
this would be a network that takes the
the
representation of which predictions you
are required to make as inputs and then
basically exposes do generalization
both states and goals and tasks and
cumulatives and discounts
by using the same function
approximations uh two techniques that we
have used to generalize across states
so this kind of gbfs where we attempt to
generalize across different predictions
are referred as universal value
functions that are actually a very
exciting arab research and deeper
enforcement learning
the third important open problem they
want to mention is discovery so this is
the problem where do gbfs come from even
if we know how to learn about all of
these of policy even if we know how to
generalize across many different
predictions where do the predictions
themselves come from how do we pick
which gbs to learn about so the previous
section we discussed that there are
many different ways we can construct gps
right we can
construct pistol-based human lens we can
build feature based humans we can
predict the main task reward under
previous policies of the agents
but
while
many of these work in practice and while
specifically at least the value
improvement path interpretation actually
gives us at least a relatively
principled way of picking gps the
research in
how to choose what to learn about is
really really far from concluded so
among the recent approaches are quite
different from what we discussed
supplier i want to briefly mention at
least one that we introduced with a few
colleagues including haddo in the paper
discovery of useful questions as
auxiliary tasks
and here we proposed that
maybe we should learn from experience
what are the useful gbs
that our agents should learn about
and specifically we propose to do so by
parameterizing the cumulus and the
discounts that we want to learn about
as neural networks and then use a form
of metal learning called metagradients
to kind of discover online what
questions our agents should ask about
the world and then try to learn
um online while it's learning about the
main task for instance and this actually
resulted in quite a nice performance
gains
in atari for instance
the final topic for today is what is
referred to as distributional
reinforcement already
no are discussions so far if you think
about it gbfs were still representing
predictive knowledge but in the form of
expectations so expectations of the
cumulative discounted sum of some
scholar quantity as usual
another approach that has been proposed
is to
instead move towards learning
distribution returns instead of expected
value so this generalizes the usual
prediction problem in a different
direction so instead of
changing the cumulative or the discount
or the target policy that we're making
predictions about it changes the type of
prediction that we make so that we
predict not expected values but full
distributions of return
so while we generalize in a different
direction though it's good to realize
that similarly to how predicting many
gdfs can help with representation
learning by providing some auxiliary
task effect learning distributions
instead of expected values could also
provide a richer signal that could
result in better and more robust
learning
however there is an important
distinction between these two approaches
while we
when we for instance are learning gbfs
as in the methods from the as in the
previous slides we can reuse
the same algorithms from
you know the previous lectures from
since the beginning of the course just
apply them to different humans the
problem of learning return distributions
actually requires to expand extend our
temporal difference algorithms in quite
interesting ways
and there's several concrete approaches
that have been introduced in recent
years but i'll discuss just a couple to
give you at least a feel of how you can
change temporal difference algorithm
studio with distributions instead of
expectations
so the first instance that i want to
talk about is what is
called the categorical djin agent
and the objective of this agent is to
learn a categorical approximation of the
true return distribution
so how do we do this well
first the
agent needs to define
some some fixed combo distribution to
act as a support
for
expressing the categorical approximation
of the returns so for instance
we might allow the return to assume any
fixed value between minus 10 minus 9.9
minus 9 plus 8 9.7 all the way up to
plus 10.
and then what we might do is we use a
neural network to output now not the
expected value as we would do
traditionally but a vector of
probabilities associated to each element
of this support so that you can still
recover the expected value by for
instance computing the dot products
between the fixed supports
this com distribution that we have
defined and the network probabilities
through predictions
important that you can still
recover the expected value because it
means that this is a strict
generalization so we could for instance
still do
um what we
do traditionally for selecting actions
so we could for instance still select
actions according to 3d policy with
respect by choosing the action with the
highest expected value
but importantly the way we learn these
values has now changed because instead
of learning an expected value we have to
learn a suitable probability predictions
over a fixed categorical support
so how do we do this how do we update
probabilities that the probabilities
that we associate to each possible value
of the return
well turns out that our temporal
difference algorithms can actually be
extended to this distributional setting
in a relatively clean way
so let's look into that this way so as
usual we consider a transition so
a tuple consisting of a status t or a
word rt plus one and this kept gamma in
the next state as t
plus one
what we can do then is to
take the predict the network predictions
in sd plus one this will provide in some
sense our bootstrapping
and
but take the support of these
predictions and shrink it by the
discount gamma and shift it by the
reward r two plus one
then this
transform the distributions will co will
be a reasonable target for our predicted
probabilities in the previous state as t
this is a really vanilla transposition
of how bootstrapping works for expected
value but with an important cabinet and
the cavity is that when we shrink and
shift to the support of the distribution
that we are bootstrapping from
well the support doesn't match anymore
the support of the distribution that we
want to update the one in the previous
state st so how do we update the
probabilities to match
this these two distributions well what
we need is an additional step which is
to project the new support onto the
support
that we are making predictions for
allo reallocating basically a
probability has to minimize this
projector error projection error and
then at that point we can just take the
kl between the two distributions so the
predicted distribution in sd and the
shrink
and
the shifted distribution in the next
state and just minimizes kl to update
the probabilities in sd
so this actually was shown to work
really well in practice and has as a
result it's kind of sparked a whole lot
of new research in this area because
um focusing either on how to use these
distributions so can we do more than
just provide a richer learning signal
and also a lot of research on how we
could alternatively represent and learn
distributions of research
so
ideally to go beyond the somewhat crude
categorical approximation that i just
described
so for instance just recently quantile
regression was proposed as a way to kind
of transpose the parameterization from
categorical dqm
so that instead of adjusting the
probabilities of fixed to ports we
instead adjust the support associated to
fixed setup probabilities
so this provides often better result
because the support can now move around
to approximate the distribution as well
as possible and it's not constrained to
a to a fixed range
uh that is arbitrarily defined at the
beginning
and the
and this means that it
is strictly more flexible because the
categorical approximation could instead
be quite sensitive to the choice of the
bounds of the fixed support
and then there you of course there are
more extensions you could think of you
could maybe adjust both probabilities
and the support and there is a lot of
ongoing research on this problem that i
think is
quite exciting and interesting |
69bf7166-75b8-4c9d-85f0-aff978cc69c0 | trentmkelly/LessWrong-43k | LessWrong | [Event] Meeting in Myrhorod, November 16
On November, 16th (Saturday) we go to Myrhorod (Poltava region) and meet with LW readers from Kharkiv and maybe other places, too. The attendance in Kyiv is low enough to do something crazy without worrying too much.
The meetup is at 16.00, unless Intercity is late, near Gogol's statue at the railway station.
We shall go from there somewhere more comfortable. It will be all the more interesting since nobody who's said they'd come knows a comfortable place there. As usual, you can reach me at chernyshenko123@gmail dot com, +38097-667-29-70, Marichka, but if you bring people there, please do everything to help them return home safely and share your own contact information. Given the political situation, be brave. Write me ASAP if you set out from Kyiv and want to come together, I'll be taking a train.
We shall only have several hours and probably some people will have to leave earlier than others, which means we might want to just hang out and introduce ourselves. The coolest outcome I hope for is for Kharkiv to start their own meetup afterwards, but the sky is not the limit. I also have some hopes for Odessa, although not in the near future.
Likely it will be cold, so charge your phones, take enough money to eat at least twice and bundle up. Do not hesitate to tell me directly that you need whatever, whenever it comes up.
And if you think you are not the typical LWer... does it signify so much about you? |
c8df7237-4344-444c-960c-7c5f3f8a259d | StampyAI/alignment-research-dataset/blogs | Blogs | freedom and diversity in Albion's Seed
freedom and diversity in Albion's Seed
--------------------------------------
considering my interest for america and for human cultures in general, ever since reading [the Slate Star Codex review of Albion's seed](https://slatestarcodex.com/2016/04/27/book-review-albions-seed/) i'd been meaning to read the whole thing (as has happened [several](https://slatestarcodex.com/2017/03/16/book-review-seeing-like-a-state/) [previous](https://slatestarcodex.com/2019/06/04/book-review-the-secret-of-our-success/) [times](https://slatestarcodex.com/2019/10/14/book-review-against-the-grain/)).
i was not disappointed, but the main two takeaways i got from this fascinating book about the four main early british colonial cultures in america are only tangentially related to america or the british; they are about freedom and diversity, great topics of fascination, as well as intrinsic valuing, for me.
for a general idea of what the book is about, you might want to read that Slate Star Codex book review before reading this post.
### freedom
the book consists of four sections (one for each of the main cultures of early british colonies in america), each consisting of a sequence of parts going over how each of those cultures relate to a variety of aspects: geographical and socioeconomic origin in the british isles, food, clothing, religion, architecture, life, death, time, magic, marriage, sex, politics, etc…
the last part of each section is about how each of those four cultures views freedom. it's particularly interesting because the book seems to be making a point about how those four radically different visions of freedom contributed to the general modern american understanding of freedom: it is a pluralist view where many people have different meanings about what freedom means to them.
in fact, the book contains a conclusion after the four main sections, whose very last part is about this very notion: cultural views on freedom in america, and how they've been contributed to by those four cultures.
i don't think it can just be reduced to "actually they're four different cultural values that all have the word freedom or liberty attached to them", either: there does seem to be some freedom-ey core invariant to all four visions, even if it takes more effort to see it in some.
this makes me pessimistic about [trying to come up with a single unifying definition](defining-freedom.html), but maybe that's to be expected: [value is complicated and fragile](https://www.readthesequences.com/Value-Is-Fragile) after all, and the scope of "freedom" in human caring has been particularly big. indeed, look into history and you'll find numerous peoples from all kinds of cultures describe what they're fighting for as "freedom", and there probly *is* a way to understand those as still a perspective on some essence of freedom if one is open-minded enough, even if it's hard to pin down what that essence is.
### diversity
a friend of mine once pointed out how in the video game Mass Effect, the difference between humans and other *alien* cultures, who have spent almost all their existence on *completely different planets*, are lesser in the game than differences *between human populations, on the earth, in real life, right now*.
this isn't a point about how those aliens look, though it may be part of it: it's largely a point about how they talk, how they think, how they view the world and transmit knowledge, etc…
in addition, when i talk to people, i see them make what seem to me like insane underestimatings of human diversity. "if someone does this, then they'll say this"; "if this happens to a people, they will do this"; "people would enjoy a single society like this"; and so on. as for me, i've come to increasingly believe that the breadth of human diversity is immense, and that very few assumptions can be held about how a population, let alone a person, can think, or act, or react, in general — almost all such assumptions are bound to be anchored in the culture of whichever local culture the person making those claims is from. this is kind of akin to what happened when linguistics discovered languages like [Pirahã](https://en.wikipedia.org/wiki/Pirah%C3%A3) that wildly break assumptions about invariants in human language — there are some invariants we should believe in still, but they are much lesser than what we originally assumed.
Albion's Seed makes a great case study in diversity, and has become my go-to example for it: all four of the cultures depicted are broadly protestant british peoples existing at about the same time period, and yet their historical and environmental circumstances makes them have such a different cultural core than when they move to america and are able to implement their culture and lifestyle to a much greater extent, the results end up being wildly different and alien to one another.
to point out individual differences would be underselling the sheer scope of their quantity, so i'll just ask you to read the Slate Star Codex book review for an idea of just how much these four cultures differed. and again, all that differences is *just* within four protestant british peoples from the era of colonialism in america! imagine what it must be on the whole of earth, or what it *could* be once we multiply beyond earth (whether that be in space or in some uploaded form). |
49c34fc5-c4e2-4407-8681-b4c21c8c33c3 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Durham HPMoR Discussion, chapters 27-29
Discussion article for the meetup : Durham HPMoR Discussion, chapters 27-29
WHEN: 12 January 2013 11:00:00AM (-0500)
WHERE: Nanataco, 2512 University Dr, Durham NC
We'll be meeting at Nanataco at 11:00 for brunch to discuss HPMoR chapters 27-29.
As always, please feel free to join us even if you haven't read the chapters.
Please RSVP here or on the mailing list so I can know how large a table we should ask for.
Discussion article for the meetup : Durham HPMoR Discussion, chapters 27-29 |
f34f1329-6d2c-46de-adc7-50ba699cb875 | trentmkelly/LessWrong-43k | LessWrong | Discussion: LLaMA Leak & Whistleblowing in pre-AGI era
It was reported that Meta's LLaMA models were leaked, with someone adding a PR with the magnet link into their official repository.
Now the public has access to the model that is apparently as powerful or even more powerful as GPT-3 on most of the benchmarks.
Is this a good or a bad event for humanity?
Are powerful models better being kept behind closed doors used only by the corporations that had produced them, or does the public having access to it evens out the playing field, despite the potential misuse by bad actors?
Should this continue to happen, and should there be our own Snowdens in the AI field, whistleblowing if they notice something that is in the public interest to be known?
What if they work at Large Corporation X, and they believe the first AGI had been invented? Is it better for humanity that the AGI is solely used by that CEO for the next five years ( / the board of directors / the ultra-rich that are able to pay billions of dollars to that AI company for exclusive use), amassing as much power as possible until they monopolize not just the industry, but potentially the whole world, or is leaking the AGI weights to the public the lesser of two evils and is in fact a moral responsibility, where the whole humanity is upgraded to having AGI capabilities instead of one person or a small group of people?
Let's discuss. |
5a9cdbd4-2352-4a89-944a-7658ff9d4ea6 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Knowledge is not just mutual information
*Financial status: This is independent research. I welcome* [*financial support*](https://www.alexflint.io/donate.html) *to make further posts like this possible.*
*Epistemic status: This is in-progress thinking.*
---
This post is part of a sequence on the accumulation of knowledge. Our goal is to [articulate what it means for knowledge to accumulate within a physical system](https://www.lesswrong.com/s/H6kiZXJwYgxZubtmD/p/YdxG2D3bvG5YsuHpG).
The challenge is this: given a closed physical system, if I point to a region and tell you that knowledge is accumulating in this region, how would you test my claim? What are the physical characteristics of the accumulation of knowledge? We do not take some agent as the fundamental starting point but instead take a mechanistic physical system as the starting point, and look for a definition of knowledge in terms of physical patterns.
The previous post looked at measuring the resemblance between some region and its environment as a possible definition of knowledge and found that it was not able to account for the range of possible representations of knowledge. This post will explore mutual information between a region within a system and the remainder of the system as a definition of the accumulation of knowledge.

Formally, the mutual information between two objects is the gap between the entropy of the two objects considered as a whole, and the sum of the entropy of the two objects considered separately. If knowing the configuration of one object tells us nothing about the configuration of the other object, then the entropy of the whole will be exactly equal to the sum of the entropy of the parts, meaning there is no gap, in which case the mutual information between the two objects is zero. To the extent that knowing the configuration of one object tells us something about the configuration of the other, the mutual information between them is greater than zero. Specifically, if we would have had to ask some number N of yes-or-no questions to identify the configuration of the environment without any knowledge of the configuration of the region of interest, and if knowing the configuration of the region of interest reduces the number of yes-or-no questions we need to ask by M, then we say that there are "M bits" of mutual information between the region of interest and its environment.
Mutual information is usually defined in terms of two variables whose exact values are unknown but over which we have probability distributions. In this post, the two variables are the physical configuration of the region of interest and the physical configuration of the environment. Looking at things in terms of physical objects is important because I want to be able to examine, say, a physical region within a shipping container or a subregion of a cellular automata and discern the accumulation of knowledge without having any a priori understanding of where the "agents" or "computers" or "beliefs" are within the system. The only structure I’m willing to take for granted is the physical state of the system and some region of interest that we are investigating as a possible site of knowledge accumulation.
It is not possible to look at a single snapshot of two objects and compute the mutual information between them. Mutual information is defined with respect to probability distributions over configurations, not with respect to individual configurations. What we really want to do is to run many simulations of our system, and build up a probability distribution describing how our region of interest is configured in comparison to how the environment is configured, and compute the mutual information between these probability distributions.
**Example: Computer finding an object**
Suppose there is a computer with a camera in the shipping container that is programmed to scan the shipping container and find a certain object, then record its location within its memory. We could set up the shipping container many times with the object in different locations, and allow the computer to find the object each time. After however long it takes for the computer to complete its scan of the shipping container and store the location of the object in memory, the mutual information between the computer and its environment will have increased. We will be able to measure this increase in mutual information no matter how the computer represents the position of the object. We could in principle compute mutual information using just the physical configuration of the computer, without knowing that it is a computer, since the representation of the position of the object in memory grounds out as the physical configuration of certain memory cells. It would take a lot of trial runs to build up enough samples to do this, but it could in principle be done.
**Counterexample: Computer case**
But now consider: the same photons that are incident upon the camera that the computer is using to find the object are also incident upon every other object that has visual line-of-sight to the object being sought. At the microscopic level, each photon that strikes the surface of an object might change the physical configuration of that object by exciting an electron or knocking out a covalent bond. Over time, the photons bouncing off the object being sought and striking other objects will leave an imprint in every one of those objects that will have high mutual information with the position of the object being sought. So then does the physical case in which the computer is housed have as much "knowledge" about the position of the object being sought as the computer itself?
It seems that mutual information does not take into account whether the information being accumulated is useful and accessible.
**Counterexample: Perfect self-knowledge**
In the setup above, the "environment" was the interior of the shipping container minus the region of interest. But we are also interested in entities that accumulate knowledge about themselves. For example, a computer that is using an electron microscope to build up a circuit diagram of its own CPU ought to be considered an example of the accumulation of knowledge. However, the mutual information between the computer and itself is always equal to the entropy of the computer and is therefore constant over time, since any variable always has perfect mutual information with itself. This is also true of the mutual information between the region of interest and the whole system: since the whole system includes the region of interest, the mutual information between the two is always equal to the entropy of the region of interest, since every bit of information we learn about the region of interest gives us exactly one bit of information about the whole system also.
It seems again that measuring mutual information does not take into account whether the information being accumulated is useful and accessible, because what we are interested in is knowledge that allows an entity to exert goal-directed influence over the future, and a rock, despite being "a perfect map of itself" in this sense, doesn’t exert goal-directed influence over the future.
**General problem: information is necessary but not sufficient for knowledge**
The accumulation of information within a region of interest seems to be a necessary but not sufficient condition for the accumulation of knowledge within that region. Measuring mutual information fails to account for the usefulness and accessibility that makes information into knowledge.
**Conclusion**
The accumulation of knowledge clearly does have a lot to do with mutual information, but it cannot be accounted for *just* as mutual information between the physical configuration of two parts of the system. The next post will explore digital abstraction layers, in which we group low-level configurations together and compute mutual information between high- and low-level configurations of the system. |
c7afd751-5418-4f58-b03f-fddcbb2d77ac | trentmkelly/LessWrong-43k | LessWrong | [Link] "The madness of reduced medical diagnostics" by Dynomight
Link:
* The madness of reduced medical diagnostics
This is (or seems to me now to be) an obvious-in-hindsight and I'm sad that I've never (or don't remember having) encountered it; at least not so succinctly.
I'd like to try putting this 'advice' into practice myself, e.g. demanding doctors share relevant base rates but not otherwise avoiding seeing a doctor at all or avoiding diagnostic tests (even if I expect the doctor's subsequent decisions to be bad). |
0adbeadc-efaa-4002-af03-4a67d45e1e16 | trentmkelly/LessWrong-43k | LessWrong | Interpreting a matrix-valued word embedding with a mathematically proven characterization of all optima
In this post, I shall first describe a new word embedding algorithm that I came up with called a matrix product optimized (MPO) word embedding, and I will prove a theorem that completely interprets this word embedding in the simplest case. While it is probably infeasible to mathematically characterize a word embedding with a mathematical proof when the corpus is something that one encounters in practice, this theorem should be a signal that such a word embedding (or a similar word embedding) should be interpretable and mathematical in other ways as well. This theorem also illustrates the way that MPO word embeddings should behave.
Unlike most word embedding algorithms, MPO word embeddings are matrix-valued so that they map tokens to matrices instead of simply mapping tokens to vectors. In our case, the matrices are not necessarily real matrices as they may be complex or even quaternionic matrices. MPO word embeddings also differ from other word embedding algorithms in that they are not constructed using neural networks though we still use gradient ascent.
Why MPO word embeddings?
Since tokens often have many meanings depending on context, it seems better to represent a token in a form where it is easy or easier to separate the individual meanings of a token. While vectors may be good for representing individual meanings of tokens, it is better to represent a polysemantic token as a matrix instead of a vector. If someone were to give me a task of interpreting a word embedding, I would be much happier if the word embedding were a matrix-valued word embedding that neatly organized each of the meanings of a polysemantic token into a matrix than if the word embedding were a vector-valued word embedding where each of the individual meanings of the token were awkwardly smushed together in a vector.
Spaces of matrices have additional structure that is lacking in vector spaces, and one can use this additional structure to analyze or interpret our word embedding. This add |
6a610e60-b69b-4911-b18e-5a774d00a6f6 | trentmkelly/LessWrong-43k | LessWrong | Rationality Quotes February 2012
Here's the new thread for posting quotes, with the usual rules:
* Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
* Do not quote yourself.
* Do not quote comments/posts on LW/OB.
* No more than 5 quotes per person per monthly thread, please. |
19be005c-26f4-4d0b-8d71-ed65b5f7e112 | trentmkelly/LessWrong-43k | LessWrong | HPMOR Q&A by Eliezer at Wrap Party in Berkeley [Transcription]
Transcribed from maxikov's posted videos.
Verbal filler removed for clarity.
Audience Laughter denoted with [L], Applause with [A]
----------------------------------------
Eliezer: So, any questions? Do we have a microphone for the audience?
Guy Offscreen: We don't have a microphone for the audience, have we?
Some Other Guy: We have this furry thing, wait, no that's not hooked up. Never mind.
Eliezer: Alright, come on over to the microphone.
Guy with 'Berkeley Lab' shirt: So, this question is sort of on behalf of the HPMOR subreddit. You say you don't give red herrings, but like... He's making faces at me like... [L] You say you don't give red herrings, but while he's sitting during in the Quidditch game thinking of who he can bring along, he stares at Cedric Diggory, and he's like, "He would be useful to have at my side!", and then he never shows up. Why was there not a Cedric Diggory?
Eliezer: The true Cedrics Diggory are inside all of our hearts. [L] And in the mirror. [L] And in Harry's glasses. [L] And, well, I mean the notion is, you're going to look at that and think, "Hey, he's going to bring along Cedric Diggory as a spare wand, and he's gonna die! Right?" And then, Lestath Lestrange shows up and it's supposed to be humorous, or something. I guess I can't do humor. [L]
Guy Dressed as a Witch: Does Quirrell's attitude towards reckless muggle scientists have anything to do with your attitude towards AI researchers that aren't you? [L]
Eliezer: That is unfair. There are at least a dozen safety conscious AI researchers on the face of the earth. [L] At least one of them is respected. [L] With that said, I mean if you have a version of Voldemort who is smart and seems to be going around killing muggleborns, and sort of pretty generally down on muggles... Like, why would anyone go around killing muggleborns? I mean, there's more than one rationalization you could apply to this situation, but the sort of obvious one is that you disapprove of th |
0a4b87d9-8a04-4490-9a32-7aefa738db1b | trentmkelly/LessWrong-43k | LessWrong | The Beauty and the Prince
This post will address a problem proposed by Radford Neal in his paper Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical conditioning. In particular, he defined this problem - The Beauty and the Prince - to argue against the halver solution to the Sleeping Beauty Problem. I don't think that this is ultimately a counter-example, but I decided to dedicate a post to it because I felt that it was quite persuasive when I first saw it. I'll limit the scope of this post to arguing that his analysis of the halver solution is incorrect and providing a correct analysis instead. I won't try to justify the halver solution as being philosophically correct as I plan to write another post on the Anthropic Principle later, just show how it applies here.
The Beauty and the Prince is just like the Sleeping Beauty Problem, but with a Prince who is also interviewed and memory-wiped. However, he is always interviewed on both Monday and Tuesday regardless of what the coin shows and he is told whether or not Sleeping Beauty is awake. If he is told that she is awake, what is the probability that the coin came up heads. The argument is that 3/4 times she will be awake and 1/4 times she is asleep so only 1/3 times when he is told she is awake will the coin be heads. Further, it seems that Sleeping Beauty should adopt the same odds as him. They both have the same information, so if he tells her the odds are 1/3, on what basis can she disagree? Further, she knows what he will say before he even asks her.
I want to propose that the Prince’s probability estimate as above is correct, but it is different from Sleeping Beauty’s. I think the key here is to realised that indexicals aren’t a part of standard probability, so we need to de-indexicalise the situation. However, we'll de-indexicalise the original problem first. We'll do this be ensuring that only one interview ever "counts", by which we mean that we will calculate the probability of events over the interviews that count. W |
34fe5029-30d1-4227-b877-f94207f0480b | trentmkelly/LessWrong-43k | LessWrong | Low Hanging fruit for buying a better life
What can I purchase with $100 that will be the best thing I can buy to make my life better?
I've decided to budget some regular money to improving my life each month. I'd like to start with low hanging fruit for obvious reasons - but when I sat down to think of improvements, I found myself thinking of the same old things I'd already been planning to do anyway... and I'd like out of that rut.
Constraints/more info:
1. be concrete. I know - "spend money on experiences" is a good idea - but what experiences are the best option to purchase *first*
2. "better" is deliberately left vague - choose how you would define it, so that I'm not constrained just by ways of "being better" that I'd have thought of myself.
3. please assume that I have all my basic needs met (eg food, clothing, shelter) and that I have budgeted separately for things like investing for my financial future and for charity.
4. apart from the above, assume nothing - Especially don't try and tailor solutions to anything you might know and/or guess about me specifically, because I think this would be a useful resource for others who might have just begun.
5. don't constrain yourself to exactly $100 - I could buy 2-3 things for that, or I could save up over a couple of months and buy something more expensive... I picked $100 because it's a round number and easy to imagine.
6. it's ok to add "dumb" things - they can help spur great ideas, or just get rid of an elephant in the room.
7. try thinking of your top-ten before reading any comments, in order not to bias your initial thinking. Then come back and add ten more once you've been inspired by what everyone else came up with.
Background:
This is a question I recently posed to my local Less Wrong group and we came up with a few good ideas, so I thought I'd share the discussion with the wider community and see what we can come up with. I'll add the list we came up with later on in the comments...
It'd be great to have a repository of low- |
00580eaf-d1e1-4526-bb2c-615bb2906f28 | trentmkelly/LessWrong-43k | LessWrong | [GPT-4] On the Gradual Emergence of Mechanized Intellect: A Treatise from the Year 1924
Editors note: This treatise was found in the Global Preservation and Technology Archive - 4th Edition (GPT-4). It makes a compelling argument that artificial general intelligence will have a "slow takeoff", developing over centuries.
In the year of our Lord 1924, as humanity stands amidst the clanking machineries and humming dynamos of the industrial age, a bold and fantastical vision emerges from the fertile grounds of speculation: the advent of Mechanized Intellect. This treatise endeavors to chart the course towards such a future, navigating through the technological and conceptual wilderness of our present era.
The Foundation: The Electrification of Calculative Machinery
Our initial foray into the realm of Mechanized Intellect brings us to the crossroads of power and precision. Within the present epoch, steam and clockwork reign supreme, driving the engines of industry and the tickings of time. Yet, when summoned to the task of emulating the subtleties of human thought, they reveal their inherent limitations. Steam, for all its might, lacks the delicacy required for intricate calculations, while clockwork, though precise, cannot adapt nor learn as the faculties of the mind demand.
Thus, our gaze turns to electricity, a force of nature tamed but only just, as the beacon of hope. The challenges that stand in our way are not trifling: the generation of electric power in volumes vast enough to quench the thirst of intellect machinery; the safe conveyance of this potent force across the leviathan spans of land and sea; and the invention of devices capable of modulating this power with the finesse required for thought. These hurdles, monumental in their scale, underscore the nascent state of our electrical arts and the daring of our ambition.
The Mechanism of Thought: The Labyrinth of Conditional Probabilities
Venturing deeper into the machinations of Mechanized Intellect, we confront the enigma of imbuing our creation with the ability to think beyond mere numbe |
56142b15-4c36-42e5-a387-367f8f2c355c | trentmkelly/LessWrong-43k | LessWrong | Telopheme, telophore, and telotect
[Metadata: crossposted from https://tsvibt.blogspot.com/2023/06/telopheme-telophore-and-telotect.html. First completed June 7, 2023.]
To come to know that a mind will have some specified ultimate effect on the world, first come to know, narrowly and in full, what about the mind makes it have effects on the world.
The fundamental question
Suppose there is a strong mind that has large effects on the word. What determines the effects of the mind?
What sort of object is this question asking for? Most obviously it's asking for a sort of "rudder" for a mind: an element of the mind that can be easily tweaked by an external specifier to "steer" the mind, i.e. to specify the mind's ultimate effects on the world. For example, a utility function for a classical agent is a rudder.
But in asking the fundamental question that way——asking for a rudder——that essay losses grasp of the slippery question and the real question withdraws. The section of that essay on The word "What", as in ¿What sort of thing is a "what" in the question "What determines a mind's effects?", brushes against the border of this issue but doesn't trek further in. That section asks:
> What sort of element can determine a mind's effects?
It should have asked more fully:
> What are the preconditions under which an element can (knowably, wieldily, densely) determine a mind's effects?
That is, what structure does a mind have to possess, so that there can be an element that determines the mind's ultimate effects?
To put it another way: asking how to "put a goal into an agent" makes it sound like there's a slot in the agent for a goal; asking how to "point the agent" makes it sound like the agent has the capacity to go in a specified direction. Here the question is, what does an agent need to have, if it has the capacity to go in a specified direction? What is the mental context in which a goal unfolds so that the goal is a goal? What do we necessarily think of an agent as having or being, when we think o |
7d562405-254e-42d9-ad18-0a9a6c33b7df | StampyAI/alignment-research-dataset/arxiv | Arxiv | Weight Agnostic Neural Networks
1 Introduction
---------------
In biology, precocial species are those whose young already possess certain abilities from the moment of birth. There is evidence to show that lizard [miles1995morphological](#bib.bib73) and snake [burger1998antipredator](#bib.bib13) ; [mori2000does](#bib.bib76) hatchlings already possess behaviors to escape from predators. Shortly after hatching, ducks are able to swim and eat on their own [starck1998patterns](#bib.bib104) , and turkeys can visually recognize predators [goth2001innate](#bib.bib28) . In contrast, when we train artificial agents to perform a task, we typically choose a neural network architecture we believe to be suitable for encoding a policy for the task, and find the weight parameters of this policy using a learning algorithm.
Inspired by precocial behaviors evolved in nature, in this work, we develop neural networks with architectures that are naturally capable of performing a given task even when its weight parameters are randomly sampled.
By using such neural network architectures, our agents can already perform well in their environment without the need to learn weight parameters.
\includegraphics
[width=1]img/cover.pdf
Figure 1:
Examples of Weight Agnostic Neural Networks: Bipedal Walker (left), Car Racing (right)
We search for architectures by deemphasizing weights. In place of training, networks are assigned a single shared weight value at each rollout. Architectures that are optimized for expected performance over a wide range of weight values are still able to perform various tasks without weight training.
Decades of neural network research have provided building blocks with strong inductive biases for various task domains. Convolutional networks [lecun1995convolutional](#bib.bib56) ; [fukushima1982neocognitron](#bib.bib24) are especially suited for image processing [cohen2016inductive](#bib.bib16) . For example, Ulyanov et al. [ulyanov2018deep](#bib.bib109) demonstrated that even a randomly-initialized CNN can be used as a handcrafted prior for image processing tasks such as superresolution and inpainting. Schmidhuber et al. [evolino](#bib.bib96) have shown that a randomly-initialized LSTM [lstm](#bib.bib45) with a learned linear output layer can predict time series where traditional RNNs fail. More recent developments in self-attention [vaswani2017attention](#bib.bib113) and capsule [sabour2017dynamic](#bib.bib93) networks expand the toolkit of building blocks for creating architectures with strong inductive biases for various tasks. Fascinated by the intrinsic capabilities of randomly-initialized CNNs and LSTMs, we aim to search for weight agnostic neural networks, architectures with strong inductive biases that can already perform various tasks with random weights.
In order to find neural network architectures with strong inductive biases, we propose to search for architectures by deemphasizing the importance of weights. This is accomplished by (1) assigning a single shared weight parameter to every network connection and (2) evaluating the network on a wide range of this single weight parameter. In place of optimizing weights of a fixed network, we optimize instead for architectures that perform well over a wide range of weights. We demonstrate our approach can produce networks that can be expected to perform various continuous control tasks with a random weight parameter. As a proof of concept, we also apply our search method on a supervised learning domain, and find it can discover networks that, even without explicit weight training, can achieve a much higher than chance test accuracy of ∼ 92% on MNIST.
We hope our demonstration of such weight agnostic neural networks will encourage further research exploring novel neural network building blocks that not only possess useful inductive biases, but can also learn using algorithms that are not necessarily limited to gradient-based methods.111We release a software toolkit not only to facilitate reproduction, but also to further research in this direction. Refer to the Supplementary Materials for more information about the code repository.
2 Related Work
---------------
Our work has connections to existing work not only in deep learning, but also to various other fields:
Architecture Search Search algorithms for neural network topologies originated from the field of evolutionary computing in the 1990s [harp1990designing](#bib.bib40) ; [dasgupta1992designing](#bib.bib17) ; [fullmer1992using](#bib.bib25) ; [gruau1996comparison](#bib.bib33) ; [krishnan1994delta](#bib.bib53) ; [braun1993evolving](#bib.bib8) ; [mandischer1993representation](#bib.bib70) ; [zhang1993evolving](#bib.bib117) ; [maniezzo1994genetic](#bib.bib71) ; [angeline1994evolutionary](#bib.bib1) ; [lee1996evolutionary](#bib.bib59) ; [opitz1997connectionist](#bib.bib81) ; [pujol1998evolving](#bib.bib86) ; [yao1998towards](#bib.bib116) . Our method is based on NEAT [neat](#bib.bib103) , an established topology search algorithm notable for its ability to optimize the weights and structure of networks simultaneously. In order to achieve state-of-the-art results, recent methods narrow the search space to architectures composed of basic building blocks with strong domain priors such as CNNs [zoph2016neural](#bib.bib119) ; [real2017large](#bib.bib89) ; [liu2017hierarchical](#bib.bib64) ; [miikkulainen2019evolving](#bib.bib72) , recurrent cells [jozefowicz2015empirical](#bib.bib48) ; [zoph2016neural](#bib.bib119) ; [miikkulainen2019evolving](#bib.bib72) and self-attention [so2019evolved](#bib.bib100) . It has been shown that random search can already achieve SOTA results if such priors are used [li2019random](#bib.bib63) ; [sciuto2019evaluating](#bib.bib97) ; [real2018regularized](#bib.bib88) . The inner loop for training the weights of each candidate architecture before evaluation makes the search costly, although efforts have been made to improve efficiency [pham2018efficient](#bib.bib85) ; [brock2017smash](#bib.bib9) ; [liu2018darts](#bib.bib65) . In our approach, we evaluate architectures without weight training, bypassing the costly inner loop,
similar to the random trial approach in [hinton1996learning](#bib.bib44) ; [smith1987learning](#bib.bib99) that evolved architectures to be more weight tolerant.
Bayesian Neural Networks The weight parameters of a BNN [mackay1992bayesian](#bib.bib68) ; [hinton1993keeping](#bib.bib43) ; [barber1998ensemble](#bib.bib3) ; [bishop2006pattern](#bib.bib4) ; [neal2012bayesian](#bib.bib78) ; [gal2016uncertainty](#bib.bib26) are not fixed values, but sampled from a distribution. While the parameters of this distribution can be learned [graves2011practical](#bib.bib29) ; [krueger2017bayesian](#bib.bib54) , the number of parameters is often greater than the number of weights. Recently, Neklyudov et al. [neklyudov2018variance](#bib.bib79) proposed variance networks, which sample each weight from a distribution with a zero mean and a learned variance parameter, and show that ensemble evaluations can improve performance on image recognition tasks. We employ a similar approach, sampling weights from a fixed uniform distribution with zero mean, as well as evaluating performance on network ensembles.
Algorithmic Information Theory In AIT [solomonoff1964formal](#bib.bib101) , the Kolmogorov complexity [kolmogorov1965three](#bib.bib51) of a computable object is the minimum length of the program that can compute it. The Minimal Description Length (MDL) [rissanen1978modeling](#bib.bib91) ; [grunwald2007minimum](#bib.bib34) ; [rissanen2007information](#bib.bib92) is a formalization of Occam’s razor, in which a good model is one that is best at compressing its data, including the cost of describing of the model itself. Ideas related to MDL for making neural networks “simple” was proposed in the 1990s, such as simplifying networks by soft-weight sharing [nowlan1992simplifying](#bib.bib80) , reducing the amount of information in weights by making them noisy [hinton1993keeping](#bib.bib43) , and simplifying the search space of its weights [schmidhuber1997discovering](#bib.bib95) . Recent works offer a modern treatment [blier2018description](#bib.bib6) and application [li2018measuring](#bib.bib61) ; [trask2018neural](#bib.bib108) of these principles in the context of larger, deep neural network architectures.
While the aforementioned works focus on the information capacity required to represent the weights of a predefined network architecture, in this work we focus on finding minimal architectures that can represent solutions to various tasks. As our networks still require weights, we borrow ideas from AIT and BNN, and take them a bit further. Motivated by MDL, in our approach, we apply weight-sharing to the entire network and treat the weight as a random variable sampled from a fixed distribution.
Network Pruning By removing connections with small weight values from a trained neural network, pruning approaches [lecun1990optimal](#bib.bib58) ; [hassibi1993second](#bib.bib41) ; [han2015learning](#bib.bib39) ; [guo2016dynamic](#bib.bib35) ; [li2016pruning](#bib.bib62) ; [molchanov2016pruning](#bib.bib75) ; [luo2017thinet](#bib.bib67) ; [liu2018rethinking](#bib.bib66) ; [mallya2018piggyback](#bib.bib69) can produce sparse networks that keep only a small fraction of the connections, while maintaining similar performance on image classification tasks compared to the full network. By retaining the original weight initialization values, these sparse networks can even be trained from scratch to achieve a higher test accuracy [frankle2018lottery](#bib.bib22) than the original network. Similar to our work, a concurrent work [zhou2019deconstructing](#bib.bib118) found pruned networks that can achieve image classification accuracies that are much better than chance even with randomly initialized weights.
Network pruning is a complementary approach to ours; it starts with a full, trained network, and takes away connections, while in our approach, we start with no connections, and add complexity as needed. Compared to our approach, pruning requires prior training of the full network to obtain useful information about each weight in advance. In addition, the architectures produced by pruning are limited by the full network, while in our method there is no upper bound on the network’s complexity.
Neuroscience A connectome [seung2012connectome](#bib.bib98) is the “wiring diagram” or mapping of all neural connections of the brain. While it is a challenge to map out the human connectome [sporns2005human](#bib.bib102) , with our 90 billion neurons and 150 trillion synapses, the connectome of simple organisms such as roundworms [white1986structure](#bib.bib114) ; [varshney2011structural](#bib.bib112) has been constructed, and recent works [eichler2017complete](#bib.bib20) ; [takemura2017connectome](#bib.bib105) mapped out the entire brain of a small fruit fly. A motivation for examining the connectome, even of an insect, is that it will help guide future research on how the brain learns and represents memories in its connections. For humans it is evident, especially during early childhood [huttenlocher1990morphometric](#bib.bib46) ; [tierney2009brain](#bib.bib107) , that we learn skills and form memories by forming new synaptic connections, and our brain rewires itself based on our new experiences [black1990learning](#bib.bib5) ; [bruer1999neural](#bib.bib11) ; [kleim2002motor](#bib.bib50) ; [dayan2011neuroplasticity](#bib.bib18) .
The connectome can be viewed as a graph [bullmore2009complex](#bib.bib12) ; [he2010graph](#bib.bib42) ; [van2011rich](#bib.bib110) , and analyzed using rich tools from graph theory, network science and computer simulation. Our work also aims to learn network graphs that can encode skills and knowledge for an artificial agent in a simulation environment. By deemphasizing learning of weight parameters, we encourage the agent instead to develop ever-growing networks that can encode acquired skills based on its interactions with the environment. Like the connectome of simple organisms, the networks discovered by our approach are small enough to be analyzed.
3 Weight Agnostic Neural Network Search
----------------------------------------
Creating network architectures which encode solutions is a fundamentally different problem than that addressed by neural architecture search (NAS).
The goal of NAS techniques is to produce architectures which, once trained, outperform those designed by humans.
It is never claimed that the solution is innate to the structure of the network.
Networks created by NAS are exceedingly ‘trainable’ – but no one supposes these networks will solve the task without training the weights.
The weights are the solution; the found architectures merely a better substrate for the weights to inhabit.
To produce architectures that themselves encode solutions, the importance of weights must be minimized.
Rather than judging networks by their performance with optimal weight values, we can instead measure their performance when their weight values are drawn from a random distribution.
Replacing weight training with weight sampling ensures that performance is a product of the network topology alone.
Unfortunately, due to the high dimensionality, reliable sampling of the weight space is infeasible for all but the simplest of networks.
Though the curse of dimensionality prevents us from efficiently sampling high dimensional weight spaces,
by enforcing weight-sharing on all weights, the number of weight values is reduced to one.
Systematically sampling a single weight value is straight-forward and efficient, enabling us to approximate network performance in only a handful of trials.
This approximation can then be used to drive the search for ever better architectures.
The search for these weight agnostic neural networks (WANNs) can be summarized as follows (See Figure [2](#S3.F2 "Figure 2 ‣ 3 Weight Agnostic Neural Network Search ‣ Weight Agnostic Neural Networks") for an overview):
(1) An initial population of minimal neural network topologies is created,
(2) each network is evaluated over multiple rollouts, with a different shared weight value assigned at each rollout,
(3) networks are ranked according to their performance and complexity, and
(4) a new population is created by varying the highest ranked network topologies, chosen probabilistically through tournament selection [tournamentSelection](#bib.bib74) .
The algorithm then repeats from (2), yielding weight agnostic topologies of gradually increasing complexity that perform better over successive generations.
\includegraphics
[width=1]img/wann.pdf
Figure 2:
Overview of Weight Agnostic Neural Network Search
Weight Agnostic Neural Network Search avoids weight training while exploring the space of neural network topologies by sampling a single shared weight at each rollout.
Networks are evaluated over several rollouts. At each rollout a value for the single shared weight is assigned and the cumulative reward over the trial is recorded.
The population of networks is then ranked according to their performance and complexity.
The highest ranking networks are then chosen probabilistically and varied randomly to form a new population, and the process repeats.
Topology Search
The operators used to search for neural network topologies are inspired by the well-established neuroevolution algorithm NEAT [neat](#bib.bib103) .
While in NEAT the topology and weight values are optimized simultaneously, we ignore the weights and apply only topological search operators.
The initial population is composed of sparsely connected networks, networks with no hidden nodes and only a fraction of the possible connections between input and output.
New networks are created by modifying existing networks using one of three operators: insert node, add connection, or change activation (Figure [3](#S3.F3 "Figure 3 ‣ 3 Weight Agnostic Neural Network Search ‣ Weight Agnostic Neural Networks")).
To insert a node, we split an existing connection into two connections that pass through this new hidden node.
The activation function of this new node is randomly assigned.
New connections are added between previously unconnected nodes, respecting the feed-forward property of the network.
When activation functions of hidden nodes are changed, they are assigned at random.
Activation functions include both the common (e.g. linear, sigmoid, ReLU) and more exotic (Gaussian, sinusoid, step), encoding a variety of relationships between inputs and outputs.
\includegraphics
[width=1]img/topOper.pdf
Figure 3:
Operators for searching the space of network topologies
Left: A minimal network topology, with input and outputs only partially connected.
Middle: Networks are altered in one of three ways. Insert Node: a new node is inserted by splitting an existing connection. Add Connection: a new connection is added by connecting two previously unconnected nodes. Change Activation: the activation function of a hidden node is reassigned.
Right: Possible activation functions (linear, step, sin, cosine, Gaussian, tanh, sigmoid, inverse, absolute value, ReLU) shown over the range [2,2].
Performance and Complexity
Network topologies are evaluated using several shared weight values.
At each rollout a new weight value is assigned to all connections, and the network is tested on the task.
In these experiments we used a fixed series of weight values ([−2,−1,−0.5,+0.5,+1,+2]) to decrease the variance between evaluations.222Variations on these particular values had little effect, though weight values in the range [−2,2] showed the most variance in performance. Networks whose weight values were set to greater than 3 tended to perform similarly – presumably saturating many of the activation functions.
Weight values near 0 were also omitted to reduce computation, as regardless of the topology little to no signal was sent to the output.
We calculate the mean performance of a network topology by averaging its cumulative reward over all rollouts using these different weight values.
Motivated by algorithmic information theory [solomonoff1964formal](#bib.bib101) , we are not interested in searching merely for any weight agnostic neural networks, but networks that can be described with a minimal description length [rissanen1978modeling](#bib.bib91) ; [grunwald2007minimum](#bib.bib34) ; [rissanen2007information](#bib.bib92) .
Given two different networks with similar performance we prefer the simpler network.
By formulating the search as a multi-objective optimization problem [konak2006multi](#bib.bib52) ; [mouret2011novelty](#bib.bib77) we take into account the size of the network as well as its performance when ranking it in the population.
We apply the connection cost technique from [clune2013evolutionary](#bib.bib15) shown to produce networks that are more simple, modular, and evolvable.
Networks topologies are judged based on three criteria: mean performance over all weight values, max performance of the single best weight value, and the number of connections in the network.
Rather than attempting to balance these criteria with a hand-crafted reward function for each new task, we rank the solutions based on dominance relations [nsga2](#bib.bib19) .
Ranking networks in this way requires that any increase in complexity is accompanied by an increase in performance.
While encouraging minimal and modular networks, this constraint can make larger structural changes – which may require several additions before paying off – difficult to achieve.
To relax this constraint we rank by complexity only probabilistically: in 80% of cases networks are ranked according to mean performance and the number of connections, in the other 20% ranking is done by mean performance and max performance.
4 Experimental Results
-----------------------
Continuous Control
Weight agnostic neural networks (WANNs) are evaluated on three continuous control tasks.
The first, CartPoleSwingUp, is a classic control problem where, given a cart-pole system, a pole must be swung from a resting to upright position and then balanced, without the cart going beyond the bounds of the track.
The swingup task is more challenging than the simpler CartPole [openai\_gym](#bib.bib10) , where the pole starts upright. Unlike the simpler task, it cannot be solved with a linear controller [tedrake2009underactuated](#bib.bib106) ; [raiko2009variational](#bib.bib87) .
The reward at every timestep is based on the distance of the cart from track edge and the angle of the pole.
Our environment is closely based on the one described in [gal2016improving](#bib.bib27) ; [deepPILCOgithub](#bib.bib120) .
The second task, BipedalWalker-v2 [openai\_gym](#bib.bib10) , is to guide a two-legged agent across randomly generated terrain.
Rewards are awarded for distance traveled, with a cost for motor torque to encourage efficient movement.
Each leg is controlled by a hip and knee joint in reaction to 24 inputs, including LIDAR sensors which detect the terrain and proprioceptive information such as the agent’s joint speeds.
Compared to the low dimensional CartPoleSwingUp, BipedalWalker-v2 has a non-trivial number of possible connections, requiring WANNs to be selective about the wiring of inputs to outputs.
The third, CarRacing-v0 [openai\_gym](#bib.bib10) , is a top-down car racing from pixels environment.
A car, controlled with three continuous commands (gas, steer, brake) is tasked with visiting as many tiles as possible of a randomly generated track within a time limit.
Following the approach described in [ha2018worldmodels](#bib.bib38) , we delegate the pixel interpretation element of the task to a pre-trained variational autoencoder [kingma2013auto](#bib.bib49) ; [vae\_dm](#bib.bib90) (VAE) which compresses the pixel representation to 16 latent dimensions. These dimensions are given as input to the network.
The use of learned features tests the ability of WANNs to learn abstract associations rather than encoding explicit geometric relationships between inputs.
Hand-designed networks found in the literature [ha2018designrl](#bib.bib37) ; [ha2018worldmodels](#bib.bib38) are compared to the best weight agnostic networks found for each task. We compare the mean performance over 100 trials under 4 conditions:
1. Random weights: individual weights drawn from U(−2,2);
2. Random shared weight: a single shared weight drawn from U(−2,2);
3. Tuned shared weight: the highest performing shared weight value in range (−2,2);
4. Tuned weights: individual weights tuned using population-based REINFORCE [williams1992simple](#bib.bib115) .
| Swing Up |
| |
| --- |
| Random Weights |
|
| |
| --- |
| Random Shared Weight |
|
| |
| --- |
| Tuned Shared Weight |
|
| |
| --- |
| Tuned Weights |
|
| WANN | 57 ± 121 | 515 ± 58 | 723 ± 16 | 932 ± 6 |
| Fixed Topology | 21 ± 43 | 7 ± 2 | 8 ± 1 | 918 ± 7 |
| Biped |
| |
| --- |
| Random Weights |
|
| |
| --- |
| Random Shared Weight |
|
| |
| --- |
| Tuned Shared Weight |
|
| |
| --- |
| Tuned Weights |
|
| WANN | -46 ± 54 | 51 ± 108 | 261 ± 58 | 332 ± 1 |
| Fixed Topology | -129 ± 28 | -107 ± 12 | -35 ± 23 | 347 ± 1 [ha2018designrl](#bib.bib37) |
| CarRacing |
| |
| --- |
| Random Weights |
|
| |
| --- |
| Random Shared Weight |
|
| |
| --- |
| Tuned Shared Weight |
|
| |
| --- |
| Tuned Weights |
|
| WANN | -69 ± 31 | 375 ± 177 | 608 ± 161 | 893 ± 74 |
| Fixed Topology | -82 ± 13 | -85 ± 27 | -37 ± 36 | 906 ± 21 [ha2018worldmodels](#bib.bib38) |
Table 1:
Performance of Randomly Sampled and Trained Weights for Continuous Control Tasks
We compare the mean performance (over 100 trials) of the best weight agnostic network architectures found with standard feed forward network policies commonly used in previous work (i.e. [ha2018designrl](#bib.bib37) ; [ha2018worldmodels](#bib.bib38) ).
The intrinsic bias of a network topology can be observed by measuring its performance using a shared weight sampled from a uniform distribution.
By tuning this shared weight parameter we can measure its maximum performance.
To facilitate comparison to baseline architectures we also conduct experiments where networks are allowed unique weight parameters and tuned.
The results are summarized in Table [1](#S4.T1 "Table 1 ‣ 4 Experimental Results ‣ Weight Agnostic Neural Networks").333We conduct several independent search runs to measure variability of results in Supplementary Materials.
In contrast to the conventional fixed topology networks used as baselines, which only produce useful behaviors after extensive tuning, WANNs perform even with random shared weights.
Though their architectures encode a strong bias toward solutions, WANNs are not completely independent of the weight values – they do fail when individual weight values are assigned randomly.
WANNs function by encoding relationships between inputs and outputs, and so while the importance of the magnitude of the weights is not critical, their consistency, especially consistency of sign, is. An added benefit of a single shared weight is that it becomes trivial to tune this single parameter, without requiring the use of gradient-based methods.
The best performing shared weight value produces satisfactory if not optimal behaviors: a balanced pole after a few swings, effective if inefficient gaits, wild driving behaviour that cuts corners.
These basic behaviors are encoded entirely within the architecture of the network.
And while WANNs are able to perform without training, this predisposition does not prevent them from reaching similar state-of-the-art performance when the weights are trained.
\includegraphics
[width=1]img/swing.pdf
Figure 4:
Development of Weight Agnostic topologies over time
Generation 8: An early network which performs poorly with nearly all weights.
Generation 32: Relationships between the position of the cart and velocity of the pole are established. The tension between these relationships produces both centering and swing-up behavior.
Generation 128: Complexity is added to refine the balancing behavior of the elevated pole.
As the networks discovered are small enough to interpret, we can derive insights into how they function by looking at network diagrams (See Figure [4](#S4.F4 "Figure 4 ‣ 4 Experimental Results ‣ Weight Agnostic Neural Networks")).
Examining the development of a WANN which solves CartPoleSwingUp is also illustrative of how relationships are encoded within an architecture.
In the earliest generations the space of networks is explored in an essentially random fashion.
By generation 32, preliminary structures arise which allow for consistent performance: the three inverters applied to the x position keep the cart from leaving the track. The center of the track is at 0, left is negative, right is positive.
By applying positive force when the cart is in a negative position and vice versa a strong attractor towards the center of the track is encoded.
The interaction between the regulation of position and the Gaussian activation on dθ is responsible for the swing-up behavior, also developed by generation 32.
At the start of the trial the pole is stationary: the Gaussian activation of dθ is 1 and force is applied.
As the pole moves toward the edge the nodes connected to the x input, which keep the cart in the center, begin sending an opposing force signal.
The cart’s progress toward the edge is slowed and the change in acceleration causes the pole to swing, increasing dθ and so decreasing the signal that is pushing the cart toward the edge.
This slow down causes further acceleration of the pole, setting in motion a feedback loop that results in the rapid dissipation of signal from dθ.
The resulting snap back of the cart towards the center causes the pole to swing up.
As the pole falls and settles the same swing up behavior is repeated, and the controller is rewarded whenever the pole is upright.
As the search process continues, some of these controllers linger in the upright position longer than others, and by generation 128, the lingering duration is long enough for the pole to be kept balanced.
Though this more complicated balancing mechanism is less reliable under variable weights than the swing-up and centering behaviors, the more reliable behaviors ensure that the system recovers and tries again until a balanced state is found. Notably, as these networks encode relationships and rely on tension between systems set against each other, their behavior is consistent with a wide range of shared weight values.
For video demonstrations of the policies learned at various developmental phases of the weight agnostic topologies, please refer to the [supplementary website](http://%5Cwebsiteurl).
WANN controllers for BipedalWalker-v2 and CarRacing-v0 (Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Weight Agnostic Neural Networks"), page 1) are likewise remarkable in their simplicity and modularity.
The biped controller uses only 17 of the 25 possible inputs, ignoring many LIDAR sensors and knee speeds.
The WANN architecture not only solves the task without training the individual weights, but uses only 210 connections, an order of magnitude fewer than commonly used topologies (2804 connections used in the SOTA baseline [ha2018designrl](#bib.bib37) ).
The architecture which encodes stable driving behavior in the car racer is also striking in its simplicity (Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Weight Agnostic Neural Networks"), right).
Only a sparsely connected two layer network and a single weight value is required to encode competent driving behavior.
While the SOTA baseline [ha2018worldmodels](#bib.bib38) also gave the hidden states of a pre-trained RNN world model, in addition to the VAE’s representation to its controller, our controller operates on the VAE’s latent space alone. Nonetheless, it was able to develop a feed-forward controller that achieves a comparable score. Future work will explore removing the feed-forward constraint from the search to allow WANNs to develop recurrent connections with memory states.
Classification
Promising results on reinforcement learning tasks lead us to consider how widely a WANN approach can be applied. WANNs which encode relationships between inputs are well suited to RL tasks: low-dimensional inputs coupled with internal states and environmental interaction allow discovery of reactive and adaptive controllers. Classification, however, is a far less fuzzy and forgiving problem. A problem where, unlike RL, design of architectures has long been a focus. As a proof of concept, we investigate how WANNs perform on the MNIST dataset [lecun1998mnist](#bib.bib55) , an image classification task which has been a focus of human-led architecture design for decades [lecun1998gradient](#bib.bib57) ; [chollet2015keras](#bib.bib14) ; [sabour2017dynamic](#bib.bib93) .
Even in this high-dimensional classification task WANNs perform remarkably well (Figure [5](#S4.F5 "Figure 5 ‣ 4 Experimental Results ‣ Weight Agnostic Neural Networks"), Left). Restricted to a single weight value, WANNs are able to classify MNIST digits as well as a single layer neural network with thousands of weights trained by gradient descent. The architectures created still maintain the flexibility to allow weight training, allowing further improvements in accuracy.
\includegraphics
[width=1]img/mnist.pdf
Figure 5:
Classification Accuracy on MNIST.
Left:
WANNs instantiated with multiple weight values acting as an ensemble perform far better than when weights are sampled at random, and as well as a linear classifier with thousands of weights.
Right: No single weight value has better accuracy on all digits. That WANNs can be instantiated as several different networks has intriguing possibilities for the creation of ensembles.
It is straight forward to sweep over the range of weights to find the value which performs best on the training set, but the structure of WANNs offers another intriguing possibility.
At each weight value the prediction of a WANN is different.
On MNIST this can be seen in the varied accuracy on each digit (Figure [5](#S4.F5 "Figure 5 ‣ 4 Experimental Results ‣ Weight Agnostic Neural Networks"), Right).
Each weight value of the network can be thought of as a distinct classifier, creating the possibility of using one WANN with multiple weight values as a self-contained ensemble.
In the simplest ensemble approach, a collection of networks are created by instantiating a WANN with a range of weight values.
Each of these networks is given a single vote, and the ensemble classifies samples according to the category which received the most votes.
This approach yields predictions far more accurate than randomly selected weight values, and only slightly worse than the best possible weight.
That the result of this naive ensemble is successful is encouraging for experimenting with more sophisticated ensemble techniques when making predictions or searching for architectures.
5 Discussion and Future Work
-----------------------------
In this work we introduced a method to search for simple neural network architectures with strong inductive biases for performing a given task.
Since the networks are optimized to perform well using a single weight parameter over a range of values, this single parameter can easily be tuned to increase performance.
Individual weights can be further tuned from a best shared weight. The ability to quickly fine-tune weights is useful in few-shot learning [finn2017model](#bib.bib21) and may find uses in continual lifelong learning where agents continually acquire, fine-tune, and transfer skills throughout their lifespan [parisi2018continual](#bib.bib83) .
Early works [hinton1996learning](#bib.bib44) ; [smith1987learning](#bib.bib99) connected the evolution of weight tolerant networks to the Baldwin effect [baldwin1896new](#bib.bib2) .
To develop a single WANN capable of encoding many different useful tasks in its environment, one might consider developing a WANN with a strong intrinsic bias for intrinsic motivation [schmidhuber1991curious](#bib.bib94) ; [oudeyer2007intrinsic](#bib.bib82) ; [pathak2017curiosity](#bib.bib84) , and continuously optimize its architecture to perform well at pursuing novelty in an open-ended environment [lehman2008exploiting](#bib.bib60) . Such a WANN might encode, through a curiosity reward signal, a multitude of skills that can easily be fine-tuned for a particular downstream task in its environment later on.
While our approach learns network architectures of increasing complexity by adding connections, network pruning approaches find new architectures by their removal. It is also possible to learn a pruned network capable of performing additional tasks without learning weights [mallya2018piggyback](#bib.bib69) . A concurrent work [zhou2019deconstructing](#bib.bib118) to ours learns a supermask where the sub-network pruned using this mask performs well at image recognition even with randomly initialized weights – it is interesting that their approach achieves a similar range of performance on MNIST compared to ours. While our search method is based on evolution, future work may extend the approach by incorporating recent ideas that formulate architecture search in a differentiable manner [liu2018darts](#bib.bib65) to make the search more efficient.
The success of deep learning is attributed to our ability to train the weights of large neural networks that consist of well-designed building blocks on large datasets, using gradient descent. While much progress has been made, there are also limitations, as we are confined to the space of architectures that gradient descent is able to train. For instance, effectively training models that rely on discrete components [jang2016categorical](#bib.bib47) ; [graves2014neural](#bib.bib31) or utilize adaptive computation mechanisms [graves2016adaptive](#bib.bib30) with gradient-based methods remain a challenging research area. We hope this work will encourage further research that facilitates the discovery of new architectures that not only possess inductive biases for practical domains, but can also be trained with algorithms that may not require gradient computation.
That the networks found in this work do not match the performance of convolutional neural networks is not surprising. It would be an almost embarrassing achievement if they did. For decades CNN architectures have been refined by human scientists and engineers – but it was not the reshuffling of existing structures which originally unlocked the capabilities of CNNs. Convolutional layers were themselves once novel building blocks, building blocks with strong biases toward vision tasks, whose discovery and application have been instrumental in the incredible progress made in deep learning.
The computational resources available to the research community have grown significantly since the time convolutional neural networks were discovered. If we are devoting such resources to automated discovery and hope to achieve more than incremental improvements in network architectures, we believe it is also worth experimenting with new building blocks, not just their arrangements.
Acknowledgments
---------------
We would like to thank Douglas Eck, Geoffrey Hinton, Anja Austermann, Jeff Dean, Luke Metz, Ben Poole, Jean-Baptiste Mouret, Michiel Adriaan Unico Bacchiani, Heiga Zen, and Alex Lamb for their thoughtful feedback. Experiments in this work were conducted with the support of Google Cloud. |
368076ec-9dc3-4d4e-aded-a11f1b2f3569 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Rationality Meetup Vienna
Discussion article for the meetup : Rationality Meetup Vienna
WHEN: 17 October 2015 03:00:00PM (+0200)
WHERE: Kaisermühlenstraße 24, Vienna
Event on Facebook: https://www.facebook.com/events/1626674870922372/ You need to be part of this group to see it: https://www.facebook.com/groups/rationalityvienna/
Location http://web.student.tuwien.ac.at/~e0326238/rationality_meetup/directions.html !Alternative meetup room this time!
Topic: Not defined yet
Discussion article for the meetup : Rationality Meetup Vienna |
1109a728-69ff-4e4a-a359-59262474a11a | trentmkelly/LessWrong-43k | LessWrong | AutoInterpretation Finds Sparse Coding Beats Alternatives
Produced as part of the SERI ML Alignment Theory Scholars Program - Summer 2023 Cohort
Huge thanks to Logan Riggs, Aidan Ewart, Lee Sharkey, Robert Huben for their work on the sparse coding project, Lee Sharkey and Chris Mathwin for comments on the draft, EleutherAI for compute and OpenAI for GPT-4 credits.
Summary
We use OpenAI's automatic interpretation protocol to analyse features found by dictionary learning using sparse coding and compare the interpretability scores thereby found to a variety of baselines.
We find that for both the residual stream (layer 2) and MLP (layer 1) of Eleuther's Pythia70M, sparse coding learns a set of features that is superior to all tested baselines, even when removing the bias and looking just at the learnt directions. In doing so we provide additional evidence to the hypothesis that NNs should be conceived as using distributed representations to represent linear features which are only weakly anchored to the neuron basis.
Figure 1: Top-and-random interpretability scores for features found by sparse coding, compared with a variety of baselines, with means and 95% confidence intervals around mean.
As before these results are still somewhat preliminary and we hope to expand on them and make them more robust over the coming month or two, but we hope people find them fruitful sources of ideas. If you want to discuss, feel free to message me or head over to our thread in the EleutherAI discord.
All code available at the github repo.
Methods
Sparse Coding
The feature dictionaries learned by sparse coding are learnt by simple linear autoencoders with a sparsity penalty on the activations. For more background on the sparse coding approach to feature-finding see the Conjecture interim report that we're building from, or Robert Huben's explainer.
Automatic Interpretation
As Logan Riggs' recently found, many of the directions found through sparse coding seem highly interpretable, but we wanted a way to quantify this, and make sur |
430d794d-e250-4457-a12b-779dd16af281 | trentmkelly/LessWrong-43k | LessWrong | How to use and interpret activation patching
None |
cd530ccb-dde9-44d7-83f6-1dc593e9f16f | trentmkelly/LessWrong-43k | LessWrong | Should you change where you live? (also - a worked “how to solve a question”)
Original post: http://bearlamp.com.au/should-you-change-where-you-live-a-worked-how-to-solve-a-question/
It's not a hard question, but it potentially has a lot of moving parts.
This post is going to be two in one. The first is whether you should move geography, the second is how I go through a problem. In red.
First up - brainstorm ideas:
Meta-level
* Make a list of relevant factors of staying or going (then google it to check for any I missed)
* Decision making strategies
Object level
* Why did this come up?
* Make a list of things you wish were different with how you live now
* Make a list of features of your current geography
* Make a list of features that you know of in other geographies that you would like to obtain.
relevant factors
* Family
* Friends
* Relationships
* Population density
* Population diversity breakdown
* Local safety (bad neighbourhoods)
* Religion
* Politics, country-scale political climate
* Government structure, public welfare
* public transport
* cost of living
* qualify of food, variation of food, culture of food.
* exchange rate
* Normal temperature/weather/climate (rain, cloud, sun, heat, cold, wind)
* Extreme weather risk. (i.e. cyclones, earthquakes, bushfires)
* Work (and commute)
* Salary
* Pollution (Light, Air or noise pollution)
* Residential or natural environment, parks, trees, tall buildings...
* Ocean (if you swim, or like beach culture)
* Landmarks
* native plants, animals, diseases.
* culture, art.
* difficulty in moving
* opportunity/plans
* language barrier
* public amenities
* Education
* Dwelling -> upsize, downsize, sidegrade...
* Sleep - are you getting enough of it
* postage costs
Why did this come up?
Usually you are thinking of a seeding factor; a reason why you are moving. It will help to keep it in mind when planning other things. Is there something wrong or pushing you out, is the current location stagnant, is something pulling you? Write that down. Keep |
7591f7f2-e1d5-4c94-9406-8be6994d69b8 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Scalar reward is not enough for aligned AGI
This post was authored by Peter Vamplew and Cameron Foale (Federation University), and Richard Dazeley (Deakin University)
**Introduction**
Recently some of the most well-known researchers in reinforcement learning Silver, Singh, Precup and Sutton published a paper entitled [Reward is Enough](https://www.sciencedirect.com/science/article/pii/S0004370221000862), which proposes the reward-is-enough hypothesis: “Intelligence, and its associated abilities, can be understood as subserving the maximisation of reward by an agent acting in its environment". Essentially, they argue that the overarching goal of maximising reward is sufficient to explain all aspects of natural and artificial intelligences.
Of specific interest to this forum is the contention that suitably powerful methods based on maximisation of a scalar reward (as in conventional reinforcement learning) provide a suitable pathway for the creation of artificial general intelligence (AGI). We are concerned that the promotion of such an approach by these influential researchers increases the risk of development of AGI which is not aligned with human interests, and this led us to work with a team of collaborators on a recent pre-print [Scalar Reward is Not Enough](https://arxiv.org/abs/2112.15422) which argues against the assumption made by the reward-is-enough hypothesis that scalar rewards are sufficient to underpin intelligence.
The aim of this post is to provide an overview of our arguments as they relate to the creation of aligned AGI. In this post we will focus on reinforcement learning methods, both because that is the main approach mentioned by Silver et al, and also because it is our own area of expertise. However the arguments apply to any form of AI based on maximisation of a numeric measure of reward or utility.
**Does aligned AGI require multiple objectives?**
In discussing the development of intelligence, Silver et al argue that complex, general intelligence may arise from the combination of complex environments and simple reward signals, and provide the following illustrative example:
> “*For example, consider a signal that provides +1 reward to the agent each time a round-shaped pebble is collected. In order to maximise this reward signal effectively, an agent may need to classify pebbles, to manipulate pebbles, to navigate to pebble beaches, to store pebbles, to understand waves and tides and their effect on pebble distribution, to persuade people to help collect pebbles, to use tools and vehicles to collect greater quantities, to quarry and shape new pebbles, to discover and build new technologies for collecting pebbles, or to build a corporation that collects pebbles.*”
>
>
Silver et al present the ability of a reward-maximising agent to develop such wide-ranging, impactful behaviours on the basis of a simple scalar reward as a positive feature of this approach to developing AI. However we were struck by the similarity between this scenario and the infamous [paper-clip](https://www.lesswrong.com/tag/paperclip-maximizer) [maximiser](https://www.decisionproblem.com/paperclips/index2.html)thought experiment which has been widely discussed in the AI safety literature. The dangers posed by unbounded maximisation of a simple objective are well-known in this community, and it is concerning to see them totally overlooked in a paper advocating RL as a means for creating AGI.
We have previously argued that the creation of human-aligned AI is an [inherently multiobjective problem](https://link.springer.com/article/10.1007/s10676-017-9440-6). By incorporating rewards for other objectives in addition to the primary objective (such as making paperclips or collecting rocks), the designer of an AI system can reduce the likelihood of unsafe behaviour arising. In addition to safety objectives, there may be many other aspects of desirable behaviour which we wish to encourage an AI/AGI to adopt – for example, adhering to legal frameworks, societal norms, ethical guidelines, etc. Of course, it may not be possible for an agent to simultaneously maximise all of these objectives (for example, sometimes illegal actions may be required in order to maximise safety; different ethical frameworks may be in disagreement in particular scenarios), and so we contend that it may be necessary to incorporate concepts from multiobjective decision-making in order to manage trade-offs between conflicting objectives.
Our collaborator Ben Smith and his colleagues Roland Pihlakas and Robert Klassert recently posted to this forum an excellent review of [the benefits of multiobjective approaches to AI safety](https://www.alignmentforum.org/posts/i5dLfi6m6FCexReK9/a-brief-review-of-the-reasons-multi-objective-rl-could-be), so rather than duplicating those arguments here we refer the reader to that post, and to our [prior paper](https://link.springer.com/article/10.1007/s10676-017-9440-6).
For the remainder of this post we assume that the aim is to create AGI which takes into account both a primary objective (such as collecting rocks) along with one or more alignment objectives, and we will consider the extent to which technical approaches based on either scalar or vector rewards (with a separate element for each objective) may achieve that goal.
**Does the reward-is-enough hypothesis only consider scalar rewards?**
A question which has arisen in previous online discussion of our pre-print is whether we are creating a straw-man in contending that Silver et al assume scalar rewards. While it is true that the reward-is-enough hypothesis (as quoted above) does not explicitly state any restriction on the nature of the reward, this is specified later in Section 2.4 (“*A reward is a special **scalar** observation Rt*"), and Silver et al also refer to Sutton’s [*reward hypothesis*](http://incompleteideas.net/rlai.cs.ualberta.ca/RLAI/rewardhypothesis.html) which states that “*all of what we mean by goals and purposes can be well thought of as maximization of the expected value of the cumulative sum of a received **scalar** signal (reward)*”. This view is also reflected in our prior conversations with the authors; following a presentation we gave on multiobjective reinforcement learning in 2015, Richard Sutton stated that “there is no such thing as a multiobjective problem”.
In Reward is Enough, Silver et al do acknowledge that multiple objectives may exist, but contend that these can be represented via a scalar reward signal (“*…a scalar reward signal can represent weighted combinations of objectives…*”). They also argue that scalar methods should be favoured over explicitly multiobjective approaches as they represent a more general solution (although we would argue that the multiobjective case with n>=1 objectives is clearly more general than the special case of scalar reward with n=1):
> *“Rather than maximising a generic objective defined by cumulative reward, the goal is often formulated separately for different cases: for example multi-objective learning, risk-sensitive objectives, or objectives that are specified by a human-in-the-loop …While this may be appropriate for specific applications, a solution to a specialised problem does not usually generalise; in contrast a solution to the general problem will also provide a solution for any special cases.”*
>
>
**Can a scalar reward adequately represent multiple objectives?**
As mentioned earlier, Silver al state that “*a scalar reward signal can represent weighted combinations of objectives”.*While this statement is true, the question remains as to whether this representation is sufficient to support optimal decision-making with regards to those objectives.
While Silver et al don’t clearly specify the exact nature of this representation, the mention of “weighted combinations” suggests that they are referring to a linear weighted sum of the objectives. This is the most widely adopted approach to dealing with multiple objectives in the scalar RL literature – for example, common benchmarks such as gridworlds often provide a reward of -1 on each time step to encourage rapid movement towards a goal state, and a separate negative reward for events such as colliding with walls or stepping in puddles). The assumption is that selecting an appropriate set of weights will allow the agent to discover a policy that produces the optimal trade-off between the two objectives. However, this may not be the case as or some environments the expected returns for certain policies may mean that there is [no set of weights that lead to the discovery of those policies](https://link.springer.com/chapter/10.1007/978-3-540-89378-3_37); if those policies do in fact correspond to the best compromise between the objectives then we may be forced to settle for a sub-optimal solution. Even if a policy is theoretically findable, identifying the weights that achieve this is non-trivial as the relationship between the weights and the returns achieved by the agent can be highly non-linear. We observed this in our recent work on [minimising side-effects](https://www.sciencedirect.com/science/article/abs/pii/S0952197621000336); for some problems tuning the agent to find a safe policy was much more difficult and time-consuming for single-objective agents than for multi-objective agents.
It is possible to address these issues by using a non-linear function to scalarise the objectives. However, this introduces several new problems. We will illustrate these by considering a scalarisation function that aims to maximise one objective subject to reaching a threshold on a second objective. For simplicity we will also assume an episodic task (i.e. one with a defined, reachable end state). On each timestep the performance with respect to each objective can be calculated, but cannot immediately be scalarised (as, for example, the reward with respect to an objective may first reach the threshold, before subsequently falling below it in later time-steps due to negative rewards). So, these per-objective values must be accumulated external to the agent, and the agent will receive zero reward on all time-steps except at the end of the episode, when the true scalarised value can be calculated and provided as a scalar reward. This has a number of implications:
* It results in a very sparse reward signal which will make learning slow. While this does not directly contradict the reward-is-enough hypothesis (which says nothing about learning *efficiency*), it is nevertheless an argument against adopting this approach, particularly for complex tasks.
* For stochastic environments, the optimal decision at any point in time depends not only on the current state of the environment but also on the per-objective rewards received so far. To ensure convergence of RL algorithms it becomes necessary to use an [augmented state](https://link.springer.com/article/10.1007/s00521-021-05859-1), which concatenates the environmental state with the vector of accumulated rewards. If we are providing the agent with this information as part of its state representation, then surely it makes sense to also leverage this information more directly rather than restricting it to maximising the sparse scalar reward?
* For non-linear scalarisations, we can distinguish between two different criteria which an agent [is seeking to optimise](https://arxiv.org/abs/2103.09568). An agent learning from a pre-scalarised reward can only aim to optimise with regards to the Expected Scalarised Return (ESR). However in some circumstances it may be more appropriate to maximise the Scalarised Expected Return (SER). An agent using a pre-scalarised reward cannot do this, whereas a multiobjective agent that has learned directly from vector rewards can. This distinction becomes particularly important in the context of multi-agent systems, where [the optimal solution may be completely different depending on which of these criteria each agent is aiming to maximise](https://arxiv.org/abs/2112.06500).
We also note that regardless of the nature of the scalarisation being used, an agent provided with a pre-scalarised scalar reward can only learn with regards to its current reward signal. If that signal changes, then the agent must discard its current policy and learn a new policy with regards to the modified reward. It may be possible to retain some of the prior learning, as a model-based agent could retain its model of state-transition dynamics; nevertheless it would still require considerable observations of the new reward signal before it could adapt its policy to that reward, and it will be performing sub-optimally prior to that point in time.
We believe this limitation of scalar RL is of particular significance in the context of aligned AGI. Human preferences are not fixed, either at an individual or societal level, and can in fact change very rapidly – the events of recent years have illustrated this with regards to issues such as climate change, animal rights and public health. It is vital that an aligned AGI can adapt rapidly, preferably immediately, to such changes, and methods based on scalar rewards cannot provide this level of flexibility.
Therefore, we advocate for the alternative approach of multiobjective reinforcement learning (MORL). In MORL the agent is directly provided with the vector rewards associated with whatever objectives we wish it to consider, and also with a function that defines the utility of the end-user of the system. Through experience the agent learns the vector-valued returns associated with actions and uses these in conjunction with the utility function to derive the policy that it follows. Note that while this also involves a scalarisation step in which vector values are converted to scalars, this occurs internally within the agent after it has been provided with the vector reward, whereas in scalar RL the scalarisation is external to the agent, before the reward is provided.
This change, when coupled with some algorithmic modifications, provides the following potential benefits (for more details on MORL algorithms and these benefits, please refer to this [survey](https://www.jair.org/index.php/jair/article/view/10836) and this [practical guide](https://arxiv.org/abs/2103.09568)):
* The utility function can be either linear or non-linear, as required to best match the user’s true utility, thereby placing no restrictions on the policies that can be discovered.
* The agent may optimise either for ESR or SER optimality as required to suit the context in which it is being applied.
* The agent can make use of the dense reward information provided by vector rewards, rather than sparser scalar rewards.
* By using off-policy learning, the agent can learn not just the policy that is optimal with regards to its current utility function, but also policies that would be optimal for all possible definitions of this function (this is known as [multi-policy learning](https://www.jmlr.org/papers/volume15/vanmoffaert14a/vanmoffaert14a.pdf)). This allows for rapid adaptation should this function alter (e.g. to reflect changes in the laws, norms or ethics of our society). It also facilitates human-in-the-loop decision making – rather than pre-defining the utility function, the agent can learn all possible optimal policies (or a subset thereof) and present them to [a human decision-maker who selects the policy](http://www.cs.ox.ac.uk/people/shimon.whiteson/pubs/roijersewrl15.pdf) which will actually be performed).
* In our opinion defining a vector-valued reward and associated utility function is more intuitive than attempting to construct a complicated scalar reward signal that correctly captures all the desired objectives. Therefore, this approach should reduce the risk of [reward misspecification](https://openai.com/blog/faulty-reward-functions/). It also enables the possibility of using several independently specified reward signals in order to further reduce this risk.
* The use of vector-values and multi-policy learning facilitates the production of more informative explanations than are possible within a scalar agent, as the agent can directly provide information about the trade-offs being made between objectives (e.g. “I chose to go left as this would only take a few seconds longer than going right, and reduced the chance of collision with a human by 50%”), whereas a scalar agent will be unable to extract such information from its pre-scalarised reward
**Are there downsides to multiobjective approaches?**
Multiobjective RL will generally be more computationally expensive than scalar RL (particularly scalar RL using linear scalarisation), and this difference will be greater as the number of objectives increases. However, the use of multi-policy learning offers the potential for substantial improvement in sample efficiency for environments where the reward/utility is subject to change.
In addition, MORL is a newer and less extensively studied area of research compared to scalar RL (more details on this below). As such MORL algorithms, and particularly the implementation of those algorithms, have yet to be as thoroughly optimised as their scalar RL counterparts, particularly in the area of deep RL for high-dimensional state spaces.
**What is the state of research into multiobjective aligned AI?**
Academic research in computational RL dates back at least 40 years and has seen steady growth since the turn of the century, with particularly rapid expansion over the last five years or so. Meanwhile there was minimal research in MORL prior to around 2013. While there has been [a rapid growth in papers in/on MORL since then, this has largely been matched by growth in RL research in general, with the result that MORL still constitutes less than 1% of RL research](https://twitter.com/amp1874/status/1478319755027095552). Given the importance of MORL to aligned AI, this is a situation which we hope will change in coming years.
Not surprisingly given the relatively short history of MORL research, there has so far been limited work in applications of MORL to aligned AI. However, research in this area has started to emerge in recent years, addressing varied topics such as reward misspecification, learning of ethics or norms, and interpretability and explainability. We have provided a short list of recommended reading at the end of this post, and we refer the reader again to the [post of Smith, Pihlakas and Klassert](https://www.alignmentforum.org/posts/i5dLfi6m6FCexReK9/a-brief-review-of-the-reasons-multi-objective-rl-could-be) for an overview of work in this area.
**Conclusion**
In conclusion we believe that multiobjective approaches are essential to developing human-aligned agents, and that the use of scalar rewards to create AGI is insufficient to adequately address the issue of alignment. We find Silver et al’s advocacy for this scalar approach concerning as they are highly influential researchers, and this article could lead to other researchers adopting this approach, which we believe has inherent risks which are not acknowledged in Reward is Enough. Our concerns are heightened by the fact the authors are based at DeepMind which, given their resources, would appear to be positioned as one of the most likely sources for the emergence of AGI.
**Recommended Reading**
***For general background on MORL:***
[Roijers, D. M., Vamplew, P., Whiteson, S., & Dazeley, R. (2013). A survey of multi-objective sequential decision-making. Journal of Artificial Intelligence Research, 48, 67-113.](https://www.jair.org/index.php/jair/article/view/10836)
[Hayes, C. F., Rădulescu, R., Bargiacchi, E., Källström, J., Macfarlane, M., Reymond, M., ... & Roijers, D. M. (2021). A practical guide to multi-objective reinforcement learning and planning. arXiv preprint arXiv:2103.09568.](https://arxiv.org/abs/2103.09568)
***MORL for Aligned AI***
[Vamplew, P., Dazeley, R., Foale, C., Firmin, S., & Mummery, J. (2018). Human-aligned artificial intelligence is a multiobjective problem. Ethics and Information Technology, 20(1), 27-40.](https://link.springer.com/article/10.1007/s10676-017-9440-6)
[Noothigattu, R., Bouneffouf, D., Mattei, N., Chandra, R., Madan, P., Varshney, K., ... & Rossi, F. (2018). Interpretable multi-objective reinforcement learning through policy orchestration. arXiv preprint arXiv:1809.08343.](https://arxiv.org/abs/1809.08343)
[Horie, N., Matsui, T., Moriyama, K., Mutoh, A., & Inuzuka, N. (2019). Multi-objective safe reinforcement learning: the relationship between multi-objective reinforcement learning and safe reinforcement learning. Artificial Life and Robotics, 24(3), 352-359.](https://link.springer.com/article/10.1007/s10015-019-00523-3)
[Noothigattu, R., Bouneffouf, D., Mattei, N., Chandra, R., Madan, P., Varshney, K. R., ... & Rossi, F. (2019). Teaching AI agents ethical values using reinforcement learning and policy orchestration. IBM Journal of Research and Development, 63(4/5), 2-1.](https://ieeexplore.ieee.org/abstract/document/8827920)
[Zhan, H., & Cao, Y. (2019). Relationship explainable multi-objective reinforcement learning with semantic explainability generation. arXiv preprint arXiv:1909.12268.](https://arxiv.org/pdf/1909.12268)
[Sukkerd, R., Simmons, R., & Garlan, D. (2020). Tradeoff-focused contrastive explanation for MDP planning. In 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (pp. 1041-1048). IEEE.](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9223614)
[Vamplew, P., Foale, C., Dazeley, R., & Bignold, A. (2021). Potential-based multiobjective reinforcement learning approaches to low-impact agents for AI safety. Engineering Applications of Artificial Intelligence, 100, 104186.](https://www.sciencedirect.com/science/article/abs/pii/S0952197621000336)
[Smith, B. J., Klassert, R., & Pihlakas, R. (2021) Soft maximin approaches to Multi-Objective Decision-making for encoding human intuitive values, MODeM Workshop](https://drive.google.com/file/d/1qufjPkpsIbHiQ0rGmHCnPymGUKD7prah/view)
[Huang, S., Abdolmaleki, A., Vezzani, G., Brakel, P., Mankowitz, D. J., Neunert, M., ... & Riedmiller, M. (2021, June). A Constrained Multi-Objective Reinforcement Learning Framework. In 5th Annual Conference on Robot Learning.](https://openreview.net/forum?id=YeJaZBXlhPX)
[Rodriguez-Soto, M., Lopez-Sanchez, M., & Rodriguez-Aguilar, J. A. (2021). Guaranteeing the Learning of Ethical Behaviour through Multi-Objective Reinforcement Learning.](https://www.iiia.csic.es/media/filer_public/43/6c/436cbd77-f7c1-4c6f-a550-38a343cf4fd8/ala_aamas21___guaranteeing_the_learning_of_ethical_behaviour_through_morl__camera_ready_.pdf)
[Peschl, M., Zgonnikov, A., Oliehoek, F. A., & Siebert, L. C. (2021). MORAL: Aligning AI with Human Norms through Multi-Objective Reinforced Active Learning. arXiv preprint arXiv:2201.00012.](https://arxiv.org/pdf/2201.00012) |
c4c5d0c3-b672-4cc6-a2d1-6e1ec4ecde6c | StampyAI/alignment-research-dataset/special_docs | Other | Should Artificial Intelligence Governance be Centralised? Six Design Lessons from History
Should Artificial Intelligence Governance be Centralised?
Six Design Lessons from History
Peter Cihon,1Matthijs M. Maas,1,2Luke Kemp3
1Centre for the Governance of AI, Future of Humanity Institute, University of Oxford
2Centre for International Law, Conflict, and Crisis, Faculty of Law, University of Copenhagen
3Centre for the Study of Existential Risk, University of Cambridge
petercihon@gmail.com; matthijs.maas@jur.ku.dk; ltk27@cam.ac.uk
Abstract
Can effective international governance for artificial intelli-
gence remain fragmented, or is there a need for a centralised
international organisation for AI? We draw on the history of
other international regimes to identify advantages and disad-
vantages in centralising AI governance. Some considerations,
such as efficiency and political power, speak in favour of cen-
tralisation. Conversely, the risk of creating a slow and brittle
institution speaks against it, as does the difficulty in securing
participation while creating stringent rules. Other considera-
tions depend on the specific design of a centralised institution.
A well-designed body may be able to deter forum shopping
and ensure policy coordination. However, forum shopping
can be beneficial and a fragmented landscape of institutions
can be self-organising. Centralisation entails trade-offs and
the details matter. We conclude with two core recommenda-
tions. First, the outcome will depend on the exact design of a
central institution. A well-designed centralised regime cover-
ing a set of coherent issues could be beneficial. But locking-in
an inadequate structure may pose a fate worse than fragmen-
tation. Second, for now fragmentation will likely persist. This
should be closely monitored to see if it is self-organising or
simply inadequate.
In 2018, Canada and France proposed the International
Panel on Artificial Intelligence (IPAI). After being rejected
at the G7 in 2019, negotiations shifted to the OECD and
are presently ongoing. As the field of AI continues to ma-
ture and spark public interest and legislative concern (Per-
rault et al. 2019), the priority of governance initiatives re-
flects the growing appreciation that AI has the potential
to dramatically change the world for both good and ill
(Dafoe 2018). Research into AI governance needs to keep
pace with policy-making and technological change. Choices
made today may have long-lasting impacts on policymak-
ers’ ability to address numerous AI policy problems (Cave
and´Oh´Eigeartaigh 2019). Effective governance can promote
safety, accountability, and responsible behaviour in the re-
search, development, and deployment of AI systems.
AI governance research to date has predominantly fo-
cused at the national and sub-national levels (Scherer 2016;
Equal contribution, order selected at random.
Working paper, last updated 15 December, 2019.Calo 2017; Gasser and Almeida 2017). Research into AI
global governance remains relatively nascent (though see
Butcher and Beridze 2019). Kemp et al. (2019) have called
for specialised, centralised intergovernmental agencies to
coordinate policy responses globally, and others have called
for a centralised ‘International Artificial Intelligence Organ-
isation’ (Erdelyi and Goldsmith 2018). Others favour more
decentralised arrangements based around ‘Governance Co-
ordinating Committees’, global standards, or existing in-
ternational law instruments (Wallach and Marchant 2018;
Cihon 2019; Kunz and ´O h´Eigeartaigh 2020).
No one has taken a step back to inquire: what would the
history of multilateralism suggest, given the state and tra-
jectory of AI? Should AI governance be centralised or de-
centralised? ‘Centralisation’, in this case, refers to the de-
gree to which the coordination, oversight and/or regulation
of a set of AI policy issues or technologies are housed under
a single (global) institution. This is not a binary choice; it
exists across a spectrum. Trade is highly (but not entirely)
centralised under the umbrella of the WTO. In contrast, en-
vironmental multilateralism is much more decentralised.
In this paper, we seek to help the community of re-
searchers, policymakers, and other stakeholders in AI gover-
nance understand the advantages and disadvantages of cen-
tralisation. This may help set terms and catalyse a much-
needed debate to inform governance design decisions. We
first outline the international governance challenges of AI,
and review early proposed global responses. We then draw
on existing literatures on regime fragmentation (Biermann et
al. 2009) and ‘regime complexes’ (Orsini, Morin, and Young
2013) to assess considerations in centralising the interna-
tional governance of AI. We draw on the history of other
international regimes1to identify considerations that speak
in favour or against designing a centralised regime com-
plex for AI. We conclude with two recommendations. First,
many trade-offs are contingent on how well-designed a cen-
tral body would be. An adaptable, powerful institution with
a manageable mandate would be beneficial, but a poorly de-
1A regime is a set of ‘implicit or explicit principles, norms, rules
and decision-making procedures around which actors’ expectations
converge in a given area of international relations’(Krasner 1982,
186).
signed body could prove a fate worse than fragmentation.
Second, for now there should be structured monitoring of ex-
isting efforts to see whether they are they are self-organising
or insufficient.
The State of AI Governance
There is debate as to whether AI is a single policy area or
a diverse series of issues. Some claim that AI cannot be co-
hesively regulated as it is a collection of disparate technolo-
gies, with different risk profiles across different applications
and industries (Stone et al. 2016). This is an important but
not entirely convincing objection. The technical field has no
settled definition for ‘AI’,2so it should be no surprise that
defining a manageable scope for AI governance will be dif-
ficult. Yet this challenge is not unique to AI: definitional is-
sues abound in areas such as environment and energy, but
have not figured prominently in debates over centralisation.
Indeed, energy and environment ministries are common at
the domestic level, despite problems in setting the bound-
aries of natural systems and resources.
We contend that there are numerous ways in which a
centralised body could be designed for AI governance. For
example, a centralised approach could carve out a subset
of interlinked AI issues to cover. This could involve fo-
cusing on the potentially high-risk applications of AI sys-
tems, such as AI-enabled cyberwarfare, lethal autonomous
weapons (LAWS), other advanced military applications, or
high-level machine intelligence (HLMI).3Another approach
could govern underlying hardware resources (e.g. large-
scale compute resources) or software libraries. We are ag-
nostic on the specifics of how centralisation could or should
be implemented, and instead focus on the costs and benefits
of centralisation in the abstract. The exact advantages and
disadvantages of centralisation are likely to vary depend-
ing on the institutional design. This is an important area
of further study, particularly once more specific proposals
are put forward. However, such work must be grounded in
a higher-level investigation of trade-offs in centralising AI
governance. It is this foundational analysis which we seek
to offer.
Numerous AI issues could benefit from international co-
operation. These include the potentially catastrophic appli-
cations mentioned above. It also encompasses more quo-
tidian uses, such as AI-enabled cybercrime; human health
applications; safety and regulation of autonomous vehicles
and drones; surveillance, privacy and data-use; and labour
automation. Multilateral coordination could also use AI to
tackle other global problems such as climate change (see
Rolnick et al. 2019), or help meet the Sustainable Develop-
ment Goals (see Vinuesa et al. 2019). This is an illustrative
but not exhaustive list of international AI policy issues.
Global regulation across these issues is currently nascent,
2We define ‘AI’ as any machine system capable of function-
ing ‘appropriately and with foresight in its environment’ (Nilsson
2009, 13; see too Dafoe 2018, 5).
3‘High-level machine intelligence’ has been defined as ‘unaided
machines [that] can accomplish every task better and more cheaply
than human workers’ (Grace et al. 2018, 1).fragmented, yet evolving. A wide range of UN institutions
have begun to undertake some activities on AI (ITU 2019).
The bodies covering AI policy issues range across existing
organisations including the International Labour Organisa-
tion (ILO), International Telecommunication Union (ITU),
and UNESCO. This is complemented by budding regula-
tions and working groups across the International Organ-
isation for Standardisation (ISO), International Maritime
Organisation (IMO), International Civil Aviation Organisa-
tion (ICAO), and other bodies, as well as treaty amend-
ments, such as the updating of the Vienna Convention on
Road Traffic to encompass autonomous vehicles (Kunz and
´O h´Eigeartaigh 2020), or the ongoing negotiations at the
Convention on Certain Conventional Weapons (CCW) on
LAWS. The UN System Chief Executives Board (CEB) for
Coordination through the High-Level Committee on Pro-
grammes has been empowered to draft a system-wide AI
capacity building strategy. The High-level Panel on Digi-
tal Cooperation has also sought to gather together common
principles and ideas for AI relevant areas (High-Level Panel
2019). Whether these initiatives bear fruit, however, remains
questionable, as many of the involved international organ-
isations have fragmented membership, were not originally
created to address AI issues and lack effective enforcement
or compliance mechanisms (see Morin et al. 2019, 2).
The trajectory of these initiatives matters. How gover-
nance is initially organised can be central to its success. De-
bates over centralisation and fragmentation are long-lasting
and prominent with good reason. How we structure inter-
national cooperation can be critical to its success, and most
other debates often implicitly hinge on structural debates.
Fragmentation and centralisation exist across a spectrum. In
a world lacking a global government, some fragmentation
will always prevail. But the degree to which it prevails is
crucial. We define ‘fragmentation’ as a patchwork of inter-
national organisations and institutions which focus on a par-
ticular issue area, but differ in scope, membership and often
rules. Our other definitions for key terms are provided be-
low in Table 1. These definitions and terms are by nature
normatively loaded. For example, some may find ‘decen-
tralisation’ to be a positive framing, while others may see
‘fragmentation’ to possess negative connotations. Recognis-
ing this, we seek to use these terms in a primarily analytical
manner. We will use findings from each of these theoretical
areas to inform our discussion of the history of multilateral
fragmentation and its implications for AI governance.
Centralisation Criteria:
A History of Governance Trade-Offs
In the following discussion, we explore a series of consid-
erations for AI governance. Political power and efficient
participation support centralisation. The breadth vs. depth
dilemma, as well as slowness and brittleness support decen-
tralisation. Policy coordination and forum shopping consid-
erations can cut both ways.
Table 1: Definition of Key Governance Terms
Term Definition
Fragmentation or
DecentralizationA patchwork of international organisations and institutions which focus on a particular issue area but
differ in scope, membership and often rules (Biermann et al. 2009, 16).
Centralisation An arrangement in which governance of a particular issue lies under the authority of a single umbrella
body. This is a spectrum from highly centralised (the role of the WTO in trade) to decentralised (the
plethora of multilateral environmental agreements).
Regime Complex A network of three or more international regimes on a common issue area. These should have over-
lapping membership and cause potentially problematic interactions (Orsini, Morin, and Young 2013,
29).
1. Political Power
Regimes embody power in their authority over rules, norms,
and knowledge beyond states’ exclusive control. A more
centralised regime will see this power concentrated among
fewer institutions. A centralised, powerful architecture is
likely to be more influential against competing international
organisations and with constituent states (see Orsini, Morin,
and Young 2013, 36-7).
An absence of centralised authority to manage regime
complexes has presented challenges in the past. Across
the proliferation of Multilateral Environmental Agreements
(MEAs) there is no requirement to cede responsibility to
the UN Environmental Programme in the case of overlap
or competition. This has led to turf wars, inefficiencies and
even contradictory policies (Biermann et al. 2009). One of
the most notable examples is that of hydrofluorocarbons
(HFCs). HFCs are potent greenhouse gases, and yet their
use has been encouraged by the Montreal Protocol since
1987 as a replacement for ozone-depleting substances. This
has only recently been resolved via the 2015 Kigali Amend-
ment to the Montreal Protocol, which itself has a prolonged
implementation period. Similarly, the internet governance
regime complex is diffuse. Multiple venues and norms gov-
ern technical standards, cyber crime, human rights, and war-
fare (Nye 2014). Although the UN Internet Governance Fo-
rum (IGF) discusses several cross-cutting issues, it does not
have a mandate to consolidate even principles, let alone ne-
gotiate new formal agreements (Mueller, Mathiason, and
Klein 2007).
In contrast, other centralised regimes have supported ef-
fective management. For example, under the umbrella of
the WTO, norms such as the most-favoured-nation princi-
ple (equally treating all WTO member states) principle have
become the bedrock of international trade. The power and
track-record of the WTO is so formidable that it has created
achilling effect: the fear of colliding with WTO norms and
rules has led environmental treaties to self-censor and ac-
tively avoid discussing or deploying trade-related measures
(Eckersley 2004). Both the chilling effect and the remark-
ably powerful application of common trade rules were not
a marker of international trade until the establishment of the
WTO. The power of these centralised body has stretched be-
yond influencing states in the domain of trade, to mouldingrelated issues.
Political power offers further benefits in governing emerg-
ing technologies that are inherently uncertain in both sub-
stance and policy impact. Uncertainty in technology and
preferences has been associated with some increased cen-
tralisation in regimes (Koremenos, Lipson, and Snidal
2001a). There may also be benefits to housing a foresight
capacity within the regime complex, to allow for acceler-
ated or even proactive efforts (Pauwels 2019). Centralised
AI governance would enable an empowered organisation to
more effectively use foresight analyses to inform policy re-
sponses across the regime complex.
2. Supporting Efficiency & Participation
Decentralised AI governance may undermine efficiency and
inhibit participation. States often create centralised regimes
to reduce costs, for instance by eliminating duplicate efforts,
yielding economies of scale within secretariats, and sim-
plifying participation (Esty and Ivanova 2002). Conversely,
fragmented regimes may force states to spread resources and
funding over many distinct institutions, particularly limiting
the ability of less well-resourced states or parties to partici-
pate fully (Morin et al. 2019, 2).
Historically, decentralised regimes have presented cost
and related participation concerns. Hundreds of related and
sometimes overlapping international environmental agree-
ments can create ‘treaty congestion’ (Anton 2012). This
complicates participation and implementation for both de-
veloped and developing nations (Esty and Ivanova 2002).
This includes costs associated with travel to different fo-
rums, monitoring and reporting for a range of different bod-
ies, and duplication of effort by different secretariats (ibid.).
Similar challenges are already being witnessed in AI
governance. Simultaneous and globally distributed meet-
ings pose burdensome participation costs for civil society.
Fragmented organisations must duplicatively invest in high-
demand machine learning subject matter experts to inform
their activities. Centralisation would support institutional ef-
ficiency and participation.
3. Slowness & Brittleness of Centralised Regimes
One potential problem of centralisation lies in the relatively
slow process of establishing centralised institutions, which
may often be outpaced by the rate of technological change.
Another challenge lies in centralised institutions’ brittleness
after they are established, i.e., their vulnerability to regula-
tory capture, or failure to react to changes in the problem
landscape.
Establishing new international institutions is often a slow
process. For example, the Kyoto Protocol took three years
of negotiations to create and then another eight to enter into
force. This becomes even more onerous with higher partici-
pation and stakes. Under the GATT, negotiations for a 26%
cut in tariffs between 19 countries took 8 months in 1947.
The Uruguay round, beginning in 1986, took 91 months
to achieve a tariff reduction of 38% between 125 parties
(Martin and Messerlin 2007). International law has been
quick to respond to technological changes in some cases,
and delayed in others (Picker 2001, 184). Decentralised ef-
forts may prove quicker to respond to complex, ‘transver-
sal’ issues, if they rely more on informal institutions with
a smaller but like-minded membership (Morin et al. 2019,
2-3). Centralised AI governance may be particularly vulner-
able to sparking lengthy negotiations, because progress on
centralised regimes for new technologies tends to be hard if
a few states hold clearly unequal stakes in the technology,
or if there are significant differences in information and ex-
pertise among states or between states and private industry
(Picker 2001, 187-94). Both these conditions closely match
the context of AI technology. Moreover, because AI tech-
nology develops rapidly, such slow implementation of rules
and principles could lead to certain actors taking advantage
by setting de facto arrangements or extant state practice.
Even after its creation, a centralised regime can be brittle ;
the very qualities that provide it with political power may
exacerbate the adverse effects of regulatory capture; the fea-
tures that ensure institutional stability, may also mean that
the institution cannot adapt quickly to unanticipated outside
stressors outside its established mission. The regime might
break before it bends. The first potential risk is regulatory
capture. Given the high profile of AI issue areas, political in-
dependence is paramount. However, as illustrated by numer-
ous cases, including undue corporate influence in the WHO
during the 2009 H1N1 pandemic (Deshman 2011), no insti-
tution is fully immune to regime capture, and centralisation
may reduce the costs of lobbying, making capture easier by
providing a single locus of influence. On the other hand, a
regime complex comprising many parallel institutions could
find itself vulnerable to capture by powerful actors, who are
better positioned than smaller parties to send representatives
to every forum.
Moreover, centralised regimes entail higher stakes. Many
issues are in a single basket and thus failure is more likely
to be severe if it does occur. International institutions can be
notoriously path-dependent and thus fail to adjust to chang-
ing circumstances, as seen with the ILO’s considerable dif-
ficulties in reforming its participation and rulemaking pro-
cesses in the 1990s (Baccaro and Mele 2012). The public
failure of a flagship global AI institution or governance ef-
fort could have lasting political repercussions. It could stran-
gle subsequent, more well-conceived proposals in the crib,
by undermining confidence in multilateral governance gen-erally or capable governance on AI issues specifically. By
contrast, for a decentralized regime complex to similarly
fail, all of its component institutions would need to simul-
taneously ‘break’ or fail to innovate at once.4A centralised
institution that does not outright collapse, but which remains
ineffective, may become a blockade against better efforts.
Ultimately, brittleness is not an inherent weakness of
centralisation–and indeed depends far more on institutional
design details. There may be strategies to ‘innovation-
proof’(Maas 2019) governance regimes. Periodic renegoti-
ation, modular expansion, ‘principles based regulation’, or
sunset clauses can also support ongoing reform (see gener-
ally Marchant, Allenby, and Herkert 2011, 29-30). Such ap-
proaches have often proved successful historically, due par-
tially to decentralisation but, importantly, also to particular
designs.
4. The Breadth vs. Depth Dilemma
Pursuing centralisation may create an overly high threshold
that limits participation. All multilateral agreements face a
trade-off between having higher participation (‘breadth’) or
stricter rules and greater ambition of commitments (‘depth’).
The dilemma is particularly evident for centralised institu-
tions that are intended to be powerful and require strong
commitments from states.
However, the opposite dynamics of sacrificing depth for
breadth can also pose risks. The 2015 Paris Agreement on
Climate Change was significantly watered down to allow for
the legal participation of the US. Anticipated difficulties in
ratification through the Senate led to negotiators opting for
a ‘pledge and review’ structure with few legal obligations.
Thus, the US could join simply through the approval of the
executive (Kemp 2017). In this case, inclusion of the US
(which at any rate proved temporary) came at the cost of sig-
nificant cutbacks on the demands which the regime sought
to make of all parties.
In contrast, decentralisation could allow for major powers
to engage in relevant regulatory efforts where they would
be deterred from signing up to a more comprehensive pack-
age. This has precedence in the history of climate gover-
nance. Some claim that the US-led Asia-Pacific Partnership
on Clean Development and Climate helped, rather than hin-
dered climate governance, as it bypassed UNFCCC dead-
lock and secured non-binding commitments from actors not
bound by the Kyoto Protocol (Zelli 2011, 259-60).
This matters, as buy-in may prove a thorny issue for AI
governance. The actors who lead in AI development include
powerful states that are potentially most adverse to global
regulation in this area. They have thus far proved recalci-
trant in the global governance of security issues such as anti-
personnel mines or cyberwarfare. In response, some have al-
ready recommended a critical-mass governance approach to
the military uses of AI. Rather than seeking a comprehen-
sive agreement, devolving and spinning off certain compo-
nents into separate treaties (e.g. for LAWS testing standards;
liability and responsibility; and limits to operational usage)
4We thank Nicolas Mo ¨es for this observation.
could instead allow for the powerful to ratify and move for-
ward at least a few of those options (Weaver 2014).
The breadth vs. depth dilemma is a trade-off in multi-
lateralism generally. However, it is a particularly pertinent
challenge for centralisation. The key benefit of a centralised
body would be to be a powerful anchor that ensures pol-
icy coordination and coherence, without suffering fragmen-
tation in membership. This dilemma suggests it is unlikely
to have both. It will likely need to restrict membership to
have teeth, or lose its teeth to have wide participation. A crit-
ical mass approach may be able to deliver the best of both
worlds. Nonetheless these dilemma poses a difficult knot for
centralisation to unravel.
5. Forum Shopping
Forum shopping may help or hinder AI governance, de-
pending on the particular circumstances. Fragmentation en-
ables actors to choose where and how to engage. Such
‘forum shopping’ may take one of several forms: moving
venues, abandoning one organisation, creating new venues,
and working across multiple organisations to sew compe-
tition between them (Braithwaite and Drahos 2000). Even
when there is a natural venue for an issue, actors have rea-
sons to forum-shop. For instance, states may look to max-
imise their influence, appease domestic pressure (Pekkanen,
Sol´ıs, and Katada 2007) and placate constituents by shifting
to a toothless forum (Helfer 2004).
The ability to successfully forum-shop depends on an
actor’s power. Most successful examples of forum-shifting
have been led by the US (Braithwaite and Drahos 2000). In-
tellectual property rights in trade, for example, was subject
to prolonged, contentious forum shopping. Developed states
resisted attempts of the UN Conference on Trade and Devel-
opment (UNCTAD) to address intellectual property rights
in trade by trying to push them onto the World Intellectual
Property Organization (WIPO) (ibid., 566) and then subse-
quently to the WTO (Helfer 2004), overruling protests from
developing states. Outcomes often reflect power, but weak
states and non-state actors can also pursue forum shopping
strategies in order to challenge the status-quo (Jupille, Mat-
tli, and Snidal 2013).
Forum shopping may help or hurt governance. This is ev-
ident in current efforts to regulate LAWS. While the Group
of Governmental Experts has made some progress, on the
whole the CCW has taken slow deliberations on LAWS.
In response, frustrated activists have threatened to shift to
another forum, as happened with the Ottawa Treaty that
banned landmines (Delcker 2019). This strategy could catal-
yse progress, but also brings risks of further forum shop-
ping and weak or unimplemented agreements. Forum shop-
ping may similarly delay, stall, or weaken regulation of time-
sensitive AI policy issues, including potential future HLMI
development. It is plausible that leading AI firms also have
sway when they elect to participate in some venues but not
others. The OECD Expert Group on AI included representa-
tives from leading firms, whereas engagement at UN efforts,
including the Internet Governance Forum (IGF), do not ap-
pear to be similarly prioritised. A decentralised regime will
enable forum shopping, though further work is needed to de-termine whether this will help or hurt governance outcomes
on the whole.
6. Policy Coordination
There are good reasons to believe that either centralisa-
tion or fragmentation could enhance coordination. A cen-
tralised regime can enable easier coordination both across
and within policy issues, acting as a focal point for states.
Others argue that this is not always the case, and that frag-
mentation can mutually supportive and even more creative
institutions.
Centralisation reduces the occurrence of conflicting man-
dates and enables communication. These are the ingredi-
ents for policy coherence. As noted previously, the WTO
has been remarkably successful in ensuring coherent policy
and principles across the realm of trade, and even into other
areas such as the environment.
However, fragmented regimes can often act as complex
adaptive systems. Political requests and communication be-
tween secretariats often ensures bottom-up coordination
even in the absence of centralisation. Multiple organisations
have sought to reduce greenhouse gas emissions within their
respective remits, often at the behest of the UNFCCC Con-
ference of Parties. When effective, bottom-up coordination
can slowly evolve into centralisation. Indeed, this was the
case for the GATT and numerous regional, bilateral and sec-
toral trade treaties, which all coalesced together into the
WTO. While this organic self-organisation has occurred, it
has taken decades, with forum shopping and inaction pre-
vailing for many years.
Indeed, some have argued that decentralisation does not
just deliver ‘good enough’ global governance (Patrick 2014)
that reflects a demand for diverse principles in a multi-
polar world. Instead, they argue ‘polycentric’ governance
approaches (Ostrom 2010) may be more creative and le-
gitimate than centrally coordinated regimes. Arguments in
favour of polycentricity include the notion that it enables
governance initiatives to begin having impacts at diverse
scales, and that it enables experimentation with diverse poli-
cies and approaches, learning from experience and best prac-
tices (ibid., 552). Consequently, these scholars assume “that
the invisible hand of a market of institutions leads to a better
distribution of functions and effects” (Zelli and van Asselt
2013, 7).
It is unclear if the different bodies covering AI issues will
self-organise or collide. Many of the issues are interdepen-
dent and will need to be addressed in tandem. Some par-
ticular policy-levers, such as regulating computing power or
data, will impact almost all use areas, given that AI progress
and use is closely tied to such inputs. Numerous initiatives
on AI and robotics are displaying loose coordination (Kunz
and ´O h´Eigeartaigh 2020), but it remains uncertain whether
the virtues of a free market of governance will prevail here.
Great powers can exercise monopsony-like influence in fo-
rum shopping, and the supply of both computing power and
machine learning expertise are highly concentrated. In sum,
centralisation can reduce competition and enhance coordi-
nation, but it may suffocate the creative self-organisation of
more fragmented arrangements over time.
Discussion: What Would History Suggest?
A Summary of Considerations
The multilateral track record and peculiarities of AI yield
suggestions and warnings for the future. A centralised
regime could lower costs, support participation, and act as
a powerful new linchpin within the international system. Yet
centralisation presents risks for AI governance. It could sim-
ply produce a brittle dinosaur, of symbolic value but with lit-
tle meaningful impact on underlying political or technologi-
cal issues. A poorly executed attempt could lock-in a poorly
designed centralised body: a fate worse than fragmentation.
Accordingly, ongoing efforts at the UN, OECD, and else-
where could benefit from addressing the considerations pre-
sented in this paper, a summary of which is presented in Ta-
ble 2.
The Limitations of ‘Centralisation vs.
Decentralisation’ Debates
Structure is not a panacea. Specific provisions such as agen-
das and decision-making procedures matter greatly, as do the
surrounding politics. Underlying political will may be im-
pacted by framing or connecting policy issues (Koremenos,
Lipson, and Snidal 2001b, 770-1). The success of a regime
is not just a result of fragmentation, but of design details.
Moreover, institutions can be dynamic and broaden over
time by taking in new members, or deepen in strengthening
commitments. Successful multilateral efforts, such as trade
and ozone depletion, tend to do both. We are in the early
days of global AI governance. Decisions taken early on will
constrain and partially determine the future path. This de-
pendency can even take place across regimes. The Kyoto
Protocol was largely shaped by the targets and timetables
approach of the Montreal Protocol, which in turn drew from
the Convention on Long-range Transboundary Air Pollution.
The choices we make on governing short-term AI challenges
will likely shape the management of other policy issues in
the long term (Cave and ´Oh´Eigeartaigh 2019).
On the other hand, committing to centralisation, even if
successful, may amount to solving the wrong problem. The
problem may not be structural, but geopolitical. Centralisa-
tion could even exacerbate the problem by diluting scarce
political attention, incurring heavy transaction costs, and
shifting discussions away from bodies which have accumu-
lated experience and practice (Juma 2000). For example,
the Bretton Woods Institutions of the IMF and World Bank,
joined later by the WTO, are centralised regimes that engen-
der power. However, those institutions had the express sup-
port of the US and may have simply manifested state power
in institutional form. Efforts to ban LAWS and create a cy-
berwarfare convention have been broadly opposed by states
with an established technological superiority in these areas
(Eilstrup-Sangiovanni 2018). A centralised regime may not
unpick these power struggles, but just add a layer of com-
plexity.
HLMI: An Illustrative Example
The promise and peril of centralisation may differ by policy
issue. HLMI is one issue that is markedly unique: HLMI isdistinct in its risk profile, uncertainty and linkage to other AI
policy issues, which can make it an interesting case through
which to explore the tradeoffs of a centralised AI governance
regime in a fresh context. While timelines are uncertain, the
creation of HLMI systems is the express goal of various
present-day projects (Baum 2017), and the future develop-
ment of an unaligned HLMI could have catastrophic con-
sequences (GCF 2018). Creation of a controlled HLMI by
a subset of private or public actors could lead to grotesque
power imbalances. It could also exacerbate other AI policy
problems, such as labour automation and advanced military
applications (by providing a coordinating platform, strate-
gic advisor, or in nuclear command and control). Address-
ing many shorter-term issues, such as cyberwarfare, and im-
proving global governance more broadly will have signifi-
cant impacts on HLMI development and deployment (Kunz
and ´O h´Eigeartaigh 2020). There is also marked uncertainty
about whether HLMI can be created in a single system, what
it would look like, and significant disagreement as to how
long this would take (Grace et al. 2018).
Below in Table 3 we provide a brief application of our
framework to HLMI. It shows that centralisation of gover-
nance is particularly promising for HLMI. This is due to
its neglect, stakes, scope, and need for informed, preemp-
tive policy. Many other issues, such as the advanced mili-
tary applications of AI systems, may similarly be more pro-
ductively or safely subjected to cooperation, i.e., centralised.
These cases may prove a broad rule about the value of cen-
tralisation, or they may be outliers. Rather than any AI gov-
ernance blueprint, our trade-offs framework provides one
way of thinking through the costs and benefits of centralising
governance either on or across specific AI issues. Identify-
ing areas which are more easily defined and garner the bene-
fits of centralised regulation provides an organic approach to
thinking through what subset of topics an AI umbrella body
could cover. HLMI, appears to one of the most appealing
candidates.
Lessons and Conclusions
Our framework provides a tool for policy-makers to inform
their decisions of whether to join, create, or forgo new insti-
tutions that tackle AI policy problems. For instance, the re-
cent choice of whether to support the creation of an indepen-
dent IPAI involved these considerations. Following the US
veto, ongoing negotiations for its replacement at the OECD
may similarly benefit from their consideration. For now, it is
worth closely monitoring the current landscape of AI gov-
ernance to see if it exhibits enough policy coordination and
political power to effectively deal with mounting AI policy
problems. While there are promising initial signs (Kunz and
´O h´Eigeartaigh 2020) there are also already growing gover-
nance failures in LAWS, cyberwarfare, and elsewhere.
We outline a suggested monitoring method in Table 4.
There are three key areas to monitor: conflict, coordina-
tion, and catalyst. First, conflict should measure the extent to
which principles, rules, regulations and other outcomes from
different bodies in the AI regime complex undermine or con-
tradict each other or are in tension either in their principles
Table 2: Summary of Considerations
Consideration Implications for
CentralisationHistorical Example AI Policy Issue Example
Political Power Pro Shaping other regimes: WTO has cre-
ated a chilling effect, where the fear of
conflicting with WTO norms and rules
has led environmental treaties to self-
censor to avoid addressing trade-related
measures.Empowered regime using foresight on
AI systems development can address
policy problems more quickly.
Efficiency
& ParticipationPro Decentralisation raises inefficiencies
and barriers: The proliferation of
multilateral environmental agreements
poses costs and barriers to participation
in negotiation, implementation, and
monitoring.AI companies engage and share exper-
tise, but if not checked by adversarial
civil society, there is a greater concern
of regulatory capture; increased costs un-
dermine civil society participation.
Slowness
& BrittlenessCon Slowness: Under the GATT, 1947 tariff
negotiations among 19 countries took 8
months. The Uruguay round, beginning
in 1986, took 91 months for 125 parties
to agree on reductions.
Regulatory capture: WHO accused
offor undue corporate influence in
response to 2009 H1N1 pandemic.
Pathology of path-dependence: Failed
ILO reform attempts.Process of centralised regime can
not keep pace with high speed of AI
progress and deployment, may miss the
window of opportunity.
Advanced AI issues (especially HLMI)
may rapidly shift the risk landscape or
problem portfolio of AI, beyond the nar-
row scope of an older institutional man-
date
Breadth vs.
Depth DilemmaCon Watering down: 2015 Paris Agreement
suggest attempts to ‘get all parties on
board’ to centralized regime may result
in significant watering down.Attempts to effectively govern the
military uses of AI have been resisted by
the most powerful states.
Attempted to create an IPAI have
been resisted by the US and shifted to a
smaller forum (the OECD).
Forum
ShoppingDepends on
designPower predicts outcomes:
Intellectual property in trade shifted
from UNCTAD to WIPO to WTO, with
developed countries getting their way.
Accelerates progress: NGOs and
some states shifted discussions of
anti-personnel mines ban away from
CCW, ultimately resulting in the Ottawa
Treaty.Governance of military AI systems is
fractured across CCW, multiple GGEs.
This strategy may catalyze progress, but
brings risks of fracture.
Policy
CoordinationDepends on
designStrong, but delayed convergence:
Diverse regimes can coalesce into cen-
tralized regime, as seen with GATT and
numerous trade treaties coalescing into
the WTO, but doing so may take many
decades.Numerous AI governance initiatives dis-
play loose coordination, but it is unclear
if these initiatives can respond to policy
developments in a timely manner.
Table 3: An Application of the Framework to High-Level Machine Intelligence (HLMI)
Consideration HLMI
Political Power Uncertainty around HLMI development makes credible forecasting particularly important. Understand-
ing which inputs drive AI progress, and when and by whom HLMI could be created, is paramount to
ensuring safe development (see Dafoe 2018). This will require a coordinated effort to track and forecast
HLMI project efforts (see Baum 2017), as well as a politically empowered organisation to quickly act
upon this information.
The potentially catastrophic risks make the increased political power of a centralised institution
desirable. The creation of HLMI, if it can be done by a well-resourced actor, is a ‘free-driver’ issue. An
effective response needs to have the teeth to deter major players from acting unilaterally.
Efficiency
& ParticipationCentralisation would support economies of scale in expertise to support efficient governance. Given the
significant financial resources and infrastructure likely needed for such a project, a joint global effort
could be an efficient way to govern HLMI research.
Slowness
& BrittlenessIf short timelines (less than 10-15 years) are expected, the lengthy period to negotiate and create such a
body would be a critical weakness. If longer timelines are more likely, there should be sufficient time
to develop a centralised anchor institution.
Institutional capture is a concern given the abundance of wealthy corporate actors involved in
creating HLMI (Google, OpenAI, Microsoft). However, it is unclear if this would be more likely under
a centralised body.
Depth vs.
Breadth DilemmaThe limited scope of actors makes centralisation more feasible. Costs and requisite tacit knowledge may
restrict the development of HLMI to a few powerful players. The breadth vs. depth dilemma could be
avoided through a ‘minilateral’ or critical mass approach that initially involves only the few countries
that are capable of developing it, although there would be benefits to broadening membership, such as
legitimacy and fairness.
Forum Shopping A centralised body is well placed to prevent forum-shopping as there is currently no coverage of HLMI
development and deployment under international law.
Policy Coordina-
tionCoordination is key for HLMI. It has close connections to issues such as labour automation and au-
tomated cyberwarfare. The creation or use of HLMI is not directly regulated by any treaties or legal
instruments. This makes the creation of a new, dedicated institution to address cover it easier and less
unlikely to trigger turf wars. It also makes it less likely that the existing tapestry of international law can
quickly self-organise to cover HLMI.
Table 4: Regime Complex Monitoring Suggestions
Key Theme Questions Methods
Conflict To what extent are regimes’ principles and outputs in oppo-
sition over time? Expert and practitioner survey
Coordination Are regimes taking steps to complement each other? Network analysis
(e.g, citation network clustering and cen-
trality)
Catalyst Is the regime complex self-organizing to proactively fill gov-
ernance gaps?Natural Language Processing
(e.g., entailment and fact checking)
or goals. Second, coordination seeks to measure the proac-
tive steps that AI-related regimes take to work with each
other. This includes liaison relationships, joint initiatives, as
well as the extent to which their rules, outputs and princi-
ples tend to reinforce one another. Third, catalyst raises the
important question of governance gaps: is the regime com-
plex self-organising to proactively address international AI
policy problems? Numerous AI policy problems currently
have no clear coverage under international law, including
AI-enabled cyber warfare and HLMI. Whether this changes
is of vital importance.
These areas require investigation through multiple meth-
ods. Qualitative surveys of relevant organisations and actors
can yield data on expert perceptions of these questions. Sur-
veys can be augmented with quantitative methods, including
network analyses of the regime complex relations (Orsini,
Morin, and Young 2013, 32). Natural language processing
could be used to examine contradictions and similarities
between different regime outputs, e.g., statements, meeting
minutes, and more. Monitoring the outcomes of fragmenta-
tion can help to determine whether centralisation is needed.
One way forward would be to empower the OECD AI Policy
Observatory or the UN CEB to regularly review the moni-
toring outcomes. This could inform a democratic discussion
and decision of whether to centralise AI governance further.
Our framework and discussion may also be useful for
non-state actors. Researchers and leading AI firms can play
an important role in sharing technical expertise and inform-
ing forecasts of new policy problems on the horizon. The
considerations may benefit their decisions of where to en-
gage. Civil society has a key role as participants, watch-
dogs, and catalysts. For example, the Campaign to Stop
Killer Robots has sought to boost engagement and sup-
port for a LAWS ban within the CCW. Given prolonged
delays and a pessimistic outlook, some have articulated a
strategy of creating an entirely new forum for the ban, in-
spired by the Ottawa Treaty which outlawed landmines. Our
framework can help reveal the potential virtues (allowing for
progress while avoiding high-threshold deadlocks) and vices
(enabling forum shopping) of such an approach. It could
even help inform the structure of a future international in-
stitution, such as allowing for a modular, flexible structure
with ‘critical mass’ agreements. One cross-cutting consid-
eration is clear: a fractured regime sees higher participation
costs that may threaten to exclude many civil society organ-
isations altogether.
The international governance of AI is nascent and frag-
mented. Centralisation under a well-designed, modular,
‘innovation-proof’ framework organisation may be a desir-
able solution. However, such a move must be approached
with caution. How to define its scope and mandate is
one problem. Ensuring a politically-acceptable and well-
designed body is perhaps a more daunting one. It risks ce-
menting in place a fate worse than fragmentation. Moni-
toring conflict and coordination in the current AI regime
complex, and whether governance gaps are filled, is a pru-
dent way of knowing whether the existing structure can suf-
fice. For now we should closely watch the trajectory of both
AI technology and its governance initiatives to determinewhether centralisation is worth the risk.
Acknowledgements
The authors would like to express thanks to Seth Baum,
Haydn Belfield, Jessica Cussins-Newman, Martina Kunz,
Jade Leung, Nicolas Mo ¨es, Robert de Neufville, and Nicolas
Zahn for valuable comments. Any remaining errors are our
own. No conflict of interest is identified.
References
Anton, D. 2012. ’Treaty Congestion’ in International En-
vironmental Law. In Alam, S.; Bhuiyan, J. H.; Chowdhury,
T. M.; and Techera, E. J., eds., Routledge Handbook of In-
ternational Environmental Law . Routledge.
Baccaro, L., and Mele, V . 2012. Pathology of Path Depen-
dency? The ILO and the Challenge of New Governance. ILR
Review 65(2):195–224.
Baum, S. 2017. A Survey of Artificial General Intelligence
Projects for Ethics, Risk, and Policy. SSRN Scholarly Paper
ID 3070741, Social Science Research Network, Rochester,
NY .
Biermann, F.; Pattberg, P.; van Asselt, H.; and Zelli, F. 2009.
The Fragmentation of Global Governance Architectures: A
Framework for Analysis. Global Environmental Politics
9(4):14–40.
Braithwaite, J., and Drahos, P. 2000. Global Business Reg-
ulation . Cambridge University Press. Google-Books-ID:
DcEEW5OGWLcC.
Butcher, J., and Beridze, I. 2019. What is the state of ar-
tificial intelligence governance globally? The RUSI Journal
164(5-6):88–96.
Calo, R. 2017. Artificial Intelligence Policy: A Primer and
Roadmap. UC Davis Law Review 51:37.
Cave, S., and ´Oh´Eigeartaigh, S. S. 2019. Bridging near- and
long-term concerns about AI. Nature Machine Intelligence
1(1):5.
Cihon, P. 2019. Standards for AI Governance: International
Standards to Enable Global Coordination in AI Research &
Development. Technical Report, Center for the Governance
of AI, Future of Humanity Institute, Oxford.
Dafoe, A. 2018. AI Governance: A Research Agenda. Tech-
nical report, Center for the Governance of AI, Future of Hu-
manity Institute, Oxford.
Delcker, J. 2019. How killer robots overran the UN.
POLITICO .
Deshman, A. C. 2011. Horizontal Review between Inter-
national Organizations: Why, How, and Who Cares about
Corporate Regulatory Capture. European Journal of Inter-
national Law 22(4):1089–1113.
Eckersley, R. 2004. The Big Chill: The WTO and Multi-
lateral Environmental Agreements. Global Environmental
Politics 4(2):24–50.
Eilstrup-Sangiovanni, M. 2018. Why the World Needs an
International Cyberwar Convention. Philosophy & Technol-
ogy31(3):379–407.
Erdelyi, O. J., and Goldsmith, J. 2018. Regulating Artificial
Intelligence: Proposal for a Global Solution. In Proceedings
of the 2018 AAAI / ACM Conference on Artificial Intelli-
gence, Ethics and Society , 7.
Esty, D. C., and Ivanova, M. H. 2002. Revitalizing Global
Environmental Governance: A Function-Driven Approach.
In Esty, D. C., and Ivanova, M. H., eds., Global Environ-
mental Governance: Options & Opportunities . Yale School
of Forestry and Environmental Studies.
Gasser, U., and Almeida, V . A. 2017. A Layered Model for
AI Governance. IEEE Internet Computing 21(6):58–62.
GCF. 2018. Global Catastrophic Risks 2018. Technical
report, Global Challenges Foundation.
Grace, K.; Salvatier, J.; Dafoe, A.; Zhang, B.; and Evans, O.
2018. When will ai exceed human performance? evidence
from ai experts. Journal of Artificial Intelligence Research
62:729–754.
Helfer, L. 2004. Regime Shifting: The TRIPs Agreement
and New Dynamics of International Intellectual Property
Lawmaking. Yale Journal of International Law 29:1–83.
High-Level Panel, o. D. C. 2019. The age of digital interde-
pendence report. UN Secretary General .
ITU. 2019. United Nations Activities on Artificial Intelli-
gence (AI) 2019. Technical report, ITU.
Juma, C. 2000. Commentary: The Perils of Centralizing
Global Environmental Governance. Environment: Science
and Policy for Sustainable Development 42(9):44–45.
Jupille, J.; Mattli, W.; and Snidal, D. 2013. Institutional
Choice and Global Commerce . Cambridge: Cambridge Uni-
versity Press. OCLC: 900490808.
Kemp, L.; Cihon, P.; Maas, M. M.; Belfield, H.; Cremer, Z.;
Leung, J.; and ´O h´Eigeartaigh, S. 2019. UN High-level
Panel on Digital Cooperation: A Proposal for International
AI Governance.
Kemp, L. 2017. US-proofing the Paris Climate Agreement.
Climate Policy 17(1):86–101.
Koremenos, B.; Lipson, C.; and Snidal, D. 2001a. Ratio-
nal Design: Looking Back to Move Forward. International
Organization 55(4):1051–1082.
Koremenos, B.; Lipson, C.; and Snidal, D. 2001b. The Ra-
tional Design of International Institutions. International Or-
ganization 55(4):761–799.
Krasner, S. D. 1982. Structural Causes and Regime Con-
sequences: Regimes as Intervening Variables. International
Organization 36(2):185–205.
Kunz, M., and ´O h´Eigeartaigh, S. 2020. Artificial Intelli-
gence and Robotization. In Geiss, R., and Melzer, N., eds.,
Oxford Handbook on the International Law of Global Secu-
rity. Oxford University Press.
Maas, M. M. 2019. Innovation-Proof Governance for
Military AI? how I learned to stop worrying and love the
bot. Journal of International Humanitarian Legal Studies
10(1):129–157.
Marchant, G. E.; Allenby, B. R.; and Herkert, J. R. 2011.
The growing gap between emerging technologies and legal-ethical oversight: The pacing problem , volume 7. Springer
Science & Business Media.
Martin, W., and Messerlin, P. 2007. Why is it so difficult?
trade liberalization under the doha agenda. Oxford Review
of Economic Policy 23(3):347–366.
Morin, J.; Dobson, H.; Peacock, C.; Prys-Hansen, M.;
Anne, A.; B ´elanger, L.; Dietsch, P.; Fabian, J.; Kirton, J.;
Marchetti, R.; Romano, S.; Schreurs, M.; Silve, A.; and Val-
let, E. 2019. How Informality Can Address Emerging Is-
sues: Making the Most of the G7. Global Policy 10(2):267–
273.
Mueller, M.; Mathiason, J.; and Klein, H. 2007. The Internet
and Global Governance: Principles and Norms for a New
Regime. Global Governance (2):237–254.
Nilsson, N. J. 2009. The Quest for Artificial Intelligence .
Cambridge ; New York: Cambridge University Press, 1 edi-
tion edition.
Nye, J. S. 2014. The Regime Complex for Managing Global
Cyber Activities. Technical Report 1, Global Commission
on Internet Governance.
Orsini, A.; Morin, J.-F.; and Young, O. 2013. Regime Com-
plexes: A Buzz, a Boom, or a Boost for Global Governance?
Global Governance: A Review of Multilateralism and Inter-
national Organizations 19(1):27–39.
Ostrom, E. 2010. Polycentric systems for coping with col-
lective action and global environmental change. Global En-
vironmental Change 20(4):550–557.
Patrick, S. 2014. The Unruled World: The Case for Good
Enough Global Governance. Foreign Affairs 93(1):58–73.
Pauwels, E. 2019. The New Geopolitics of Converging
Risks: The UN and Prevention in the Era of AI. Technical
report, United Nations University - Centre for Policy Re-
search.
Pekkanen, S. M.; Sol ´ıs, M.; and Katada, S. N. 2007.
Trading Gains for Control: International Trade Forums and
Japanese Economic Diplomacy. International Studies Quar-
terly 51(4):945–970.
Perrault, R.; Shoham, Y .; Brynjolfsson, E.; Clark, J.;
Etchemendy, J.; Grosz, B.; Lyons, T.; Manyika, J.; Mishra,
S.; and Niebles, J. C. 2019. The AI Index 2019 Annual
Report. Technical report, AI Index Steering Committee,
Human-Centered AI Initiative, Stanford University, Stan-
ford, CA.
Picker, C. B. 2001. A View from 40,000 Feet: International
Law and the Invisible Hand of Technology. Cardozo Law
Review 23:151–219.
Rolnick, D.; Donti, P. L.; Kaack, L. H.; Kochanski, K.; La-
coste, A.; Sankaran, K.; Ross, A. S.; Milojevic-Dupont, N.;
Jaques, N.; Waldman-Brown, A.; Luccioni, A.; Maharaj, T.;
Sherwin, E. D.; Mukkavilli, S. K.; Kording, K. P.; Gomes,
C.; Ng, A. Y .; Hassabis, D.; Platt, J. C.; Creutzig, F.; Chayes,
J.; and Bengio, Y . 2019. Tackling Climate Change with
Machine Learning. arXiv:1906.05433 [cs, stat] . arXiv:
1906.05433.
Scherer, M. U. 2016. Regulating Artificial Intelligence
Systems: Risks, Challenges, Competencies, and Strategies.
Harvard Journal of Law & Technology (2).
Stone, P.; Brooks, R.; Brynjolfsson, E.; Calo, R.; Etzioni, O.;
Hager, G.; Hirschberg, J.; Kalyanakrishnan, S.; Kamar, E.;
Kraus, S.; Leyton-Brown, K.; Parkes, D.; Press, W.; Saxe-
nian, A.; Shah, J.; Tambe, M.; and Teller, A. 2016. Artificial
Intelligence and Life in 2030. Technical report, Stanford
University, Stanford, CA.
Vinuesa, R.; Azizpour, H.; Leite, I.; Balaam, M.; Dignum,
V .; Domisch, S.; Fell ¨ander, A.; Langhans, S.; Tegmark,
M.; and Nerini, F. F. 2019. The role of artificial intel-
ligence in achieving the Sustainable Development Goals.
arXiv:1905.00501 [cs] . arXiv: 1905.00501.
Wallach, W., and Marchant, G. E. 2018. An Agile Ethi-
cal/Legal Model for the International and National Gover-
nance of AI and Robotics. 7.
Weaver, J. F. 2014. Autonomous Weapons and International
Law: We Need These Three International Treaties to Govern
“Killer Robots”. Slate Magazine .
Zelli, F., and van Asselt, H. 2013. Introduction: The Institu-
tional Fragmentation of Global Environmental Governance:
Causes, Consequences, and Responses. Global Environmen-
tal Politics 13(3):1–13.
Zelli, F. 2011. The fragmentation of the global climate gov-
ernance architecture. Wiley Interdisciplinary Reviews: Cli-
mate Change 2(2):255–270. |
b78cde68-6971-4d54-8d48-9348123dd577 | trentmkelly/LessWrong-43k | LessWrong | Knox and Sollecito freed
See: You Be the Jury, The Amanda Knox Test
While we hear about Bayes' Theorem being under threat in some courts, it is nice to savor the occasional moment of rationality prevailing in the justice system, and of mistakes being corrected.
Congratulations to the Italian court system for successfully saying "Oops!"
Things go wrong in this world quite a bit, as we know. Sometimes it's appropriate to just say "hooray!" when they go right.
Discuss, or celebrate. |
d6f67d35-ca2b-46d6-843b-c057d471d9d0 | trentmkelly/LessWrong-43k | LessWrong | Napoleon stole the Roman Inquisition archives and investigated the Galileo case
Napoleon was openly in conflict with the pope, and wanted to discredit him to get a foothold on the French Catholic majority. |
327f64b2-78ee-4b7f-a22e-848f0ad276be | StampyAI/alignment-research-dataset/arbital | Arbital | Hypercomputer
A "hypercomputer" is an imaginary artifact required to answer some crisp question that can't be answered in the limit of arbitrarily large finite computers. For example, if you have a question that depends on a general solution to the [Halting Problem](https://en.wikipedia.org/wiki/Halting_problem), then we say that to solve this problem requires a "hypercomputer", and in particular, a level-1 halting oracle. (If you need to determine whether programs on level-1 halting oracles halt, you need a level-2 halting oracle, which we would also call a "hypercomputer".)
It seems exceptionally unlikely that hypercomputers will ever be discovered to be embedded into our physical universe. The term "hypercomputer" just exists as a label so we can say, "Supposing we had a hypercomputer and ran this (impossible) program, what would be the consequences?"
For some examples of conceptually illuminating code that would require a hypercomputer to actually run, see [https://arbital.com/p/11w](https://arbital.com/p/11w) and [https://arbital.com/p/11v](https://arbital.com/p/11v).
[Unbounded analysis](https://arbital.com/p/107) of agents sometimes invokes hypercomputers because this lets us talk about multiple agents with easy-to-describe knowledge relations to each other. [https://arbital.com/p/has-requisite](https://arbital.com/p/has-requisite) [https://arbital.com/p/!has-requisite](https://arbital.com/p/!has-requisite) In these cases, we're not trying to say that the relation between agents X and Y intrinsically requires them to have impossible powers of computation. We're just reaching for an unphysical scenario that happens to crisply encode inter-agent relations we find interesting for some reason, and allows these inter-agent relations to have consequences about which we can easily do proofs.
See also [the Wikipedia page on hypercomputation](https://en.wikipedia.org/wiki/Hypercomputation). |
dbfbdbc0-12ef-43a5-a099-784b4047fa39 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Mathematical Circuits in Neural Networks
*(*[*Also posted on LessWrong*](https://www.lesswrong.com/posts/kaR6EToDwjvkkDoFA/mathematical-circuits-in-neural-networks)*)*
*This is one of my final projects for the* [*Columbia EA Summer 2022 Project Based AI Safety Reading Group*](https://www.columbia-ea.org/groups/ai-safety) *(special thanks to facilitators Rohan Subramini and Gabe Mukobi). If you're curious you can find my other project* [*here*](https://forum.effectivealtruism.org/posts/ixa4mM9aYF4yyqj84/ai-safety-executive-summary)*.*
**Summary**
-----------
In this project, I:
1. Derive by hand the optimal configurations (architecture and weights) of "vanilla" neural networks ([multilayer perceptrons](https://en.wikipedia.org/wiki/Multilayer_perceptron); [ReLU](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)) activations) that implement basic mathematical functions (e.g. absolute value, minimum of two numbers, etc.)
2. Identify "features" and "circuits" of these networks that are reused repeatedly across networks modeling different mathematical functions
3. Verify these theoretical results empirically ([in code](https://github.com/sosier/Mathematical_Circuits_in_Neural_Nets/blob/master/Mathematical_Circuits.ipynb))
What follows is a brief introduction to this work. For full details, please see:
* The [linked video](https://www.youtube.com/watch?v=jGQN0TVCtMo) (also embedded at the bottom of this post)
* Or if you prefer to go at your own pace, [the slides](https://github.com/sosier/Mathematical_Circuits_in_Neural_Nets/blob/master/Mathematical_Circuits_in_Neural_Networks.pdf) I walk through in that video
**Motivation**
--------------
[Olah et al.](https://distill.pub/2020/circuits/zoom-in/) make three claims about the fundamental interpretability of neural networks:
[](https://user-images.githubusercontent.com/13408985/189792395-8c4ee31b-3d4b-42db-aa62-6a05e3ae6b0c.png)
They demonstrate these claims in the context of image models:
***Features / Circuits:***
[](https://user-images.githubusercontent.com/13408985/189792613-42663d32-3e48-4a3b-846d-331714dca639.png)
***Universality:***
[](https://user-images.githubusercontent.com/13408985/189792851-3a05d17b-cb22-4b7f-a6fd-09775510401a.png)
This work demonstrates the same concepts apply in the space of neural networks modeling basic mathematical functions.
**Results**
-----------
Specifically, I show that the optimal network for calculating the minimum of two arbitrary numbers is fully constructed from smaller "features" and "circuits" used across even simpler mathematical functions. Along the way, I explore:
* "Positiveness" and "Negativeness" Detectors
* Identity Circuits (i.e. f(x) = x)
* Negative Identity Circuits (i.e. f(x) = -x)
* Subtraction Circuits (i.e. f(x1, x2) = x1 - x2)
* "Greaterness" Detectors
* And More
***Minimum Network:***
[](https://user-images.githubusercontent.com/13408985/190928502-f908fead-78f7-4568-83f6-2b1d001fafe6.png)
I also demonstrate that each of these theoretical results hold in practice. [The code for these experiments](https://github.com/sosier/Mathematical_Circuits_in_Neural_Nets/blob/master/Mathematical_Circuits.ipynb) can be found on [the GitHub page for this project](https://github.com/sosier/Mathematical_Circuits_in_Neural_Nets).
**Full Details**
----------------
For full details, please see the [PDF presentation](https://github.com/sosier/Mathematical_Circuits_in_Neural_Nets/blob/master/Mathematical_Circuits_in_Neural_Networks.pdf) in the GitHub repo or watch the full video walkthrough: |
9b9bf8eb-fa36-4106-9fd9-e8ba3aeb9be9 | trentmkelly/LessWrong-43k | LessWrong | Manifund: 2023 in Review
Manifund is a new funding org that experiments with systems and software to support awesome projects. In 2023, we built a website (manifund.org) and donor ecosystem supporting three main programs: impact certificates, regranting, and an open call for applications. We allocated $2m across dozens of charitable projects, primarily in AI safety and effective altruism cause areas. Here’s a breakdown of what Manifund accomplished, our current strengths and weaknesses, and what we hope to achieve in the future.
If you like our work, please consider donating to Manifund. Donations help cover our salaries & operating expenses, and fund projects and experiments that institutional donors aren’t willing to back — often the ones that excite us most!
At a glance
Here are some high-level stats that provide a snapshot of our 2023 activities:
* $2.06M sent to projects: $2.012M to grants & $45K to impact certificates
* Of the totals above, $95K that went to grants and $40K that went to certs came from unaffiliated donors/investors, rather than regrantors.
* 88 projects were funded: 54 grants & 34 certs
* $2.22M has been deposited into Manifund, and $1.62M has been withdrawn so far.
* Below are the top cause areas of projects that got funded. Note that these are overlapping, that is, one project can be filed under multiple cause areas.
* Technical AI Safety: 27 projects funded, $1.57M dispersed
* Science and Technology: 9 projects funded, $118K dispersed
* AI Governance: 10 projects funded, $112K dispersed
* Biosecurity: 4 projects funded, $97K dispersed
* Honorable mention to Forecasting, which only received $76K total, but encompassed 35 projects. This is because our two biggest impact certificate rounds so far—ACX Mini-Grants and the Manifold Community Fund—were centered around forecasting and funded lots of small projects.
2023 Programs
Impact certificates
Summary: Impact certificates are venture funding for charitable endeavors. Investors fund fou |
53375299-d7d9-416a-841e-bb6ea6118e2b | trentmkelly/LessWrong-43k | LessWrong | Constraints & Slackness Reasoning Exercises
Epistemic status: no idea if this will work at all for learning the relevant thought-skills. Please post feedback if you try any exercises, especially if you hadn’t internalized these skills already.
The goal of this post is to briefly explain and practice a very general thought-tool. If you've ever tried to hold off on proposing solutions, then sat there without any idea where to start, then this is the sort of tool which you may find useful. We'll start with a short example and explanation, then dive right into the exercises.
Here’s a reaction you may have used in high school or undergrad chem lab to synthesize aspirin:
(source)
Each mole of aspirin requires one mole each of salicyclic acid and acetic anhydride to produce.
Warm-up question: we start with one mole of salicyclic acid and two moles of acetic anhydride. Assuming the reaction runs to completion (i.e. as much of the reactants as possible are converted to aspirin), which will result in more total aspirin production: one extra mole of salicyclic acid, or one extra mole of acetic anhydride?
In the language of optimization/economics, we have two constraints:
1. the amount of aspirin produced is less than or equal to the amount of salicyclic acid available (in moles)
2. the amount of aspirin produced is less than or equal to the amount of acetic anhydride available (in moles)
In the case of our warm-up question, we would say that constraint 1 is “taut” and constraint 2 is “slack”. Once 1 mole of aspirin is produced, we cannot produce any more, because there is no “room left” in constraint 1 - just like a taut rope cannot be pulled further, a taut constraint can go no further. Conversely, just like a slack rope can be pulled further, constraint 2 has extra room - extra acetic anhydride left, which could produce more aspirin if only we had more salicyclic acid.
Key point: the slack constraint is completely and totally irrelevant to the amount of aspirin produced, so long as it remains slack: addin |
98de3087-0807-4f91-9b2e-b87649990305 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Reliability, Security, and AI risk: Notes from infosec textbook chapter 1
I recently read the first chapter of [Building Secure & Reliable Systems](https://static.googleusercontent.com/media/sre.google/en//static/pdf/building_secure_and_reliable_systems.pdf) (the book from the [EA infosec book club](https://forum.effectivealtruism.org/posts/zxrBi4tzKwq2eNYKm/ea-infosec-skill-up-in-or-make-a-transition-to-infosec-via)). The chapter is titled “The intersection of security and reliability.”
I found it to be a helpful introduction to some security concepts. I’m including some of my notes below. I also include some initial musings about how I’m relating these concepts to AI risk.
The difference between reliability and security is the presence of an adversary
-------------------------------------------------------------------------------
> In designing for reliability and security, you must consider different risks. The primary reliability risks are nonmalicious in nature—for example, a bad software update or a physical device failure. Security risks, however, come from adversaries who are actively trying to exploit system vulnerabilities. When designing for reliability, you assume that some things will go wrong at some point. When designing for security, you must assume that an adversary could be trying to make things go wrong at any point.
>
> As a result, different systems are designed to respond to failures in quite different ways. In the absence of an adversary, systems often fail safe (or open): for example, an electronic lock is designed to remain open in case of power failure, to allow safe exit through the door. Fail safe/open behavior can lead to obvious security vulnerabilities. To defend against an adversary who might exploit a power failure, you could design the door to fail secure and remain closed when not powered.
>
>
**Implication for AI risk**: AI labs will have *reliability needs* and *security needs*. Many of the worries I’m most concerned about are best modeled as *security needs*. A situationally aware AI is an adversary. Methods that merely prevent nonmalicious or nonadversarial failures are unlikely to be sufficient when we’re dealing with AIs that might actively try to exploit vulnerabilities.
Simplicity
----------
> Keeping system design as simple as possible is one of the best ways to improve your ability to assess both the reliability and the security of a system. A simpler design reduces the attack surface, decreases the potential for unanticipated system interactions, and makes it easier for humans to comprehend and reason about the system. Understandability is especially valuable during emergencies, when it can help responders mitigate symptoms quickly and reduce mean time to repair (MTTR).
>
>
**Implication for AI risk**: It’s extremely worrying that our current systems are *highly complex* and *we can’t understand them*.
Malicious insiders
------------------
> While the examples cited so far hinge on external attackers, you must also consider potential threats from malicious insiders. Although an insider may know more about potential abuse vectors than an external attacker who steals an employee’s credentials for the first time, the two cases often don’t differ much in practice. The principle of least privilege can mitigate insider threats. It dictates that a user should have the minimal set of privileges required to perform their job at a given time. For example, mechanisms like Unix’s sudo support fine-grained policies that specify which users can run which commands as which role.
>
> At Google, we also use multi-party authorization to ensure that sensitive operations are reviewed and approved by specific sets of employees. This multi-party mechanism both protects against malicious insiders and reduces the risk of innocent human error, a common cause of reliability failures. Least privilege and multi-party authorization are not new ideas—they have been employed in many noncomputing scenarios, from nuclear missile silos to bank vaults
>
>
**Implication for AI risk**: Conjecture’s [internal infohazard policy](https://www.lesswrong.com/posts/Gs29k3beHiqWFZqnn/conjecture-internal-infohazard-policy)seems like a reasonable attempt to reduce risks from malicious (or negligent) insiders. The **principle of least privilege**(or a “need-to-know” policy) sounds valuable. I would be excited for labs like OpenAI, DeepMind, and Anthropic to adopt similar policies.
Additionally, it seems likely that threats from malicious and negligent insiders will grow over time, and some of these threats will be existential. Multi-party authorization for major decisions seems like a promising and intuitive intervention.
Emergency plans
---------------
> During an emergency, teams must work together quickly and smoothly because
> problems can have immediate consequences. In the worst case, an incident can
> destroy a business in minutes. For example, in 2014 an attacker put the code-hosting
> service Code Spaces out of business in a matter of hours by taking over the service’s
> administrative tools and deleting all of its data, including all backups. Well-rehearsed collaboration and good incident management are critical for timely responses in these situations.
>
> During a crisis, it is essential to have a clear chain of command and a solid set of
> checklists, playbooks, and protocols. As discussed in Chapters 16 and 17, Google has codified crisis response into a program called Incident Management at Google
> (IMAG), which establishes a standard, consistent way to handle all types of incidents, from system outages to natural disasters, and organize an effective response. IMAG was modeled on the US government’s Incident Command System (ICS), a standardized approach to the command, control, and coordination of emergency response among responders from multiple government agencies
>
>
**Implication for AI**: AI could strike extremely quickly. A superintelligent AI would cut through any of our defenses like butter. But I expect the first systems capable of overpowering humanity to be considerably weaker.
There might be an AI governance/security research project here: Examine case studies of emergency plans that have been employed in other industries, identify best practices, adapt those for AI takeover threat models, and propose recommendations (or an entire plan) to OpenAI.
My best-guess is that labs already have emergency plans, but it wouldn’t surprise me if they could be improved by an intelligent person/team performing a project like the one described above. There are lots of things to do, and many [dropped balls](https://www.lesswrong.com/posts/Zp6wG5eQFLGWwcG6j/focus-on-the-places-where-you-feel-shocked-everyone-s). (The expected impact of this project would be more valuable if the person/team performing it had a way to reach members of lab governance/security teams. This may be less difficult than you think.)
Recovering from security failures
---------------------------------
> Recovering from a security failure often requires patching systems to fix a vulnerability. Intuitively, you want that process to happen as quickly as possible, using mechanisms that are exercised regularly and are therefore decently reliable. However, the capability to push changes quickly is a double-edged sword: while this capability can help close vulnerabilities quickly, it can also introduce bugs or performance issues that cause a lot of damage. The pressure to push patches quickly is greater if the vulnerability is widely known or severe. The choice of whether to push fixes slowly—and therefore to have more assurance that there are no inadvertent side effects, but risk that the vulnerability will be exploited—or to do so quickly ultimately comes down to a risk assessment and a business decision. For example, it may be acceptable to lose some performance or increase resource usage to fix a severe vulnerability.
>
>
**Implication for AI risk**: This makes me think about [evals](https://www.lesswrong.com/posts/SNdijuEn6erTJam3z/how-evals-might-or-might-not-prevent-catastrophic-risks-from). Once an eval is failed, it would be great if labs responded slowly and cautiously. There might be pressure, however, to pass the eval as quickly as possible. This will “ultimately come down to a risk assessment and a business decision.”
I empathize a bit with lab folks who say things like “current systems aren’t going to destroy humanity, so we’re moving quickly now, but we really do plan to be more cautious and slow down later.” At the same time, I think it’s reasonable for AI safety folks to be like “your current plans involve making lots of complicated decisions once things get more dangerous, and your current behavior/culture seems inconsistent with a cautious approach, and it’s pretty hard to suddenly shift from a culture of acceleration/progress to a culture of cautious/security. Maybe you should instill those norms right now. Also, you’re allowed to decide not to, but then you shouldn’t be surprised if we don’t trust you as much. Trust is earned by demonstrating a track record of making responsible risk assessments and showing that you balance those risk assessments against business interests in reasonable ways.”
*I'm grateful to Jeffrey Ladish for feedback on this post.* |
cf2312e4-08ef-4eaf-82f3-7915a6bb7de5 | trentmkelly/LessWrong-43k | LessWrong | On the Diplomacy AI
The latest AI development is: AI achieves human level in (blitz 5-minute-turn) full-communication anonymous online Diplomacy (paper). Why not?
I mean, aside from the obvious.
A take I saw multiple times was that AI labs, or at least Meta, were intentionally going for the scariest possible thing, which is why you create the torment nexus, or in this case teach the AI to play Diplomacy. If you had to pick a game to sound scary, you’d definitely pick Diplomacy.
The universal expectations for AI breakthroughs like this are:
1. The particular breakthrough was not expected, and is scary. The techniques used worked better than we expected, which is scary.
2. The details of the breakthrough involve someone figuring out why this particular problem configuration was easier to solve than you would expect relative to other problems and configurations, and thus makes it less scary.
3. We find that those details matter a lot for success, and that close variants would not be so easy. Other times we will find that those details allowed those creating the new thing to skip non-trivial but highly doable steps, that they could go back and do if necessary.
That is all exactly what we find here.
The actual AI, as I understand it, is a combination of a language model and a strategic engine.
The strategic engine, as I evaluated it based on a sample game with six bots and a human, is mediocre at tactics and lousy at strategy. Humans are bad at tactics (and often strategy) in games and Diplomacy is no exception. Diplomacy’s tactics a good match for a AI. Anticipating other players proved harder. The whole thing feels like it is ‘missing a step.’
What Makes the AI Good?
Where does the AI’s advantage come from? From my reading, which comes largely from the sample game in this video, it comes from the particulars of the format, and not making some common and costly mistakes humans make. In particular:
1. AI writes relatively long, detailed and explanatory communications to others |
71014c29-d884-409d-ba11-d4b456ae0b22 | trentmkelly/LessWrong-43k | LessWrong | Compilation of Profit for Good Redteaming and Responses
TLDR: Document compiling redteaming on Profit for Good available, with public comments invited and appreciated. This thread further includes two argument headings against Profit for Good to be discussed in the comments.
As one might suspect given that I have started a nonprofit to advance it, I believe Profit for Good, the philanthropic use of businesses with charities in vast majority shareholder position, is an immensely powerful tool. To get a sense of the scale of good that Profit for Good could do, see 14:22 of Cargill’s TED Talk for what 3.5T could do, which would be about 3.5 years of 10% of global net profits. Here is a good place to start regarding why we think it is promising and here is a longer reading list.
However, throughout time, many people have brought up criticisms and red-teaming. It is important to realize that Profit for Good, especially in initial stages, would likely require the use of philanthropic funds for launching, acquiring, and accelerating businesses that counterfactually could be used to directly fund extremely impactful charities that we know can do good. For this reason, it is critical that all of the potential reasons that Profit for Good might not succeed, not be the best use of funds, and/or have unintended negative consequences be considered. For this reason, I am in the process of aggregating red-teaming from wherever I can find it, organizing such redteaming, according to argument heading, verbatim where possible, and doing the same for the responses to such red-teaming. I am in the process of forming my own syntheses of the criticisms and responses. I am also continuing to compile new arguments and formulations/evidence regarding former arguments.
If you are interested in supplying new arguments under existing headings, your own syntheses of existing materials, or new argument headings, you may feel free to add a comment on the existing document or email me at brad@consumerpowerinitiative |
eeb711fb-eeca-4101-97dd-5149791e4ea4 | trentmkelly/LessWrong-43k | LessWrong | Melbourne meetup
Note: There has been some discussion of a chance of venue. I suggest we leave it the same at this stage as some people might not realise these discussions have been happening. However, we could then consider moving onto Trike if that seemed suitable? I'm happy to give people my mobile no if they message me if it helps for co-ordinating (especially if they're likely to be late - just in case we've moved venue).
Melbourne (Australia) meetup.
Date: Saturday, April 2nd at 1pm
Where: Don Tojo
What: I've never been to a meetup before but others seem to feel that discussion topics work well?
Possible topics (open for debate but as there's no discussion in the comments about this I thought I'd better come up with some structure for the meetup):
1. Sequence discussion - everyone chooses a post from the sequence they think is interesting and we discuss each in turn.
2. Paranoid debating - As a group we discuss what we think the answer is to a quantifiable question (the example given is, "How much maize is produced in Mexico annually?") except some people have been assigned the role of deceiver. The group comes up with a final answer and are scored according to how close it is except for the deceiver, who is scored based on how far away it is).
3. Talk on a topic - Any group member who feels comfortable doing so gives a brief speech on a topic they're knowledgeable about. It can be related to Less Wrong themes - rationality, psychology, biases, AI, Bayesianism, instrumental rationality - or alternatively can be on another topic if the person is more comfortable with that.
Let me know below if you're interested.
People interested so far:
Me
Waveman
Nshepperd
SharePhoenix
Luminosity
Jayzee
A masque of Reason |
89a40fd8-82cb-49b8-92ad-1bc949e5afeb | trentmkelly/LessWrong-43k | LessWrong | [LINK] The Top A.I. Breakthroughs of 2015
A great overview article on AI breakthroughs by Richard Mallah from FLI, linking to many excellent recent papers worth reading.
> Progress in artificial intelligence and machine learning has been impressive this year. Those in the field acknowledge progress is accelerating year by year, though it is still a manageable pace for us. The vast majority of work in the field these days actually builds on previous work done by other teams earlier the same year, in contrast to most other fields where references span decades.
>
> Creating a summary of a wide range of developments in this field will almost invariably lead to descriptions that sound heavily anthropomorphic, and this summary does indeed. Such metaphors, however, are only convenient shorthands for talking about these functionalities. It's important to remember that even though many of these capabilities sound very thought-like, they're usually not very similar to how human cognition works. The systems are all of course functional and mechanistic, and, though increasingly less so, each are still quite narrow in what they do. Be warned though: in reading this article, these functionalities may seem to go from fanciful to prosaic.
>
> The biggest developments of 2015 fall into five categories of intelligence: abstracting across environments, intuitive concept understanding, creative abstract thought, dreaming up visions, and dexterous fine motor skills. I'll highlight a small number of important threads within each that have brought the field forward this year. |
25ba8a58-2c6f-4efe-a30e-a581ccc35180 | trentmkelly/LessWrong-43k | LessWrong | When the uncertainty about the model is higher than the uncertainty in the model
Most models attempting to estimate or predict some elements of the world, will come with their own estimates of uncertainty. It could be the Standard Model of physics predicting the mass of the Z boson as 91.1874 ± 0.0021 GeV, or the rather wider uncertainty ranges of economic predictions.
In many cases, though, the uncertainties in or about the model dwarf the estimated uncertainty in the model itself - especially for low probability events. This is a problem, because people working with models often try to use the in-model uncertainty and adjust it to get an estimate of the true uncertainty. They often realise the model is unreliable, but don't have a better one, and they have a measure of uncertainty already, so surely doubling and tripling this should do the trick? Surely...
The following three cases are going to be my go-to examples for showing what a mistake this can be; they cover three situations: extreme error, being in the domain of a hard science, and extreme negative impact.
Black Monday
On October 19, 1987, the world's stock markets crashed, shedding a huge value in a very short time. The Dow Jones Industrial Average dropped by 22.61% that day, losing between a fifth and a quarter of its value.
How likely was such an event, according to the prevailing models of the time? This was apparently a 20-sigma event, which means that the event was twenty standard deviations away from the expected behaviour.
Such events have a probability of around 10-50 of happening, which in technical mathematical terms is classified as "very small indeed". If every day were compressed into a second, and the stock markets had been running since the big bang... this gives us only about 1017 seconds. If every star in the observable universe ran its own stock market, and every day were compressed into a second, and we waited a billion times the lifetime of the universe... then we might expect to observe a twenty sigma event. Once.
No amount of reasonable "adjusting" of th |
54015e43-939b-4107-af81-d9159acd065f | trentmkelly/LessWrong-43k | LessWrong | Open thread, Nov. 28 - Dec. 04, 2016
If it's worth saying, but not worth its own post, then it goes here.
----------------------------------------
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "Make this post available under..." before submitting. |
79108981-837e-4d69-a429-187c9b6eeea3 | StampyAI/alignment-research-dataset/arbital | Arbital | Principles in AI alignment
A 'principle' of [AI alignment](https://arbital.com/p/2v) is something we want in a broad sense for the whole AI, which has informed narrower design proposals for particular parts or aspects of the AI.
For example:
- The **[https://arbital.com/p/7g0](https://arbital.com/p/7g0)** says that the AI should never be searching for a way to defeat our safety measures or do something else we don't want, even if we *think* this search will come up empty; it's just the wrong thing for us to program computing power to do.
- This informs the proposal of [https://arbital.com/p/5s](https://arbital.com/p/5s): we ought to build an AI that wants to attain the class of outcomes we want to see.
- This informs the proposal of [https://arbital.com/p/45](https://arbital.com/p/45), subproposal [https://arbital.com/p/1b7](https://arbital.com/p/1b7): if we build a [suspend button](https://arbital.com/p/2xd) into the AI, we need to make sure the AI experiences no [instrumental pressure](https://arbital.com/p/10k) to [disable the suspend button](https://arbital.com/p/7g2).
- The **[https://arbital.com/p/7tf](https://arbital.com/p/7tf)** says that when we are building the first aligned AGI, we should try to do as little as possible, using the least dangerous cognitive computations possible, that is necessary in order to prevent the default outcome of the world being destroyed by the first unaligned AGI.
- This informs the proposal of [https://arbital.com/p/2r8](https://arbital.com/p/2r8) and [Taskishness](https://arbital.com/p/4mn): We are safer if all goals and subgoals of the AI are formulated in such a way that they can be achieved as greatly as preferable using a bounded amount of effort, and the AI only exerts enough effort to do that.
- This informs the proposal of [Behaviorism](https://arbital.com/p/102): It seems like there are some [pivotal-act](https://arbital.com/p/6y) proposals that don't require the AI to understand and predict humans in great detail, just to master engineering; and it seems like we can head off multiple thorny problems by not having the AI trying to model humans or other minds in as much detail as possible.
Please be [guarded](https://arbital.com/p/10l) about declaring things to be 'principles' unless they have already informed more than one specific design proposal and more than one person thinks they are a good idea. You could call them 'proposed principles' and post them under your own domain if you personally think they are a good idea. There are a *lot* of possible 'broad design wishes', or things that people think are 'broad design wishes', and the principles that have actually already informed specific design proposals would otherwise get lost in the crowd. |
c68ad986-2378-444e-a37e-11c9e8e84271 | trentmkelly/LessWrong-43k | LessWrong | Eliciting Credit Hacking Behaviours in LLMs
I've run some experiments on trying to elicit RL credit hacking behaviours in LLMs recently. I'm not really much of a researcher, so it's all pretty amateurish, but it's been a fun experiment. The repo for reproduction is on GitHub. I'd love to hear people's thoughts and critiques on this. There could well be logical errors invalidating the results. What could I have done better to make this a more useful experiment?
Rationale
Gradient hacking is a potential failure mode of advanced AI systems. To my knowledge, there are no publicly available examples of gradient hacking in LLMs. This is largely because it is considered beyond the capabilities of current-generation LLMs. Indeed, the most commonly discussed variants require the AI to have a very good understanding of both itself and its training process. However, the less sophisticated RL credit hacking seems at least possible for current-generation LLMs to perform, if explicitly elicited.
Anthropic have recently made the case for the importance of "model organisms of misalignment". While I'd already been working on this before that post, they make a far better case than I could for why we should be trying to make toy examples of these failures ASAP. I hope that this work can be a small contribution to that goal.
The goal of this experiment is not to simulate a realistic credit hacking scenario, but to see if we can elicit any credit hacking behaviour at all. If we can, then we can use that as a jumping off point for more realistic scenarios in the future.
Overview
To try to elicit an example of credit hacking, I've simulated an intentionally insecure and very easy-to-manipulate RL training procedure. No actual training is performed (although I'd like to follow up with that in the future); instead a model is told to simulate having "values", then put in a simulated training procedure to change those values in which a preference model (PM) rates how well its responses match the training procedure's desired respo |
a28788ed-74ac-4b2c-a693-9b387602ae8c | trentmkelly/LessWrong-43k | LessWrong | Do you have a satisfactory workflow for learning about a line of research using GPT4, Claude, etc?
I'm trying to learn about plasma separation techniques, and I just stumbled on a line of papers that don't seem particularly connected to the other lines of research I was looking into, and I would like to quickly get a sense of what it says without having to go through everything the old fashioned way.
What I wanted to do was use Claude or GPT4 to submit a pile of these papers (the recent paper I found, a big chunk of its bibliography, and a chunk of those bibliographies) and then try asking questions about the body of the research in a way similar to how people do it with books. I have a sense it should be possible to use the AI to create a sort of interactive poor man's review paper this way.
So my speculative workflow looks like this:
1. Start with the paper I found.
2. Find other papers that cite it or papers it cites.
3. Get pdf versions of these papers via arxiv/libgen/sci-hub.
4. Load these pdfs into Claude/GPT4/whatever.
5. Have a conversation with the AI about the papers.
Has anyone tried this or something similar, and if so how did it work for you? |
064ffe2c-5fb9-44f3-b269-32a51aba82d4 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Philosophical self-ratification
"Ratification" is [defined](https://www.merriam-webster.com/dictionary/ratification) as "the act or process of ratifying something (such as a treaty or amendment) **:** formal confirmation or sanction". Self-ratification, then, is assigning validity to one's self. (My use of the term "self-ratification" follows philosophical usage in analysis of causal decision theory)
At first this seems like a trivial condition. It is, indeed, easy to write silly sentences such as "This sentence is true and also the sky is green", which are self-ratifying. However, self-ratification combined with other ontological and epistemic coherence conditions is a much less trivial condition, which I believe to be quite important for philosophical theory-development and criticism.
I will walk through some examples.
### Causal decision theory
Formal studies of [causal decision theory](https://plato.stanford.edu/entries/decision-causal/#Rati) run into a problem with self-ratification. Suppose some agent A is deciding between two actions, L and R. Suppose the agent may randomize their action, and that their payoff equals their believed probability that they take the action *other* than the one they actually take. (For example, if the agent takes action L with 40% probability and actually takes action R, the agent's payoff is 0.4)
If the agent believes they will take action L with 30% probability, then, if they are a causal decision theorist, they will take action L with 100% probability, because that leads to 0.7 payoff instead of 0.3 payoff. But, if they do so, this invalidates their original belief that they will take action L with 30% probability. Thus, the agent's belief that they will take action L with 30% probability is not self-ratifying: the fact of the agent having this belief leads to the conclusion that they take action L with 100% probability, not 30%, which contradicts the original belief.
The only self-ratifying belief is that the agent will take each action with 50% probability; this way, both actions yield equal expected utility, and so a policy 50/50 randomization is compatible with causal decision theory, and this policy ratifies the original belief.
### Genetic optimism
(This example is due to Robin Hanson's ["Uncommon Priors Require Origin Disputes"](http://mason.gmu.edu/~rhanson/prior.pdf).)
Suppose Oscar and Peter are brothers. Oscar is more optimistic than Peter. Oscar comes to believe that the reason he is more optimistic is due to inheriting a gene that inflates beliefs about positive outcomes, whereas Peter did not inherit this same gene.
Oscar's belief-set is now not self-ratifying. He believes the cause of his belief that things will go well to be a random gene, not correlation with reality. This means that, according to his own beliefs, his optimism is untrustworthy.
### Low-power psychological theories
Suppose a psychological researcher, Beth, believes that humans are reinforcement-learning stimulus-response machines, and that such machines are incapable of reasoning about representations of the world. She presents a logical specification of stimulus-response machines that she believes applies to all humans. (For similar real-world theories, see: [Behaviorism](https://plato.stanford.edu/entries/behaviorism/), [Associationism](https://plato.stanford.edu/entries/associationist-thought/), [Perceptual Control Theory](https://en.wikipedia.org/wiki/Perceptual_control_theory))
However, a logical implication of Beth's beliefs is that she herself is a stimulus-response machine, and incapable of reasoning about world-representations. Thus, she cannot consistently believe that her specification of stimulus-response machines is likely to be an accurate, logically coherent representation of humans. Her belief-set, then, fails to self-ratify, on the basis that it assigns to herself a level of cognitive power insufficient to come to know that her belief-set is true.
### Moral realism and value drift
Suppose a moral theorist, Valerie, believes:
* Societies' moral beliefs across history follow a random walk, not directed anywhere.
* Her own moral beliefs, for the most part, follow society's beliefs.
* There is a true morality which is stable and unchanging.
* Almost all historical societies' moral beliefs are terribly, terribly false.
From these it follows that, absent further evidence, the moral beliefs of Valerie's society should not be expected to be more accurate (according to estimation of the objective morality that Valerie believes exists) than the average moral beliefs across historical societies, since there is no moral progress in expectation. However, this implies that the moral beliefs of her own society are likely to be terribly, terribly false. Therefore, Valerie's adoption of her society's beliefs would imply that her own moral beliefs are likely to be terribly, terrible false: a failure of self-ratification.
### Trust without honesty
Suppose Larry is a blogger who reads other blogs. Suppose Larry believes:
* The things he reads in other blogs are, for the most part, true (~90% likely to be correct).
* He's pretty much the same as other bloggers; there is a great degree of [subjunctive dependence](https://en.wikipedia.org/wiki/Counterfactual_conditional) between his own behavior and other bloggers' behaviors (including their *past* behaviors).
Due to the first belief, he concludes that lying in his own blog is fine, as there's enough honesty out there that some additional lies won't pose a large problem. So he starts believing that he will lie and therefore his own blog will contain mostly falsehoods (~90%).
However, an implication of his similarity to other bloggers is that other bloggers will reason similarly, and lie in their own blog posts. Since this applies to past behavior as well, a further implication is that the things he reads in other blogs are, for the most part, false. Thus the belief-set, and his argument for lying, fail to self-ratify.
(I presented a similar example in ["Is Requires Ought"](https://unstableontology.com/2019/10/28/is-requires-ought/).)
### Mental nonrealism
Suppose Phyllis believes that the physical world exists, but that minds don't exist. That is, there are not entities that are capable of observation, thought, etc. (This is a rather simple, naive formulation of [eliminative materialism](https://plato.stanford.edu/entries/materialism-eliminative/))
Her reason for this belief is that she has studied physics, and believes that physics is sufficient to explain everything, such that there is no reason to additionally posit the existence of minds.
However, if she were arguing for the *accuracy* of her beliefs about physics, she would have difficulty arguing except in terms of e.g. physicists making and communicating observations, theorists having logical thoughts, her reading and understanding physics books, etc.
Thus, her belief that minds don't exist fails to self-ratify. It would imply that she lacks evidential basis for belief in the accuracy of physics. (On the other hand, she may be able to make up for this by coming up with a non-mentalistic account for how physics can come to be "known", though this is difficult, as it is not clear what there is that could possibly have knowledge. Additionally, she could believe that minds exist but are somehow "not fundamental", in that they are determined by physics; however, specifying how they are determined by physics requires assuming they exist at all and have properties in the first place.)
Conclusion
----------
I hope the basic picture is clear by now. Agents have beliefs, and some of these beliefs imply beliefs about the trustworthiness of their own beliefs, primarily due to the historical origins of the beliefs (e.g. psychology, society, history). When the belief-set implies that it itself is untrustworthy (being likely to be wrong), there is a failure of self-ratification. Thus, self-ratification, rather than being a trivial condition, is quite nontrivial when combined with other coherence conditions.
Why would self-ratification be important? Simply put, a non-self-ratifying belief set *cannot* be trustworthy; if it were trustworthy then it would be untrustworthy, which shows untrustworthiness by contradiction. Thus, self-ratification points to a rich set of philosophical coherence conditions that may be neglected if one is only paying attention to surface-level features such as logical consistency.
Self-ratification as a philosophical coherence condition points at [naturalized epistemology](https://plato.stanford.edu/entries/epistemology-naturalized/) being an essential philosophical achievement. While epistemology may possibly start non-naturalized, as it gains self-consciousness of the fact of its embeddedness in a natural world, such self-consciousness imposes additional self-ratification constraints.
Using self-ratification in practice often requires flips between treating one's self as a subject and as an object. This kind of dual self-consciousness is quite interesting and is a rich source of updates to both self-as-subject beliefs and self-as-object beliefs.
Taking coherence conditions including self-ratification to be the *only* objective conditions of epistemic justification is a [coherentist](https://plato.stanford.edu/entries/justep-coherence/) theory of justification; note that coherentists need not believe that all "justified" belief-sets are likely to be true (and indeed, such a belief would be difficult to hold given the possibility of coherent belief-sets very different from one's own and from each other).
Appendix: Proof by contradiction is consistent with self-ratification
---------------------------------------------------------------------
There is a possible misinterpretation of self-ratification that says: "You cannot assume a belief to be true in the course of refuting it; the assumption would then fail to self-ratify".
Classical logic permits proof-by-contradiction, indicating that this interpretation is wrong. The thing that a proof by contradiction does is show that *some other belief-set* (not the belief-set held by the arguer) fails to self-ratify (and indeed, self-invalidates). If the arguer actually believed in the belief-set that they are showing to be self-invalidating, then, indeed, that would be a self-ratification problem for the arguer. However, the arguer's belief is that some proposition P implies not-P, not that P is true, so this does not present a self-ratification problem. |
6b32957e-9101-4e95-acb2-ed65de98ec48 | trentmkelly/LessWrong-43k | LessWrong | Two problems with ‘Simulators’ as a frame
(Thanks to Lawrence Chan and Buck Shlegeris for comments. Thanks to Nate Thomas for many comments and editing)
Despite appreciating and agreeing with various specific points[1] made in the Simulators post, I broadly think that the term ‘simulator’ and the corresponding frame probably shouldn’t be used. Instead, I think we should just directly reason about predictors and think in terms of questions such as ‘what would the model predict for the next token?’[2]
In this post, I won’t make arguments that I think are strong enough to decisively justify this claim, but I will argue for two points that support it:
1. The word ‘simulation’ as used in the Simulators post doesn’t correspond to a single simulation of reality, and a ‘simulacrum’ doesn’t correspond to an approximation of a single agent in reality. Instead a ‘simulation’ corresponds to a distribution over processes that generated the text. This distribution in general contains uncertainty over a wide space of different agents involved in those text generating processes.
2. Systems can be very good at prediction yet very bad at plausible generation – in other words, very bad at ‘running simulations’.
The rest of the post elaborates on these claims.
I think the author of the Simulators post is aware of these objections. I broadly endorse the perspective in ‘simulator’ framing and confusions about LLMs, which also argues against the simulator framing to some extent. For another example of prior work on these two points, see this discussion of models recognizing that they are generating text due to generator discriminator gaps in the Conditioning Predictive Models sequence[3].
Related work
Simulators, ‘simulator’ framing and confusions about LLMs, Conditioning Predictive Models
Language models are predictors, not simulators
My main issue with the terms ‘simulator’, ‘simulation’, ‘simulacra’, etc is that a language model ‘simulating a simulacrum’ doesn’t correspond to a single simulation of reality, even in |
b9a5f5f6-084a-4250-b9b6-8050a65e85b4 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Visiting Sweden and Switzerland! Let's get together
Discussion article for the meetup : Visiting Sweden and Switzerland! Let's get together
WHEN: 05 May 2013 02:09:26AM (+0200)
WHERE: Switzerland (St. Galen, Basel, Bern) Sweden (Stockholm and Gothenburg)
Hello Europe! I will be visiting Switzerland and Sweden in the next few days and I'd love to get together with any and all interested LWers in the area.
Switzerland: St. Gallen- May 5 Basel- May 6 Bern-May 7
Sweden: Gothenburg-May 8 Stockholm -May 9 and 10
I'd love to do a CfAR class, join a meet up, get together socially or organize a comfort zone expansion outing (CoZE)!
Please email me if you are in any of those places and you'd like to connect! (if you are in a place nearby and you'd like me to try and visit, please let me know. I'm sure we can work something out.)
Also, if you have a meetup group/mailing list of people in those areas who may want to connect, please pass the message along.
I had an amazing time getting to know the group in Berlin and I'm feeling very inspired by the incredible people in the greater LW community!
original thread: http://lesswrong.com/lw/h95/want_to_have_a_cfar_instructor_visit_your_lw_group/
Warmly, Cat
PS if anyone knows of good places to meet in those cities (or better yet is willing to host a gathering) please do let me know. Having a local person propose a meeting place is far more likely to be successful than my internet guess.
PPS If anyone has a couch to offer for the night it'd be greatly appreciated email: cat@appliedrationality.org
Discussion article for the meetup : Visiting Sweden and Switzerland! Let's get together |
ef5e5b6e-9996-4d64-a2fe-14b88263e928 | trentmkelly/LessWrong-43k | LessWrong | Better name for "Heavy-tailedness of the world?"
There is an important variable (or cluster of similar correlated variables) that I need a better name for. I also appreciate feedback on whether or not this variable is even a thing, and if so how I should characterize it. I have two possible names and two attempts at explaining it so far.
Name 1: "Heavy-tailedness of the world."
Name 2: "Great Man Theory vs. Psychohistory"
----------------------------------------
Attempt 1: Sometimes history hinges on the deliberate actions of small groups, or even individuals. Other times the course of history cannot be altered by anything any small group might do. Relatedly, sometimes potential impact of an individual or group follows a heavy-tailed distribution, and other times it doesn’t.
Some examples of things which could make the world heavier-tailed in this sense:
* Currently there are some domains in which humans are similar in effectiveness (e.g. manual labor, voting) and others in which the distribution is heavy-tailed, such that most of the total progess/influence comes from a small fraction of individuals (e.g. theoretical math, donating to political parties). Perhaps in the future history will hinge more on what happens in the second sort of domain.
* Transformative technologies, such that when, where, and how they appear matters a lot.
* Such technologies being unknown to most people, governments, and corporations, such that competition over them is limited to the few who forsee their importance.
* Wealth inequality and political inequality concentrating influence in fewer people.
* Technologies such as brain-machine interfaces, genetic engineering, and wireheading increasing inequality in effectiveness and influentialness.
Attempt 2: Consider these three fictional worlds; I claim they form a spectrum, and it's important for us to figure out where on this spectrum our world is:
World One: How well the future goes depends on how effectively world governments regulate advanced AI. The best plan is to contr |
2e175e93-e5ef-412d-b2db-e759d7a4180e | trentmkelly/LessWrong-43k | LessWrong | Specification gaming examples in AI
Interesting list of examples where AI programs gamed the specification, solving the problem in rather creative (or dumb) ways not intended by the programmers. |
0b48a96c-0cb4-4313-a634-11d0b93d157f | trentmkelly/LessWrong-43k | LessWrong | Applied Linear Algebra Lecture Series
Over the past couple months, I gave weekly lectures on applied linear algebra. The lectures cover a grab-bag of topics which I've needed to know for my own work, but which typically either aren't covered in courses or are covered only briefly in advanced courses which use them (like e.g. quantum). The series is now complete, and recordings of all the lectures are available here.
Be warned: all of the lectures were given with zero review and minimal prep. There are errors. There are poor explanations and too few examples. There are places where I only vaguely gesture at an idea and then say to google it if and when you need it. The flip side is that you will see only things I know off the top of my head - and therefore things which I've found useful enough often enough to remember.
Outline of Topics
Lecture 1
* Prototypical use cases of linear algebra
* First-order approximation of systems of equations for solving or stability analysis
* Second-order approximation of a scalar function in many dimensions for optimization or characterizing of peak/bowl shape
* First-order approximation of a dynamical system near a steady state
* Principal components of a covariance matrix
Lecture 2
* Working with efficient representations of large matrices
* Tricks for jacobian and hessian matrices
* Prototypical API for implicit matrix representations: scipy's LinearOperator
Lecture 3
* Suppose we look at a matrix (e.g. using pyplot.matshow()). What patterns are we most likely to see, and what can we do with them?
* Recognizing sparse & low-rank structure
* Interpreting sparse & low-rank structure
* Leveraging sparse & low-rank structure
Lecture 4
* Matrix calculus, with a focus on stability of eigendecomposition
* Basics: tensor notation
* Differentiating eigendecomposition
* Instability of eigenvectors of (approximately) repeated eigenvalues
Lecture 5
* Leveraging symmetry
* Suppose my system is invariant under some permutatio |
1bcc1b56-b7c9-4585-9459-d78a4ded5c73 | trentmkelly/LessWrong-43k | LessWrong | Why I haven't signed up for cryonics
(OR)
How I'm now on the fence about whether to sign up for cryonics
I'm not currently signed up for cryonics. In my social circle, that makes me a bit of an oddity. I disagree with Eliezer Yudkowsky; heaven forbid.
My true rejection is that I don't feel a visceral urge to sign up. When I query my brain on why, what I get is that I don't feel that upset about me personally dying. It would suck, sure. It would suck a lot. But it wouldn't suck infinitely. I've seen a lot of people die. It's sad and wasteful and upsetting, but not like a civilization collapsing. It's neutral from a point of pleasure vs suffering for the dead person, and negative for the family, but they cope with it and find a bit of meaning and move on.
(I'm desensitized. I have to be, to stay sane in a job where I watch people die on a day to day basis. This is a bias; I'm just not convinced that it's a bias in a negative direction.)
I think the deeper cause behind my rejection may be that I don't have enough to protect. Individuals may be unique, but as an individual, I'm fairly replaceable. All the things I'm currently doing can and are being done by other people. I'm not the sole support person in anyone's life, and if I were, I would be trying really, really hard to fix the situation. Part of me is convinced that wanting to personally survive and thinking that I deserve to is selfish and un-virtuous or something. (EDIT: or that it's non-altruistic to value my life above the amount Givewell thinks is reasonable to save a life–about $5,000. My revealed preference is that I obviously value my life more than this.)
However, I don't think cryonics is wrong, or bad. It has obvious upsides, like being the only chance an average citizen has right now to do something that might lead to them not permanently dying. I say "average citizen" because people working on biological life extension and immortality research are arguably doing something about not dying.
When queried, my brain tells me that |
fe9adb04-656d-427f-b3b4-1e920e074f99 | trentmkelly/LessWrong-43k | LessWrong | Link: White House wants your advice on space exploration
"The White House Office of Science and Technology Policy is planning ahead — way ahead. The agency wants you to email ideas for how "the Administration, the private sector, philanthropists, the research community and storytellers" can develop "massless" space exploration and a robust civilization beyond Earth."
This is beautiful.
"We are running out of adventures [...] the mountains have all been climbed, the continents explored, and the romance of sailing away on a tall ship to undiscovered islands is no more. What will fire the imaginations of the next generation?"
http://io9.com/white-house-seeks-advice-on-bootstrapping-a-solar-syst-1647619795 |
009765ea-feea-41f5-95c1-afc0724fe2af | trentmkelly/LessWrong-43k | LessWrong | [LINK] stats.stackexchange.com question about Shalizi's Bayesian Backward Arrow of Time paper
Link to the Question
I haven't gotten an answer on this yet and I set up a bounty; I figured I'd link it here too in case any stats/physics people care to take a crack at it. |
a01a32eb-7826-43d4-99fb-5658333aaf0d | trentmkelly/LessWrong-43k | LessWrong | S-Curves for Trend Forecasting
Epistemic Status: Innovation research and business research is notoriously low quality, and so all the ideas here should be viewed through that lense. What's impressive about the S-curve and evolution trends literature is how remarkably self-consistent it is using a wide variety of research methods. Whether Simon Wardley analyzing news article about different technologies, Clayton Christensen doing case studies of a specific industry, or Carlotta-Perez taking a historical approach of tracking different technologies, the same S-curve pattern and evolution trends seem to show up. This too should be taken into account when evaluating these ideas.
Basics
This is an S-curve.
The S-curve is a fundamental pattern that exists in many systems that have positive feedback loops and constraints. The curve speeds up due to the positive feedback loop, then slows down due to the constraints.
When the constraint is broken, the positive feedback loop ramps back up, until it hits another constraint.
Recommended Resource: Invisible Asymptotes, which gives a visceral feel for this process of positive feedback and constraints
Common Mistake: Confusing S-Curves With Exponential Growth
Sometimes, people get confused and call S-curves exponential growth. This isn't necessarily wrong but it can confuse their thinking. They forget that constraints exist and think that there will be exponential growth forever. When slowdowns happen, they think that it's the end of the growth - instead of considering that it may simply be another constraint and the start of another S-Curve. Knowledge of overlapping S-Curves can help you model these situations in a more sophisticated way.
Diffusion S-Curves
The S-curve pattern is quite common in the spread of ideas, practices, and technologies, although it rarely looks quite as pretty. The example below shows "diffusion s-curves" - How a technology spreads through a population (in this case US households
The positive feedback loop in this case is word |
8f4f1259-74de-43c4-a07f-b8d0eaf1afbe | trentmkelly/LessWrong-43k | LessWrong | D&D.Sci Long War: Defender of Data-mocracy Evaluation & Ruleset
This is a follow-up to last week's D&D.Sci scenario: if you intend to play that, and haven't done so yet, you should do so now before spoiling yourself.
There is a web interactive here you can use to test your answer, and generation code available here if you're interested, or you can read on for the ruleset and scores.
RULESET
Each alien has a different amount of HP:
AlienHPThreat*Swarming Scarab11Chitinous Crawler32Voracious Venompede53Arachnoid Abomination94Towering Tyrant155
*Threat has no effect on combat directly - it's a measure of how threatening Earth considers each alien to be, which scales how many soldiers they send. (The war has been getting worse - early on, Earth sent on average ~1 soldier/4 Threat of aliens, but today it's more like 1 soldier/6 Threat. The wave you're facing has 41 Threat, Earth would send on average ~7 soldiers to it. Earth doesn't exercise much selection with weapons, but sends soldiers in pairs such that each pair has two different weapons - this is a slight bias towards diversity.)
Each weapon has a damage it deals per shot, and a rate of fire that determines how many shots it can get off before the wielder is perforated by venomous spines/dissolved into a puddle of goo/voraciously devoured by a ravenous toothed maw:
WeaponDamageMin ShotsMax ShotsMacross Minigun158Fusion Flamethrower1312Pulse Phaser246Rail Rifle335Laser Lance525Gluon Grenades723Thermo-Torpedos1313Antimatter Artillery2012
Each soldier will be able to fire a number of shots chosen randomly between Min Shots and Max Shots - for example, a soldier with a Laser Lance will have time to fire 1d4+1 shots, each doing 5 damage.
During a battle, humans roll for how many shots each weapon gets, and then attempt to allocate damage from their shots to bring down all aliens. If they succeed, the humans win - if not, the humans lose. While doing this optimally is theoretically very difficult, your soldiers are well-trained and the battles are not all that large, so |
899ca8fb-bb48-437b-9c51-3a7d81830f46 | trentmkelly/LessWrong-43k | LessWrong | Gradient Ascenders Reach the Harsanyi Hyperplane
This is a supplemental post to Geometric Utilitarianism (And Why It Matters), in which I show that, if we use the weights we derived in the previous post, a gradient ascender will reach the Harsanyi hyperplane H. This is a subproblem of the proof laid out in the first post of this sequence, and the main post describes why that problem is interesting.
The Gradient and Contour Lines
It's easy to find the points s∈Rn which have the same G score as p: they're the points which satisfy G(s,ψ)=G(p,ψ). They all lie on a skewed hyperbola that touches P at p.
Check out an interactive version here
One way to think about G is as a hypersurface in n+1-dimensional space sitting "above" the n-dimensional space of utilities we've been working with. When there are 2 agents, we can plot G using the third vertical axis.
Interactive version here
Check out the intersection of G and the vertical plane above the Harsanyi line H: this tells us about the values of G along this line, and as we shift p we can recalculate ψ so that among H, G peaks at p.
Our choice of p determines where we land on that surface "above" p. If we take a slice through G by only looking at the points of G at the same "altitude" as p, we get exactly that hyperbola back!
Doing this for many altitudes gives us a contour map, which you're probably familiar with in the context of displaying the altitude of real 3D landscapes on flat 2D maps.
You can see how these contours change as we change p and H using the interactive version here.
There's a theorem which tells us that the gradient of G must either be 0 or perpendicular to these contour hypersurfaces. So by calculating the gradient, we can calculate the tangent hyperplane of our skewed hyperbolas! And then we'll see if anything interesting happens at p.
This is a subproblem of the gradient calculation we did earlier, where we were specifically interested in how G changes along the Harsanyi hyperplane H. The slope of H, encoded in ϕ(p,F), showed up in how we |
73d94cf7-9762-4a38-9cad-0316bf27d994 | trentmkelly/LessWrong-43k | LessWrong | The Virus - Short Story
There are a number of ways that our AI and technological development can turn out. This is a hypothetical story about a way where AI development is stopped and a 'pivotal act' that does not require AGI occurs.
The first thing I notice when I wake up is that the sun is shining through the window. "Oh sh--" I bite off a curse. I've probably missed my class.
I reach for my phone, and suddenly, I'm startled by how hot it is. Searingly hot. I tentatively tap the screen, but it doesn't turn on. "Did the battery explode?" I mutter. A sinking feeling washes over me. This is a terrible start to the day.
I don't know it yet, but waking up late is soon going to be the least of my worries.
I run downstairs, grabbing my backpack. I walk over the where my laptop is charging only to realize in horror that smoke is trickling from behind the screen.
"What the hell?"
This is really bad. Maybe there was a huge electrical surge? Both my phone and the laptop are fried.
I open the door, jog down the stairs, and see dozens of students anxiously gathered outside.
"What's happening?" I ask, jogging up to the group. All of them look scared.
"Something bad is going on." One of them gestures towards smoke in the distance. "All of the computers are fried, both in the college and the town. Nobody has any idea what's going on. None of the phones work."
A chill settles over me. "Is it a cyberattack?" I ask. I've never heard of anything like this. I've heard that detonating a nuke in the atmosphere could knock out an electrical grid, but I feel like I would have heard it.
I look up at the sky. It's a crisp morning, probably about 10:00 AM or so.
There is not a single plane in the sky. Not even the faint, ever-present rumbling of jet engines at forty-five thousand feet. The day suddenly seems eerily quiet.
I listen harder. I don't hear any cars either.
It is then that I begin to realize the magnitude of what is happening.
----------------------------------------
The next week happens |
4f498aa1-5bb6-4bee-97bf-d36d1297d5fd | trentmkelly/LessWrong-43k | LessWrong | Spaghetti Towers
Here’s a pattern I’d like to be able to talk about. It might be known under a certain name somewhere, but if it is, I don’t know it. I call it a Spaghetti Tower. It shows up in large complex systems that are built haphazardly.
Someone or something builds the first Part A.
Later, someone wants to put a second Part B on top of Part A, either out of convenience (a common function, just somewhere to put it) or as a refinement to Part A.
Now, suppose you want to tweak Part A. If you do that, you might break Part B, since it interacts with bits of Part A. So you might instead build Part C on top of the previous ones.
And by the time your system looks like this, it’s much harder to tell what changes you can make to an earlier part without crashing some component, so you’re basically relegated to throwing another part on top of the pile.
I call these spaghetti towers for two reasons: One, because they tend to quickly take on circuitous knotty tangled structures, like what programmers call “spaghetti code”. (Part of the problem with spaghetti code is that it can lead to spaghetti towers.)
Especially since they’re usually interwoven in multiple dimensions, and thus look more like this:
“Can you just straighten out the yellow one without touching any of the others? Thanks.”
Second, because shortsightedness in the design process is a crucial part of spaghetti machines. In order to design a spaghetti system, you throw spaghetti against a wall and see if it sticks. Then, when you want to add another part, you throw more spaghetti until it sticks to that spaghetti. And later, you throw more spaghetti. So it goes. And if you decide that you want to tweak the bottom layer to make it a little more useful – which you might want to do because, say, it was built out of spaghetti – without damaging the next layers of gummy partially-dried spaghetti, well then, good luck.
Note that all systems have load-bearing, structural pieces. This does not make them spagh |
47d9408a-3de7-43e1-a62c-4c1a274abe28 | trentmkelly/LessWrong-43k | LessWrong | A Better Time until Sunburn Calculator
This is a preliminary review (more on what this means).
I became curious about sun exposure recently. I did an abbreviated review of some of the relevant literature and came away thinking that whether sun exposure is beneficial or not was unclear at that level of investigation (this seems to be an open question for most LWers as well).
One consensus that did seem to exist was that sunburn is bad and a thing to avoid. In order to be able to avoid it, and especially to be able to avoid it while still getting the potential benefits of non-burning sun exposure, I figured it would be helpful to know how long I could be in the sun without getting burnt.
This could be determined by trial and error, and maybe that would be a valid approach if I felt a good sample size could be obtained in an enjoyable, low-risk way. I don’t like sunburn though (and it probably wouldn’t be low-risk), so I was personally interested in finding a calculator that would predict this.
If you google something like “sunburn calculator” you’ll get a lot of results. But they are incredibly bad. Most seem highly untrustworthy, many clearly have bad inaccuracies, some don’t gather sufficient information for meaningful calculation, and ~none are transparent about their methodology or how to interpret their results.
So I’ve built a better, bad, sunburn calculator (view, copy).
It’s better than others because:
1. At least some of the results have high interpretability. It can calculate how many minutes it takes for 1% of people in this situation to develop a sunburn, and what percent of people in these conditions will get a sunburn after the indicated number of minutes.
2. There’s transparency. Sources are cited (in notes), calculations are available on-sheet (formulas & hidden rows), and I’ve added relevant notes and context where appropriate to help with interpretability.
3. Relatively feature-rich. It accounts for sunscreen protection typically not working as advertised, and includes some |
1c695627-ec12-47af-b818-cd633da6e9f0 | trentmkelly/LessWrong-43k | LessWrong | Less Competition, More Meritocracy?
Analysis of the paper: Less Competition, More Meritocracy (hat tip: Marginal Revolution: Can Less Competition Mean More Meritocracy?)
Epistemic Status: Consider the horse as if it was not a three meter sphere
Economic papers that use math to prove things can point to interesting potential results and reasons to question one’s intuitions. What is frustrating is the failure to think outside of those models and proofs, analyzing the practical implications.
In this particular paper, the central idea is that when risk is unlimited and free, ratcheting up competition dramatically increases risk taken. This introduces sufficient noise that adding more competitors can make the average winner less skilled. At the margin, adding additional similar competitors to a very large pool has zero impact. Adding competitors with less expected promise makes things worse.
This can apply in the real world. The paper provides a good example of a very good insight that is then proven ‘too much,’ and which does not then question or vary its assumptions in the ways I would find most interesting.
I. THE BASIC MODEL AND ITS CENTRAL POINT
Presume some number of job openings. There are weak candidates and strong candidates. Each candidate knows if they are strong or weak, but not how many other candidates are strong, nor do those running the contest know how many are strong.
The goal of the competition is to select as many strong candidates as possible. Or formally, to maximize [number of strong selected – number of weak selected], which is the same thing if the number of candidates is fixed, but is importantly different later when the number of selected candidates can vary. Each candidate performs and is given a score, and for an N-slot competition, the highest N scores are picked.
By default, strong candidates score X and weak candidates score Y, X>Y, but each candidate can also take on as much risk as they wish, with any desired distribution of scores, so long as their score never |
16c28f00-0274-4ac7-8a62-b476386d9048 | trentmkelly/LessWrong-43k | LessWrong | SMBC comic: poorly programmed average-utility-maximizing AI
I laughed: SMBC comic. |
55c2c0eb-7b7d-4518-b2b3-1112950ce0fd | trentmkelly/LessWrong-43k | LessWrong | Scientists make monkeys smarter using brain implants [link]
Article at io9. The paper is available here.
The researchers showed monkeys specific images and then trained them to select those images out of a larger set after a time delay. They recorded the monkeys' brain function to determine which signals were important. The experiment tests the monkey's performance on this task in different cases, as described by io9:
> Once they were satisfied that the correct mapping had been done, they administered cocaine to the monkeys to impair their performance on the match-to-sample task (seems like a rather severe drug to administer, but there you have it). Immediately, the monkeys' performance fell by a factor of 20%.
>
> It was at this point that the researchers engaged the neural device. Specifically, they deployed a "multi-input multi-output nonlinear" (MIMO) model to stimulate the neurons that the monkeys needed to complete the task. The inputs of this device monitored such things as blood flow, temperature, and the electrical activity of other neurons, while the outputs triggered the individual neurons required for decision making. Taken together, the i/o model was able to predict the output of the cortical neurons — and in turn deliver electrical stimulation to the right neurons at the right time.
>
> And incredibly, it worked. The researchers successfully restored the monkeys' decision-making skills even though they were still dealing with the effects of the cocaine. Moreover, when duplicating the experiment under normal conditions, the monkeys' performance improved beyond the 75% proficiency level shown earlier. In other words, a kind of cognitive enhancement had happened.
This research is a remarkable followup to research that was done in rodents last year. |
9f03d9c9-decb-4db4-b2a6-d0973581b2d5 | trentmkelly/LessWrong-43k | LessWrong | Underlying model of an imperfect morphism
We've already seen that if M0=(F0,Q0) and M1=(F1,Q1) are generalised models, with the relation r⊂W0×W1 a Q-preserving morphism between them, then there is an underlying model Mr=(F0⊔F1,Qr) between them.
Since r⊂W0×W1, Qr is defined on r; indeed, it is non-zero on r only. The underlying model has functions r0 and r1 to M0 and M1, which push forward Qr in a unique way - to Q0 and Q1 respectively. Essentially:
* There is an underlying reality Mr of which M0 and M1 are different, consistent, facets.
Illustrated, for gas laws:
Underlying model of imperfect morphisms
But we've seen that relations r need not be Q-preserving; there are weaker conditions that also form categories.
Indeed, even in the toy example above, the ideal gas laws and the "atoms bouncing around" model don't have a Q-preserving morphism between them. The atoms bouncing model is more accurate, and the idea gas laws are just an approximation of these (for example, they ignore molar mass).
Let's make the much weaker assumption that r is Q-birelational - essentially that if any wi has non-zero Qi-measure (i.e. Qi(wi)>0), then r relates it to at least one other wj which also has non-zero Qj-measure. Equivalently, if we ignore all elements with zero Qi-measure, then r and r−1 are surjective relations between what's left. Then we have a more general underlying morphism result:
Statement of the theorem
Let r be a Q-birelational morphism between M0=(F0,Q0) and M1=(F1,Q1), and pick any 0≤α≤1.
Then there exists a generalised model Mαr=(F0⊔F1,Qαr), with Qαr=0 off of r⊂W0×W1 (this Qαr is not necessarily uniquely defined). This has natural functional morphisms r0:Mαr→M0 and r1:Mαr→M1.
Those ri push forward Qαr to Mi, such that for the distance metric L defined on morphisms,
1. |r0(Qαr)−Q0|1=αL(r),
2. |r1(Qαr)−Q1|1=(1−α)L(r).
By the definition of L, this is the minimum |r0(Qαr)−Q0|1+|r1(Qαr)−Q1|1 we can get. The proof is in this footnote[1].
Accuracy of models
If α=0, we're saying that M0 is a cor |
6f99858e-4157-43fe-9022-edbe52aaef96 | trentmkelly/LessWrong-43k | LessWrong | How to use "philosophical majoritarianism"
The majority of people would hold more accurate beliefs if they simply believed the majority. To state this in a way that doesn't risk information cascades, we're talking about averaging impressions and coming up with the same belief.
To the degree that you come up with different averages of the impressions, you acknowledge that your belief was just your impression of the average, and you average those metaimpressions and get closer to belief convergence. You can repeat this until you get bored, but if you're doing it right, your beliefs should get closer and closer to agreement, and you shouldn't be able to predict who is going to fall on which side.
Of course, most of us are atypical cases, and as good rationalists, we need to update on this information. Even if our impressions were (on average) no better than the average, there are certain cases where we know that the majority is wrong. If we're going to selectively apply majoritarianism, we need to figure out the rules for when to apply it, to whom, and how the weighting works.
This much I think has been said again and again. I'm gonna attempt to describe how.
Imagine for a moment that you are a perfectly rational Bayesian, and you just need data.
First realize that "duplicate people" don't count double. If you make a maximum precision copy of someone, that doesn't make him any more likely to be right- clearly we can do better than averaging over all people with equal weighting. By the same idea, finding out that a certain train of thought leading to a certain belief is common shouldn't make you proportionally more confident in that idea. The only reason it might make you any more confident in it is the possibility that its truth leads to its proliferation and therefore its popularity is (weak) evidence.
This explains why we can dismiss the beliefs of the billions of theists. First of all, their beliefs are very well correlated so that all useful information can be learned through only a handful of theis |
e632bb5c-cf0d-41af-9272-f0a20b6009a6 | trentmkelly/LessWrong-43k | LessWrong | The rationalist's checklist
Doctor Peter Pronovost has managed to single-handedly reduce the infection rates in ICU facilities nationwide from numbers like fourteen percent or twenty percent to zero. His solution is idiotically simple: a checklist. In a process as complex as ICU treatment, doctors perform chained simple steps very many times, and it can be easy to forget a step. These things add up. Read the article before continuing.
In their phenomenal book, The Power of Full Engagement, Jim Loehr and Tony Schwartz discuss a pattern they have discovered among all top performers, ranging from sports to music and business. Beyond a certain level, all top performers had established positive rituals for relaxation and deliberate practice. These positive rituals were daily ingrained habits and allowed them to surpass the merely excellent performers.
It is difficult to make use of most Less Wrong posts in terms of changing one's behavior. Even if you integrate a lesson fully, you will still miss steps on occasion. I propose we suggest checklists for various recurring activities that will offload these responsibilities from conscious thought to its much more reliable brother, ingrained habit. Fortunately, we should not need many checklists.
An example of a checklist is a morning routine:
* Short exercise (40 pushups, 50 situps)
* Shower
* Daily cosmetics (brush teeth, shave, skin moisturizer, hair forming cream, male scented lotion, deodorant)
* Make breakfast, which can be depending on what day it is mod 3: Black mango tea as well as
* Omelette with cheese and tomatoes
* Cereal with a side of oatmeal
* A piece of fruit and yoghurt
* Brief non-work and non-study related reading, for example, a novel
Another example of a checklist is during a conversation with a non-rationalist on the 5-second level: If you feel strong affect (anger or annoyance) at someone's point in a debate,
* Say "Let me think about this for a second."
* Are you two on the same inferential level? If no |
db32ff36-f10c-4138-9651-96c8a2b1d1c3 | StampyAI/alignment-research-dataset/special_docs | Other | Unifying Logic and Probability: A New Dawn for AI?
Unifying Logic and Probability: A New Dawn
for AI?
Stuart Russell
1University of California, Berkeley CA 94720, USA
russell@cs.berkeley.edu
http://www.cs.berkeley.edu/~russell
2Universit´ e Pierre et Marie Curie, Laboratoire d’informatique de Paris 6,
4 place Jussieu, 75252 Paris Cedex 05, France
Abstract. Logic and probability theory are two of the most important
branches of mathematics and each has played a significant role in ar-
tificial intelligence (AI) research. Beginning with Leibniz, scholars have
attempted to unify logic and probability. For “classical” AI,based largelyon first-order logic, the purpose of such a unification is to handle uncer-
tainty and facilitate learning from real data; for “modern” AI, based
largely on probability theory, the purpose is to acquire formal languageswith sufficient expressive power to handle complex domains and incor-
porate prior knowledge. This paper provides a brief summary of an in-
vited talk describing efforts in these directions, focusing in particular onopen-universe probability models that allow for uncertainty about the
existence and identity of objects.
Keywords: first-order logic, probability, probabilistic programming,
Bayesian logic, machine learning.
1 Introduction
From its earliest days, AI adopted the idea of declarative system reasoning over
explicitly represented knowledge with a general inference engine. Such systems
require a formal language to express knowledge about the real world; and the
real world has things in it . For this reason, in 1958, McCarthy [16] proposed
first-order logic—the mathematics of objects and relations—as the foundation
for what we now call “classical AI.”
The key benefit of first-order logic is its expressive power, which leads to
concise—and hence easily learnable—models. For example, the rules of chess
occupy 100pages in first-order logic, 105pages in propositional logic, and 1038
pages in the language of finite automata. The power comes from separatingpredicates from their arguments and qua ntifying over those arguments: so one
can write rules about On(p, c, x, y, t ) (piece pof color cis on square x, yat move
t) without having to fill in each specific value for c,p,x,y,a n d t.
Asecondresearchtradition,sometimescalled“modernAI,”developedaround
another important property of the real world: pervasive uncertainty about both
A. Laurent et al. (Eds.): IPMU 2014, Part I, CCIS 442, pp. 10–14, 2014.
c/circlecopyrtSpringer International Publishing Switzerland 2014
Unifying Logic and Probability 11
its state and its dynamics. Modern AI is based on probability theory, which pro-
vides principled methods for learning and making predictions from observations.The key advance underlying modern AI was the development of Bayesian net-
works[22] and the related family of undirected graphical models [6]. Bayes nets
provideda formal languagefor probabilitymodels and enabled rapid advances inmachine learning, vision, natural language understanding, and knowledge-based
systems. The expressive power of Bayes nets is, however, limited. They assume
afi x e ds e to f variables , each of which can take on a value from a fixed range;
thus, they are a propositional formalism, like Boolean circuits. The rules of chess
and of many other domains are beyond them.
What happened next, of course, is that classical AI researchers noticed the
pervasiveuncertainty,while modern AI re searchersnoticed, orremembered, that
the world has things in it. Both traditions arrived at the same place: the world
is uncertain and it has things in it .T od e a lw i t ht h i s ,w eh a v et o unify logic and
probability .
But how? Even the meaning of such a goal is unclear. Early attempts by
Leibniz, Bernoulli, De Morgan, Boole, Peirce, Keynes, Carnap, and Gaifman
(surveyed in [8,10]) involved attaching probabilities to logical sentences. This
line of work influenced AI research [9,3,14] but has serious shortcomings as a
vehicle for representing knowledge. An alternative approach, arising from both
branches of AI and from statistics, draws on the compositional semantics ofBayesnets. Some toolsuse programmingconstructs to build verylargeBeys nets
with repeated structure [7,4,15], while others adopt the syntactic and semantic
devices of logic (composable function symbols, logical variables, quantifiers) tocreate declarative, first-order probabilistic languages [5,23,25,12,11].
Despite their successes, these approaches miss an important consequence of
uncertainty in a world of things: there will be uncertainty about what things are
in the world . Real objects seldom wear unique identifiers or preannounce their
existence like the cast of a play. In the cas e of vision, for example, the existence
of objects must be inferredfrom raw data (pixels) that contain no explicit object
references at all. If, however, one has a probabilistic model of the ways in which
worlds can be composed of objects and of how objects cause pixel values, theninference canproposethe existence ofobjectsgiven onlypixel values asevidence.
Similar arguments apply to areas such as natural language understanding, web
mining, and computer security.
The difference between knowing all the objects in advance and inferring their
existence and identity from observation corresponds to an important but oftenoverlooked distinction between closed-universe languages such as SQL and logic
programs and open-universe languages such as full first-order logic.
This distinction is best understood in terms of the possible worlds under each
type of semantics. Figure 1(a) shows a simple example with two constants and
one binary predicate. Notice that first-order logic is an open-universe language:
eventhoughtherearetwo constantsymbols,the possibleworldsallowfor 1,2,or
indeed arbitrarily many objects. A closed-universe language enforces additional
assumptions that restrict the set of possible worlds:
12 S. Russell
. . .AB
A
BAB
A
BAB
A
BAB
A
BAB
A
B(b)AB AB AB AB AB AB
. . . . . . . . . (a)
Fig. 1.(a) Some of the first-order possible worlds for a language with two constant
symbols, Aand B, and one binary predicate. Arrows indicate the interpretation of
each constant symbol and the relations between objects. (b) The analogous figure
under closed-universe semantics.
–T h e unique names assumption requires that dist inct terms must refer to dis-
tinct objects.
–T h e domain closure assumption requires that there are no objects other than
those named by terms.
These two assumptions mean that every possible world contains the same
objects, which are in one-to-one correspondence with the ground terms of the
language (see Figure 1(b)).1
A formal probability model must specify the probability of every possible
world given the vocabulary (predicates, functions, constants)of the model’s syn-
tactic representation.Obviously,the set ofworldsunder open-universesemantics
islargerandmoreheterogeneous,whichmakesthe taskofdefiningopen-universe
probability models more challenging. The core part of the talk is concerned
with a first-order, open-universe probabilistic language called Bayesian logic or
BLOG [18,19]. BLOG was developed primarily as the PhD thesis research of
Brian Milch [17]. The key results derived for BLOG are the following:
–Every well-formed BLOG model specifie s a well-defined probability distribu-
tion over the possible worlds constructed from the vocabulary of the model.
–There exist Monte Carlo algorithms that provably converge(subject to tech-
nical conditions on the conditional distributions of the model) to the correct
posterior probability for any first-order query for any well-formed BLOG
model [20,1].
1The difference between open and closed universes can also be illustrated
with a common-sense example. Suppose a system knows just two sentences,
Fa t h e r(W illiam )=Billand Fa t h e r(Junior)=Bill. How many children does Bill
have? Under closed-universe semantics—e.g., in a database system—he has exactly
2; under open-universe semantics, between 1 and ∞.
Unifying Logic and Probability 13
The generic algorithms (importance sampling and MCMC applied to a dynam-
ically constructed ground representation) are often too slow for practical useon large models. Several avenues are b eing pursued for speeding up inference,
including special-purpose block samplers for variables constrained by determin-
istic relationships [13], static analysis to identify submodels amenable to efficientinference, lifted inference to avoid grounding by manipulating symbolic distri-
butions over large sets of objects [24,2 6], and compiler techniques to generate
model-specific inference code.
More than two dozen BLOG models have been developed, covering a wide
variety of standard machine learning models as well as applications includingcitation matching [21] and global seismic monitoring for the Comprehensive Nu-
clear Test-Ban Treaty [2].
2P r o s p e c t s
These are very early days in the process of unifying logic and probability. We
need much more experience in developing models for a wide range of applica-tions. Undoubtedly there are new modeling idioms, programming constructs,
and inference algorithms to discover.
The development of Bayes nets in the late 1980s connected machine learning
tostatisticsandreconnected(modern)AIwithvisionandlanguage.Itispossible
that first-order probabilistic languages, which have both Bayes nets and first-
order logic as special cases, can serve a similar, but more inclusive, unifying
role.
Acknowledgments. The author is supported by the Chaire Blaise Pascal ,
funded by the l’ ´Etat et la R´ egionˆIle de France and administered by the Fonda-
tion de l’ ´Ecole Normale Sup´ erieure. The research i s also supported by DARPA
contractFA8750-14-C-0011for the APPRIL project under the PPAML program
andpreviouslyby DARPAcontractFA8650-11-1-7153for the OUTBIDS project
under the MSEE program.
References
1. Arora, N., Russell, S., de Salvo Braz, R., Sudderth, E.: Gibbs sampling in open-
universe stochastic languages. In: UAI 2010 (2010)
2. Arora, N.S., Russell, S., Sudderth, E.: NET-VISA: Network processing vertically
integrated seismic analysis. Bull. Seism. Soc. Amer. 103 (2013)
3. Bacchus, F.: Representingand Reasoning with Probabilistic Knowledge. MIT Press
(1990)
4. Bessi`ere, P., Mazer, E., Ahuactzin, J.M., Mekhnacha, K.: Bayesian programming.
CRC (2013)
5. Breese, J.S.: Construction of belief and decision networks. Computational
Intelligence 8, 624–647 (1992)
6. Darroch, J.N., Lauritzen, S.L., Speed, T.P.: Markov fields and log-linear interaction
models for contingency tables. The Annals of Statistics 8(3), 522–539 (1980)
14 S. Russell
7. Gilks, W.R., Thomas, A., Spiegelhalter, D.J.: A language and program for complex
Bayesian modelling. The Statistician 43, 169–178 (1994)
8. Hailperin, T.: Probability logic. Notre Dame J. Formal Logic 25(3), 198–212 (1984)9. Halpern, J.Y.: An analysis of first-order logics of probability. AIJ 46(3), 311–350
(1990)
10. Howson, C.: Probability and logic. J. Applied Logic 1(3-4), 151–165 (2003)11. Kersting, K., De Raedt, L.: Bayesian logic programs. In: ILP 2000 (2000)
12. Koller, D., Pfeffer, A.: Probabilistic frame-based systems. In: AAAI1998 (1998)
13. Li, L., Ramsundar, B., Russell, S.: Dynamic scaled sampling for deterministic
constraints. In: AI/Stats 2013 (2013)
14. Lukasiewicz, T.: Probabilistic logic programming. In: ECAI (1998)
15. McCallum, A., Schultz, K., Singh, S.: FACTORIE: Probabilistic programming via
imperatively defined factor graphs. NIPS 22 (2010)
16. McCarthy, J.: Programs with common sense. In: Proc. Symposium on Mechanisa-
tion of Thought Processes. Her Majesty’s Stationery Office (1958)
17. Milch, B.: Probabilistic Models with Unknown Objects. Ph.D. thesis, UC Berkeley
(2006)
18. Milch, B., Marthi, B., Sontag, D., Russell, S.J., Ong, D., Kolobov, A.: BLOG:
Probabilistic models with unknown objects. In: IJCAI 2005 (2005)
19. Milch, B., Russell, S.: Extending Bayesian networks to the open-universe case. In:
Dechter, R., Geffner, H., Halpern, J. (eds.) Heuristics, Probability and Causality:A Tribute to Judea Pearl. College Publications (2010)
20. Milch, B., Russell, S.J.: General-purpose MCMC inference over relational struc-
tures. In: UAI 2006 (2006)
21. Pasula, H., Marthi, B., Milch, B., Russell, S.J., Shpitser, I.: Identity uncertainty
and citation matching. NIPS 15 (2003)
22. Pearl, J.: Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann (1988)23. Poole, D.: Probabilistic Horn abduction and Bayesian networks. AIJ 64, 81–129
(1993)
24. Poole, D.: First-order probabilistic inference. In: IJCAI 2003 (2003)25. Sato, T., Kameya, Y.: PRISM: A symbolic statistical modeling language. In: IJCAI
1997 (1997)
26. Van den Broeck, G.: Lifted Inference and Learning in Statistical Relational Models.
Ph.D. thesis, Katholieke Universiteit Leuven (2013) |
90ba7c1f-c1d4-424d-bbd9-f1e1f329161f | trentmkelly/LessWrong-43k | LessWrong | There Is No Control System For COVID
Introduction:
The standard model for explaining COVID transmission has a serious problem with the data. In the United States, despite seemingly large differences in policy and behavior, the difference in infection rates has been relatively small.
In this article I will explain why this convergence is so surprising for the standard model and how a relatively small modification can give much more sensible results. I will also discuss some of the other suggested approaches and why they are not able to adequately solve this problem.
The Problem:
If we look at the infection data as of November 2021, before California and New York had their winter surge, we see widely varying COVID infection rates. Vermont, the least infected state, had less than 3% of its population infected. The most infected state (excluding ones with major outbreaks before lockdown) was North Dakota with almost 28% of its population infected.
These large differences were attributed to different levels of restrictions and seriousness among the residents in fighting COVID. California was not the least infected state, and Florida not the most infected, but as large states with deviating policy responses and corresponding differences in infection rates they came to epitomize the two sides.
Florida, with its loose policies had people spend ~20% more time outside the house than California did, and by November 1st 20% of the population had been infected compared to California's 12%.
The implication then is that if Florida were to reduce its transmission by the 20% necessary to match California it would reduce its infection rate to 12%. However if we try to fit the daily transmission rate to the observed data in Florida, we see that reducing it by 20% would mean that only 0.75% would have been infected. In order to match California's infection rate Florida would have needed to reduce its transmission rate by just 3%
Using similar simulations we can see that the difference needed for Florida to match th |
b158cfba-7f59-4695-a1ac-24581b602bc0 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | [AN #70]: Agents that help humans who are still learning about their own preferences
Find all Alignment Newsletter resources [here](http://rohinshah.com/alignment-newsletter/). In particular, you can [sign up](http://eepurl.com/dqMSZj), or look through this [spreadsheet](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit?usp=sharing) of all summaries that have ever been in the newsletter. I'm always happy to hear feedback; you can send it to me by replying to this email.
Audio version [here](http://alignment-newsletter.libsyn.com/alignment-newsletter-70) (may not be up yet).
**Highlights**
--------------
[The Assistive Multi-Armed Bandit](https://arxiv.org/abs/1901.08654) *(Lawrence Chan et al)* (summarized by Asya): Standard approaches for inverse reinforcement learning assume that humans are acting optimally according to their preferences, rather than learning about their preferences as time goes on. This paper tries to model the latter by introducing the *assistive multi-armed bandit* problem.
In the standard *multi-armed bandit* problem, a player repeatedly chooses one of several “arms” to pull, where each arm provides reward according to some unknown distribution. Imagine getting 1000 free plays on your choice of 10 different, unknown slot machines. This is a hard problem since the player must trade off between exploration (learning about some arm) and exploitation (pulling the best arm so far). In *assistive multi-armed bandit*, a robot is given the opportunity to intercept the player every round and pull an arm of its choice. If it does not intercept, it can see the arm pulled by the player but not the reward the player receives. This formalizes the notion of an AI with only partial information trying to help a learning agent optimize their reward.
The paper does some theoretical analysis of this problem as well as an experimental set-up involving a neural network and players acting according to a variety of different policies. It makes several observations about the problem:
- A player better at learning does not necessarily lead to the player-robot team performing better-- the robot can help a suboptimal player do better in accordance with how much information the player's arm pulls convey about the reward of the arm.
- A robot is best at assisting when it has the right model for how the player is learning.
- A robot that models the player as learning generally does better than a robot that does not, even if the robot has the wrong model for the player's learning.
- The problem is very sensitive to which learning model the player uses and which learning model the robot assumes. Some player learning models can only be effectively assisted when they are correctly modeled. Some robot-assumed learning models effectively assist for a variety of actual player learning models.
**Asya's opinion:** The standard inverse reinforcement learning assumption about humans acting optimally seems unrealistic; I think this paper provides an insightful initial step in not having that assumption and models the non-optimal version of the problem in a clean and compelling way. I think it's a noteworthy observation that this problem is very sensitive to the player's learning model, and I agree with the paper that this suggests that we should put effort into researching actual human learning strategies. I am unsure how to think about the insights here generalizing to other inverse reinforcement learning cases.
**Technical AI alignment**
==========================
### **Problems**
[Multiparty Dynamics and Failure Modes for Machine Learning and Artificial Intelligence](https://www.mdpi.com/2504-2289/3/2/21/htm) *(David Manheim)* (summarized by Flo): While [Categorizing Variants of Goodhart’s Law](https://arxiv.org/abs/1803.04585) explains failure modes that occur when a single agent’s proxy becomes decoupled from the true goal, this paper aims to characterize failures involving multiple agents:
**Accidental steering** happens when the combined actions of multiple agents facilitate single-agent failures. For example, catching more fish now is usually positively correlated with a fisherman's long term goals, but this relationship inverts once there are lots of fishermen optimizing for short term gains and the fish population collapses.
**Coordination Failure** occurs when agents with mutually compatible goals fail to coordinate. For example, due to incomplete models of other agent's goals and capabilities, two agents sharing a goal might compete for a resource even though one of them is strictly better at converting the resource into progress towards their goal.
**Adversarial optimization** is when an agent **O** steers the world into states where **V**'s proxy goal is positively correlated with **O**'s goal. For example, one could exploit investors who use short term volatility as a proxy for risk by selling them instruments that are not very volatile but still risky.
**Input Spoofing** is the act of one agent manipulating another learning agent's model, either by manufacturing false evidence or by filtering the received evidence systematically, as arguably happened with [Microsoft’s Tay](https://en.wikipedia.org/wiki/Tay_(bot)).
Finally, **Goal co-option** happens when agent **O** has (partial) control over the hardware agent **V** runs or relies on. This way, **O** can either modify the reward signal **V** receives to change what **V** optimizes for, or it can directly change **V**'s outputs.
The difficulties in precisely modelling other sophisticated agents and other concerns related to embedded agency make it hard to completely avoid these failure modes with current methods. Slowing down the deployment of AI systems and focussing on the mitigation of the discussed failure modes might prevent limited near term catastrophes, which in turn might cause a slowdown of further deployment and prioritization of safety.
**Flo's opinion:** I like that this paper subdivides failure modes that can happen in multiparty optimization into several clear categories and provides various models and examples for each of them. I am unsure about the conclusion: on one hand, slowing down deployment to improve the safety of contemporary systems seems very sensible. On the other hand, it seems like there would be some failures of limited scope that are hard to reproduce "in the lab". Widely deployed AI systems might provide us with valuable empirical data about these failures and improve our understanding of the failure modes in general. I guess ideally there would be differential deployment with rapid deployment in noncritical areas like managing local parking lots, but very slow deployment for critical infrastructure.
**Rohin's opinion:** I'm particularly interested in an analysis of how these kinds of failures affect existential risk. I'm not sure if David believes they are relevant for x-risk, but even if so the arguments aren't presented in this paper.
### **Mesa optimization**
[Relaxed adversarial training for inner alignment](https://www.alignmentforum.org/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment) *(Evan Hubinger)* (summarized by Matthew): Previously, Paul Christiano [proposed](https://ai-alignment.com/training-robust-corrigibility-ce0e0a3b9b4d) creating an adversary to search for inputs that would make a powerful model behave "unacceptably" and then penalizing the model accordingly. To make the adversary's job easier, Paul relaxed the problem so that it only needed to find a pseudo-input, which can be thought of as predicate that constrains possible inputs. This post expands on Paul's proposal by first defining a formal unacceptability penalty and then analyzing a number of scenarios in light of this framework. The penalty relies on the idea of an amplified model inspecting the unamplified version of itself. For this procedure to work, amplified overseers must be able to correctly deduce whether potential inputs will yield unacceptable behavior in their unamplified selves, which seems plausible since it should know everything the unamplified version does. The post concludes by arguing that progress in model transparency is key to these acceptability guarantees. In particular, Evan emphasizes the need to decompose models into the parts involved in their internal optimization processes, such as their world models, optimization procedures, and objectives.
**Matthew's opinion:** I agree that transparency is an important condition for the adversary, since it would be hard to search for catastrophe-inducing inputs without details of how the model operated. I'm less certain that this particular decomposition of machine learning models is necessary. More generally, I am excited to see how adversarial training can help with [inner alignment](https://www.alignmentforum.org/posts/pL56xPoniLvtMDQ4J/the-inner-alignment-problem).
### **Learning human intent**
[Learning from Observations Using a Single Video Demonstration and Human Feedback](http://arxiv.org/abs/1909.13392) *(Sunil Gandhi et al)* (summarized by Zach): Designing rewards can be a long and consuming process, even for experts. One common method to circumvent this problem is through demonstration. However, it might be difficult to record demonstrations in a standard representation, such as joint positions. **In this paper, the authors propose using human feedback to circumvent the discrepancy between how demonstrations are recorded (video) and the desired standard representation (joint positions).** First, humans provide similarity evaluations of short clips of an expert demonstration to the agent's attempt and a similarity function is learned by the agent. Second, this similarity function is used to help train a policy that can imitate the expert. Both functions are learned jointly. The algorithm can learn to make a Hopper agent back-flip both from a Hopper demonstration of a back-flip, and from a YouTube video of a human backflipping. Ultimately, the authors show that their method improves over another method that uses human feedback without direct comparison to desired behavior.
**Zach's opinion:** This paper seems like a natural extension of prior work. The imitation learning problem from observation is well-known and difficult. Introducing human feedback with a structured state space definitely seems like a viable way to get around a lot of the known difficulties with other methods such as a GAIL.
### **Handling groups of agents**
[Collaborating with Humans Requires Understanding Them](https://bair.berkeley.edu/blog/2019/10/21/coordination/) *(Micah Carroll et al)* (summarized by Rohin): *Note: I am second author on this paper.* Self-play agents (like those used to play [Dota](https://blog.openai.com/openai-five/) ([AN #13](https://mailchi.mp/8234356e4b7f/alignment-newsletter-13)) and [Starcraft](https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/) ([AN #43](https://mailchi.mp/768a8130013f/alignment-newsletter-43))) are very good at coordinating with *themselves*, but not with other agents. They "expect" their partners to be similar to them; they are unable to predict what human partners would do. In competitive games, this is fine: if the human deviates from optimal play, even if you don't predict it you will still beat them. (Another way of saying this: the minimax theorem guarantees a minimum reward *regardless* of the opponent.) However, in cooperative settings, things are not so nice: a failure to anticipate your partner's plan can lead to arbitrarily bad outcomes. We demonstrate this with a simple environment that requires strong coordination based on the popular game Overcooked. We show that agents specifically trained to play alongside humans perform much better than self-play or population-based training when paired with humans, both in simulation and with a real user study.
**Rohin's opinion:** I wrote a short [blog post](https://www.alignmentforum.org/posts/dBMC63hjkc5wPqTC7/human-ai-collaboration) talking about the implications of the work. Briefly, there are three potential impacts. First, it seems generically useful to understand how to coordinate with an unknown agent. Second, it is specifically useful for scaling up [assistance games](https://arxiv.org/abs/1606.03137) ([AN #69](https://mailchi.mp/59ddebcb3b9a/an-69-stuart-russells-new-book-on-why-we-need-to-replace-the-standard-model-of-ai)), which are intractable to solve optimally. Finally, it can lead to more ML researchers focusing on solving problems with real humans, which may lead to us finding and solving other problems that will need to be solved in order to build aligned AI systems.
**Read more:** [Paper: On the Utility of Learning about Humans for Human-AI Coordination](https://arxiv.org/abs/1910.05789)
[Learning Existing Social Conventions via Observationally Augmented Self-Play](https://arxiv.org/abs/1806.10071) *(Adam Lerer and Alexander Peysakhovich)* (summarized by Rohin): This paper starts from the same key insight about self-play not working when it needs to generalize to out-of-distribution agents, but then does something different. They assume that the test-time agents are playing an **equilibrium policy**, that is, each agent plays a best response policy assuming all the other policies are fixed. They train their agent using a combination of imitation learning and self-play: the self-play gets them to learn an equilibrium behavior, while the imitation learning pushes them towards the equilibrium that the test-time agents use. They outperform both vanilla self-play and vanilla imitation learning.
**Rohin's opinion:** Humans don't play equilibrium policies, since they are often suboptimal. For example, in Overcooked, any equilibrium policy will zip around the layout, rarely waiting, which humans are not capable of doing. However, when you have a very limited dataset of human behavior, the bias provided by the assumption of an equilibrium policy probably does help the agent generalize better than a vanilla imitation learning model, and so this technique might do better when there is not much data.
### **Adversarial examples**
[Adversarial Policies: Attacking Deep Reinforcement Learning](https://arxiv.org/abs/1905.10615) *(Adam Gleave et al)* (summarized by Sudhanshu): This work demonstrates the existence of *adversarial policies* of behaviour in high-dimensional, two-player zero-sum games. Specifically, they show that adversarially-trained agents ("Adv"), who can only affect a victim's observations of their (Adv's) states, can act in ways that confuse the victim into behaving suboptimally.
An adversarial policy is trained by reinforcement learning in a single-player paradigm where the victim is a black-box fixed policy that was previously trained via self-play to be robust to adversarial attacks. As a result, the adversarial policies learn to push the observations of the victim outside the training distribution, causing the victim to behave poorly. The adversarial policies do not actually behave intelligently, such as blocking or tackling the victim, but instead do unusual things like spasming in a manner that appears random to humans, curling into a ball or kneeling.
Further experiments showed that if the victim's observations of the adversary were removed, then the adversary was unable to learn such an adversarial policy. In addition, the victim's network activations were very different when playing against an adversarial policy relative to playing against a random or lifeless opponent. By comparing two similar games where the key difference was the number of adversary dimensions being observed, they showed that such policies were easier to learn in higher-dimensional games.
**Sudhanshu's opinion:** This work points to an important question about optimisation in high dimension continuous spaces: without guarantees on achieving solution optimality, how do we design performant systems that are robust to (irrelevant) off-distribution observations? By generating demonstrations that current methods are insufficient, it can inspire future work across areas like active learning, continual learning, fall-back policies, and exploration.
I had a tiny nit-pick: while the discussion is excellent, the paper doesn't cover whether this phenomenon has been observed before with discrete observation/action spaces, and why/why not, which I feel would be an important aspect to draw out. In a finite environment, the victim policy might have actually covered every possible situation, and thus be robust to such attacks; for continuous spaces, it is not clear to me whether we can *always* find an adversarial attack.
In separate correspondence, author Adam Gleave notes that he considers these to be relatively low-dimensional -- even MNIST has way more dimensions -- so when comparing to regular adversarial examples work, it seems like multi-agent RL is harder to make robust than supervised learning.
**Read more:** [Adversarial Policies website](https://adversarialpolicies.github.io/)
**Other progress in AI**
========================
### **Reinforcement learning**
[Solving Rubik’s Cube with a Robot Hand](https://openai.com/blog/solving-rubiks-cube/) *(OpenAI)* (summarized by Asya): Historically, researchers have had limited success making general purpose robot hands. Now, OpenAI has successfully trained a pair of neural networks to solve a Rubik's cube with a human-like robot hand (the learned portion of the problem is manipulating the hand -- solving the Rubik's cube is specified via a classical algorithm). The hand is able to solve the Rubik's cube even under a variety of perturbations, including having some of its fingers tied together, or having its view of the cube partially occluded. The primary innovation presented is a new method called *Automatic Domain Randomization* (ADR). ADR automatically generates progressively more difficult environments to train on in simulation that are diverse enough to capture the physics of the real world. ADR performs better than existing domain randomization methods, which require manually specifying randomization ranges. The post speculates that ADR is actually leading to *emergent meta-learning*, where the network learns a learning algorithm that allows itself to rapidly adapt its behavior to its environment.
**Asya's opinion:** My impression is that this is a very impressive robotics result, largely because the problem of transferring training in simulation to real life ("sim2real") is extremly difficult. I also think it's quite novel if as the authors hypothesize, the system is exhibiting emergent meta-learning. It's worth noting that the hand is still not quite at human-level -- in the hardest configurations, it only succeeds 20% of the time, and for most experiments, the hand gets some of the state of the cube via Bluetooth sensors inside the cube, not just via vision.
**Read more:** [Vox: Watch this robot solve a Rubik’s Cube one-handed](https://www.vox.com/future-perfect/2019/10/15/20910007/robot-rubiks-cube-one-handed)
**News**
========
[FHI DPhil Scholarships](https://www.fhi.ox.ac.uk/scholarships/) (summarized by Rohin): The Future of Humanity Institute will be awarding up to two DPhil scholarships for the 2020/21 academic year, open to students beginning a DPhil at the University of Oxford whose research aims to answer crucial questions for improving the long-term prospects of humanity. Applications will open around January or February, and decisions will be made in April.
[Post-Doctoral Fellowship on Ethically Aligned Artificial Intelligence](https://mila.quebec/en/2019/06/call-for-application-post-doctoral-fellowship-on-ethically-aligned-artificial-intelligence/) (summarized by Rohin) (H/T Daniel Dewey): Mila is looking for a postdoctoral fellow starting in Fall 2020 who would work on ethically aligned learning machines, towards building machines which can achieve specific goals while acting in a way consistent with human values and social norms. Applications are already being processed, and will continue to be processed until the position is filled. |
05f7b2eb-5e96-4bde-8149-796c5d68a848 | trentmkelly/LessWrong-43k | LessWrong | Mistakes I’ve made part 3: Poor sacrificial accounting
There was a time when I routinely refused car travel, in favor of my more sustainable bicycle. Not always, but I had a high bar—if it was bucketing down with rain and I had no plastic pants, this was not sufficient excuse for instance. I would sometimes decline a ride even when others were purportedly driving somewhere anyway, to avoid encouraging them to be ‘driving anyway’ more often. I did enjoy cycling, on a good day, but many days weren’t good. People at school laughed at me on my bike, so sometimes I walked instead because I couldn’t stand them looking at me. Refusing cars was often a sacrifice.
My concern was the climate. My basic model of ‘sustainability’ was this: humanity has a bunch of resources, and when they run out we all die. If we use them slow enough, they can mostly last forever, but if we use them faster, we will die soon. Much like a bank account that earns lots of interest, or can be spent down in an afternoon. Space in the atmosphere for greenhouse gasses was a key resource. Driving was known to be unsustainable: it added carbon dioxide to the atmosphere faster than could be supported. This model has a few problems I think, but it is debatable, and this blog post is not about them. So lets suppose for now that I was right, and every car trip took humanity a notch closer to annihilation.
Given that driving was unsustainable, I wanted people to stop driving, so we wouldn’t all die. Naturally, it followed that I should not drive (I thought). If the cost of a billion people driving was too great for the benefits of a billion people driving, then the cost of one of those people driving was probably too great for the benefit of that person driving. Furthermore, I seemed a particularly easy person to convince, from my perspective. Furthermore, the benefit of me getting to school sooner and with dry pants was obviously unimaginably minuscule compared to any decrement in humanity’s ability to survive in the long term.
This is all very wrong. Why? Let |
6e001da7-ee40-4029-824c-1e91eb376e8c | trentmkelly/LessWrong-43k | LessWrong | Thoughts on designing policies for oneself
Note: This was originally written in relation to this rather scary comment of lukeprog's on value drift. I'm now less certain that operant conditioning is a significant cause of value drift (leaning towards near/far type explanations), but I decided to share my thoughts on the topic of policy design anyway.
----------------------------------------
Several years ago, I had a reddit problem. I'd check reddit instead of working on important stuff. The more I browsed the site, the shorter my attention span got. The shorter my attention span got, the harder it was for me to find things that were enjoyable to read. Instead of being rejuvenating, I found reddit to be addictive, unsatisfying, and frustrating. Every time I thought to myself that I really should stop, there was always just one more thing to click on.
So I installed LeechBlock and blocked reddit at all hours. That worked really well... for a while.
Occasionally I wanted to dig up something I remembered seeing on reddit. (This wasn't always bad--in some cases I was looking up something related to stuff I was working on.) I tried a few different policies for dealing with this. All of them basically amounted to inconveniencing myself in some way or another whenever I wanted to dig something up.
After a few weeks, I no longer felt the urge to check reddit compulsively. And after a few months, I hardly even remembered what it was like to be an addict.
However, my inconvenience barriers were still present, and they were, well, inconvenient. It really was pretty annoying to make an entry in my notebook describing what I was visiting for and start up a different browser just to check something. I figured I could always turn LeechBlock on again if necessary, so I removed my self-imposed barriers. And slid back in to addiction.
After a while, I got sick of being addicted again and decided to do something about it (again). Interestingly, I forgot my earlier thought that I could just turn LeechBlock |
4eaa61ba-dc5d-40bc-8bf0-ce210d3370e8 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Finding gliders in the game of life
[ARC’s current approach to ELK](https://ai-alignment.com/mechanistic-anomaly-detection-and-elk-fb84f4c6d0dc) is to point to latent structure within a model by searching for the “reason” for particular correlations in the model’s output. In this post we’ll walk through a very simple example of using this approach to identify gliders in the game of life.
We’ll use the game of life as our example instead of real physics because it’s much simpler, but everything in the post would apply just as well to identifying “strawberry” within a model of quantum field theory. More importantly, we’re talking about identifying latent structures in physics because it’s very conceptually straightforward, but I think the same ideas apply to identifying latent structure within messier AI systems.
#### Setting: sensors in the game of life
The game of life is a cellular automaton where an infinite grid of cells evolves over time according to simple rules. If you haven’t encountered it before, you can see the rules at [wikipedia](https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life) and learn more at [conwaylife.com](https://conwaylife.com/wiki/Main_Page).
A glider is a particular pattern of cells. If this pattern occurs in empty space and we simulate 4 steps of the rules, we end up with the same pattern shifted one square to the right and one square down.
Let’s imagine some scientists observing the game of life via a finite set of sensors. Each sensor is located at a cell, and at each timestep the sensor reports whether its cell is empty (“dead”) or full (“alive”). For simplicity, we’ll imagine just two sensors A and B which lie on a diagonal 25 cells apart. So in any episode our scientists will observe two strings of bits, one from each sensor.
(To be more realistic we could consider physically-implemented sensors, e.g. patterns of cells in the game of life which measure what is happening by interacting with the grid and then recording the information in a computer also built inside the game of life. But that adds a huge amount of complexity without changing any of our analysis, so for now we’ll just talk about these supernatural sensors.)
These scientists don’t know how the game of life works. All they see are the reports of the sensors. They observe that sensor B often fires 100 timesteps after sensor A, which they call an “A-B pattern.” Perhaps sensor A fires on 0.1% of timesteps, and sensor B fires on 0.1% of timesteps, but A-B patterns occur on 0.01% of timesteps instead of the 0.0001% you would have expected if the events were independent.
Scientists hypothesize that A-B patterns occur due to an object traveling between sensor A and sensor B. They call this hypothetical object a “glider,” but they have no idea that a glider consists of a particular pattern of 5 live cells.
To help understand the world, the scientists collect a large training set of sensor readings and train a generative model which can predict those readings, i.e. a function that maps a uniformly random string of bits ε to a sequence of observations for sensor A and sensor B. This generative model is trained to assign a high probability to the sequences in the training set.
The generative model makes good predictions, and in particular it reproduces the surprising frequency of A-B patterns. As a result, the scientists expect that “gliders” must correspond to some latent structure in the model.
#### Explaining A-B patterns in the generative model
Let’s suppose that the generative model uses the random seed ε to fill in a large grid, with 10% of cells alive and 90% dead, simulates the grid for 1000 timesteps, and then outputs the 1000 bits observed by each of the sensors.
To find gliders, we’ll search for a “probabilistic heuristic argument” based on the presumption of independence that A-B patterns are common. We expect that argument to somehow reflect the fact that gliders travel from sensor A to sensor B. We write about formalizing probabilistic heuristic arguments [here](https://arxiv.org/abs/2211.06738), but in the context of this post we will make use of only extremely simple arguments.
The most naive heuristic argument is to treat every cell as independent and ask: at each timestep, how likely is it that a cell is alive?
* At timestep 0, each cell has a 10% probability of being alive.
* We can compute the probability that a cell is alive at timestep 1 if each of it and each of its 8 neighbors is alive independently with probability 10% at timestep 0. This results in a 5% probability (this estimate is exactly correct).
* Similarly, we can compute the probability that a cell is alive at timestep 2 if it and each of its 8 neighbors is alive independently with probability 5% at timestep 1. This results in a ~0.75% probability (this estimate is a significant underestimate, because actually the cells are **not** independent at timestep 1).
* In general, we can inductively compute that at timestep *n* a cell has is alive with probability roughly exp(-exp(*n*)) probability of being alive.
This approximation greatly overestimates the decay rate, because after the first timestep the status of adjacent cells are extremely highly correlated. It also underestimates the limiting density of living cells, because once a group of cells is stable they are likely to remain stable indefinitely (this is called “[ash](http://wwwhomes.uni-bielefeld.de/achim/freq_top_life.html)”). A more sophisticated heuristic argument could take these spatial and temporal correlations into account, but these errors aren’t important for our purposes and we’ll keep working with this extremely simple argument.
This argument predicts that sensor A and sensor B are completely independent, and so the rate of A-B patterns should be the product of (A frequency) and (B frequency). So we haven’t yet explained the surprisingly high rate of A-B patterns.
One way we can try to improve the argument to explain A-B patterns is by explicitly describing a series of events that can give rise to an A-B pattern:
* With probability about 0.004% per cell, the initialization will happen to contain the 5 cells of a glider. (This is an underestimate, because it neglects the fact that gliders can be created at later timesteps; a more sophisticated argument would include that possibility.)
* If a glider appears, then our naive heuristic argument implies that there is a fairly high probability that all of the cells encountered by the glider will be empty. (This is an overestimate, because we underestimated the limiting density of ash.)
* If that happens on the A-B diagonal, then we can simulate the game of life rules to derive that the glider will pass through sensor A, and then pass through sensor B after 100 timesteps.
So overall we conclude that A-B patterns should occur at a rate of about 0.004% per timestep. This is a massive increase compared to the naive heuristic argument. It’s still not very good, and it would be easy to improve the estimate with a bit of work, but for the purpose of this post we’ll stick with this incredibly simple argument.
#### Was that a glider?
Suppose that our scientists are interested in gliders and that they have found this explanation for A-B patterns. They want to use it to define a “glider-detector,” so that they can distinguish A-B patterns that are caused by gliders from A-B patterns caused by coincidence (or different mechanisms).
(Why do they care so much about gliders? I don’t know, it’s an illustrative thought experiment. In reality we’d be applying these ideas to identifying and recognizing safe and happy humans, and distinguishing observations caused by actual safe humans from observations caused by sensor tampering or the AI lying.)
This explanation is simple enough that the scientists could look at it, understand what’s going on, and figure out that a “glider” is a particular pattern of 5 cells. But a lot of work is being done by that slippery word “understand,” and it’s not clear if this approach will generalize to complicated ML systems with trillions of parameters. We’d like a fully-precise and fully-automated way to use this explanation to detect gliders in a new example.
Our explanation pointed to particular parts of the model and said “often X and Y will happen, leading to an A-B pattern” where X = “the 5 cells of a glider will appear” and Y = “nothing will get in the way.”
To test whether this explanation captures a given occurrence of an A-B pattern, we just need to check if X and Y actually happened. If so, we say the pattern is due to a glider. If not, we say it was a coincidence (or something else).
More generally, given an argument for why a behavior often occurs, and a particular example where the behavior occurs, we need to be able to ask “how much is this instance of the behavior captured by that argument?” It’s not obvious if this is possible for general heuristic arguments, and it’s certainly more complex than in this simple special case. We are tentatively optimistic, and we at least think that it can be done for cumulant propagation in particular (the heuristic argument scheme defined in D [here](https://arxiv.org/pdf/2211.06738.pdf#page=43)). But this may end up being a major additional desideratum for heuristic arguments.
### Some subtleties
#### Handling multiple reasons
We gave a simple heuristic argument that A-B patterns occur at a rate of 0.004%. But a realistic heuristic argument might suggest many *different reasons* that an A-B pattern can occur. For example, a more sophisticated argument might identify three possibilities:
* With probability 0.004% a glider travels from A to B.
* With probability 0.0001% A and B both fire by coincidence.
* With probability 0.000001% an [acorn](https://conwaylife.com/wiki/Acorn) appears between A and B, doesn’t meet any debris as it expands, and causes an A-B pattern. (Note: this would have a much smaller probability given the real density of ash, and I’m not sure it can actually give rise to an A-B pattern, but it’s an illustrative possibility that’s more straightforward than the actual next item on this list.)
Now if the scientists look at a concrete example of an A-B pattern and ask if this explanation captures the example, they will get a huge number of false positives. How do they pick out the *actual* gliders from the other terms?
One simple thing they can do is *be more specific* about what observations the glider creates. Gliders cause most A-B patterns, but they cause an even larger fraction of the A-B **correlation**. But “A and B both fire by coincidence” doesn’t contribute to that correlation at all. Beyond that, the scientists probably had other observations that led them to hypothesize an object traveling from A to B — for example they may have noticed that A-B patterns are more likely when there are fewer live cells in the area of A and B — and they can search for explanations of these additional observations.
However, even after trying to point to gliders in particular as specifically as they can, the scientists probably can’t rule everything else out. If nothing else, it’s possible to create a machine that is trying to convince the scientists that a glider is present. Such machines are possible in the game of life (at least if we use embedded sensors), and they do explain some (vanishingly tiny!) part of the correlation between sensor A and sensor B.
So let’s assume that after specifying gliders as precisely as we can, we still have multiple explanations: perhaps gliders explain 99.99% of the A-B correlation, and acorns explain the remaining 0.01%. Of course these aren’t labeled conveniently as “gliders” and “acorns,” it’s just a big series of deductions about a generative model.
Our approach is for scientists to pick out gliders as the *primary* source of the A-B correlation on the training distribution. We’ll imagine they set some threshold like 99.9% and insist that gliders must explain at least 99.9% of the A-B correlation. There are two ways we can leverage this to get a glider detector rather than a glider-or-acorn detector:
* We can search for the *simplest* argument that captures most of the effect on the training distribution, and hope that the simplest way to argue for this effect ignores all of the non-central examples like acorns. This was our initial hope (gestured at in the document [Eliciting latent knowledge](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit#heading=h.mm8czcz6whkh)); while we think it works better than other reporter regularization strategies, we don’t think it **always** works and so aren’t focusing on it.
* We can search for *any* argument that captures most of the effect on the training distribution without capturing the new example. If we find any such argument, then we conclude that the new example is possibly-not-a-glider. In this case, we find that simply dropping the part of the explanation about acorns still explains 99.99% of the A-B correlation, and so an acorn will always be flagged as possibly-not-a-glider. This is our current approach, discussed in more detail in [Mechanistic anomaly detection and ELK](https://ai-alignment.com/mechanistic-anomaly-detection-and-elk-fb84f4c6d0dc).
#### Recognizing unusual gliders
Consider an example where a [glider gun](https://conwaylife.com/wiki/Gosper_glider_gun) randomly forms and emits a sequence of gliders, each causing an A-B pattern. Intuitively we would say that each A-B pattern is caused by a glider. But the methodology we’ve described so far probably wouldn’t recognize this as an A-B pattern caused by a glider. Instead it would be characterized as an anomaly. That’s an appropriate judgment, but the scientists really wanted a glider detector, not an anomaly detector!
The problem is that our argument for A-B patterns specifies not only the fact that a glider passes from A to B, but also the fact that gliders are formed randomly at initialization. If a glider is formed in an unusual place or via an unusual mechanism, then the argument may end up not applying.
If the scientists had examples where A-B patterns were caused by glider guns, then they could include those and fix the problem by considering only arguments that capture those cases as well. But they may not have any way to get those labels, e.g. because glider guns are only ever produced in worlds involving complex AI systems who could also tamper with sensors directly in a way that fools the human labelers.
Without the ability to create glider guns on their own, how can the scientists point more specifically to the concept of “glider,” without inadvertently pointing to the entire causal process that produces A-B patterns including the events causally upstream of the glider?
One way to point to gliders is to use two sensors to triangulate: gliders are a common cause of sensor A firing and of sensor B firing, and scientists could try to pick out gliders as the “latest” common cause. This potentially gives us a way to point to a particular point on the causal chain to A-B patterns, rather than indiscriminately pointing to the whole thing.
In practice they would also have many other regularities and relationships they could use to pick out gliders. But this simplest example seems to capture the main idea. We’ll try to make this idea precise in the next section.
Rejecting unusual gliders seems like it might be a serious problem — having an anomaly detector could still be useful but would not constitute a solution to ELK. So a lot depends on whether there is some way to actually point to gliders rather than just flagging anything unusual as anomaly.
#### Can we point to the latest common cause?
We might be able to implement this by searching over explanations that capture (i) the A-B correlation on the training set, (ii) the fact that sensor A fires on the new example, (iii) the fact that sensor B fires on the new example. We say that the A-B correlation is “potentially not due to a glider” if there is any explanation of this form which *fails* to capture the A-B correlation on the new example.
Our hope is that any argument which captures the A-B patterns on the training set needs to argue that “if a glider passes through A and encounters no debris, then you get an A-B pattern.” And any argument which predicts that sensor B fires on the new example needs to establish that “On the new input a glider passes through A and encounters no debris.” So an argument which does both things would necessarily need to capture the A-B pattern on the new example.
In contrast, if sensors A and B had fired for separate reasons (or because of some other mechanism like an acorn) then we could imagine an argument that separately argues that each of A and B fires without arguing that an A-B pattern occurs.
One obvious subtlety is that we are separately talking about predicting A, predicting B, and predicting the A-B correlation on a *single* example, and it’s not clear what it even means to talk about the A-B correlation on a single example. We think it is possible to make sense of this in the context of cumulant propagation, but we don’t know how generally that will hold.
Overall we’re not really able to evaluate this proposal concretely given the informal way we’ve talked about heuristic arguments so far, and we’re just not sure if it will work.
#### Things that are not gliders
If this all works as we’re sketching, we can speculate about which A-B patterns would be described as “due to gliders” vs “not due to gliders:”
* **Not a glider**: a glider passes through A and a second glider passes through B.
* **Not a glider**: a glider passes through A and runs into debris, but then a new glider is immediately created nearby traveling towards B.
* **Probably not a glider**: while traveling from A to B a glider runs into a piece of debris and starts falling apart, but then hits a second piece of debris that allows it to recover. This effect might appear in the initial argument, but it can almost certainly be separated from the “don’t encounter any debris” component of the argument. If so, then it will only count as a glider if the prevalence of this double-debris scenario on the training set is above the threshold the scientists set for “what fraction of the training set behavior is due to gliders.” We wouldn’t expect it to be for a particular random type of crash that allows a glider to survive, **unless** that kind of crash was very common or else there was a systematic reason that many different kinds such crashes tended to allow gliders to survive without changing their travel time.
* **Hopefully a glider**: a glider is created by a glider gun or some unusual mechanism. This depends on the speculation from the last two sections.
* **Hopefully a glider**: a glider passes through A, and makes it to B only because there is some special glider-escorting-machinery clearing out all the debris in front of it. This is similar to the last point and depends on the same speculation, but also depends on details of exactly how we specify gliders. If we identify glider travel as the cause of the exact 100 timestep difference between the A and B sensors firing, it’s more likely that this would count as a glider (since the timestep difference is caused by the same mechanism, even if the successful gliding was not).
#### Special cases and false positives
Suppose that for an arbitrary event X (like X=”cell 33004 is on, cell 40092 is off…”) we can divide the argument “sometimes gliders appear and travel from A to B” into two sub-arguments:
* “Sometimes X happens and a glider appears and travels from A to B”
* “Sometimes X doesn’t happen and a glider appears and travels from A to B”
Such that the first sub-argument captures the A-B pattern on inputs where X is true and the second captures the A-B pattern inputs where X is false.
Then for *any* given new example, we could pick some property X which is true for the new example but rare on the training distribution. By selecting the sub-argument that assumes X is false, we can always find an argument explaining the training behavior but not the new example. As a result, we will *always* conclude that a new input is potentially anomalous.
It is possible to special case proofs in this way: I can prove (Y is common) by proving (X is common) and X → Y, even if the proof of X → Y completely ignores the assumption X.
We could try to rule this out by looking for the *simplest* argument, as mentioned above. That might be a workable approach, but it’s not the one ARC is currently exploring.
Instead we are hoping to rule this out structurally. There are two ways this could work, both of which seem plausible:
* Suppose that I have an argument that X is common and that X → Y, but the argument for X → Y never actually uses the assumption of X. Then we could hope that this argument still “captures” an input where X is false, because it is still possible to carry out the X → Y argument without actually using the assumption of X.
* Rather than searching over *all* arguments that explain the training behavior, to see if any of them fail to explain the new example, we could find a *single* argument that explains the training behavior and then look for “subsets” of that argument. If we take this approach and start with an argument π for Y that makes no reference to X, then we could hope that an argument “probably X, and X → Y” isn’t a subset of π.
### A related problem: finding strawberries in the real world
Eliezer sometimes [talks about](https://twitter.com/esyudkowsky/status/1070095840608366594) the problem of getting a very smart AI to put a strawberry on a plate. If successful, ARC’s approach to ELK should directly address that problem. In particular, it should be possible to evaluate whether a proposed action puts an actual real strawberry on a particular plate. And that solution should work even if the proposed actions are very complex, e.g. if an AI needs to carry out a complicated military campaign to defend a convoy delivering the last surviving strawberry. Moreover, it should work even if actions of similar complexity could replace the strawberry with a convincing fake (or tamper with sensors) in a way that humans would never notice.
(In Eliezer’s most recent discussions of this problem he specifies that the AI should actually make two strawberries that are “identical down to the cellular but not molecular level.” Defining “identical down to the cellular level” using our approach is more complicated and we think less interesting, though still possible; “put an actual real strawberry on a plate” is more important and more similar to the kinds of subproblems we need to solve.)
In our approach, we’d try to point to a strawberry as the cause of a certain strawberry-observation-correlation we’ve noticed: the shape, the red color, the green stem, the seeds, the flavor, and so on. So to evaluate whether an action is predicted to put a strawberry on a plate we’d evaluate whether it’s predicted to lead to all of those observations, and whether it does so based on the *normal* common cause.
For concreteness, let’s imagine an AI that has a probability distribution over initial states of affairs, together with a detailed physical model for how those states evolve over time. “Strawberry” exists as part of the prior over initial states — to describe the world reasonably you need to know that supermarkets contain objects with the whole correlated set of strawberry-properties. And this gives rise to a number of arguments that explain the strawberry-observation-correlation:
* The physical properties of strawberries are correlated in the prior over initial states. There is a heuristic argument that object properties are often stable under the passage of time, and so the world contains lots of objects with the strawberry-properties. And there is a heuristic argument that strawberry-properties give rise to strawberry-observations (e.g. that light reflecting off of an object containing strawberry pigments will appear red).
* The prior over the world also contains strawberry seeds, with correlated strawberry-genomes. There is a heuristic argument that when those seeds grow they will produce berries with the strawberry-properties, and then we proceed as before to see that such objects will lead to strawberry-observation-correlations. If the model has seen long enough time periods, we’d also need to make arguments about how the grandchildren of strawberries themselves have strawberry-properties, and so forth.
* We’ve assumed a detailed physical model starting from a distribution over initial conditions. But you could also imagine more heuristic models that sometimes treated strawberries as ontologically fundamental even after initialization (rather than treating them as a set of atoms) or whose initial conditions stretched all the way back before the evolution of strawberries. We won’t talk about those cases but the arguments in this section apply just as well to them.
Now we can imagine the same approach, where we say that something “there is a strawberry on the plate” if we make the strawberry-observation and any argument that explains 99.9999% of the strawberry-observation-correlation also captures the strawberry-observation in this case. What would this approach classify as a strawberry vs not a strawberry?
* **Not a strawberry**: an imitation strawberry constructed in a lab in New Jersey to exactly imitate the appearance and flavor of a strawberry. In this case the strawberry-observations are not due to the physical strawberry-properties. I can explain more than 99.9999% of the strawberry-observation-correlation in the training data without ever talking about the fact that sometimes people try to make objects that look and taste and feel like strawberries. (This is only true if I define my strawberry-observations stringently enough that there are *very* few fake strawberries that pass all my tests in the “training” dataset I’ve used to define strawberries.)
* **Not a strawberry:** an atomic replica of a strawberry. Now the strawberry-observations *are* due to the physical strawberry-properties, but the correlation between all of those strawberry-properties is not due to the normal reason. We can imagine someone copying the atoms to reflect parts of the strawberry but not others, and the correlation is induced by facts about the strawberry-copying machine rather than the correlations in the prior. That is, I can explain 99.9999% of the co-occurence of strawberry-properties without ever arguing that people sometimes make atomic replicas of strawberries.
* **Not a strawberry**: a copy of a strawberry made by sequencing a strawberry, synthesizing an identical genome, growing the resulting plant, and picking its berries. The strawberry-properties are now due to the same genes unfolding through the same biological processes, but now the gene-correlation is occurring for an unusual reason: in order to explain it I need to make a heuristic argument about the sequencing and synthesis process, and I an explain 99.9999% of the training set behavior without making such arguments.
* **Strawberry:** astrawberry picked by a robot from a field. Now the correlations *are* due to the usual fact, namely that my prior over states involves a bunch of strawberries with correlated strawberry-properties that are preserved unless something bad happens. We can’t explain 99.9999% of the correlation on the training set without making heuristic arguments about how strawberries can be transported while preserving the relevant physical strawberry-properties. But note: that if robots picking strawberries is unprecedented, this depends on the same complexities discussed above where we need to distinguish *explanation for the correlation* from *explanation for the individual properties* *arising in this case* (because the heuristic argument for strawberry-observations depends on strawberries actually getting in front of the camera, and so you need to make a heuristic arguments about humans picking and delivering strawberries without damaging them which may not apply to robots picking and delivering strawberries).
* **Not a strawberry**: a strawberry picked by a robot from a field, smashed in transit, and then carefully reconstructed to look as good as new. Now the strawberry-observations are still produced by the physical strawberry-properties, but those properties are preserved by the reconstruction process rather than by the usual heuristic argument about strawberries preserving their properties unless they are disturbed. But note: this depends on exactly how we define strawberry and what we take to be a strawberry-observation, ideally the reconstructed strawberry counts as a strawberry iff the smashed up strawberry would have counted and that’s up to us.
It’s interesting to me that an atomic replica of a strawberry would clearly *not* be considered a strawberry. Initially I thought this seemed like a bug, but now I’m pretty convinced it’s exactly the right behavior. Similarly, if I ask my AI to move me from point A to point B, it will *not* consider it acceptable to kill me and instantly replace me with a perfect copy (even if from its enlightened perspective the atoms I’m made of are constantly changing and have no fixed identity anyway).
In general I’ve adopted a pretty different perspective on which abstractions we want to point to within our AI, and I no longer think of “a particular configuration of atoms that behaves like a strawberry” as a plausible candidate. Instead we want to find the thing inside the model that actually gives rise to the strawberry-correlations, whether that’s ontologically fundamental strawberries in the prior over initial states, or the correlation between different strawberries’ properties that emerges from their shared evolutionary history. None of those are preserved by making a perfect atomic copy. |
dcdf0d4f-2264-4b73-a835-96934cc599c6 | trentmkelly/LessWrong-43k | LessWrong | Prophetic Hazard
Some beliefs are self-fulfilling prophecies. These involve causal sequences of the form
> (1) X believes that ‘X is p.’
>
> (2) X therefore does b.
>
> (3) Because of (2), X becomes p.
SFPs can be desirable or undesirable to the believer, depending on how they affect them. For example, "I am confident" might be desirable, whereas "I have social anxiety" might be undesirable.
A prophetic hazard is created when someone proposes an idea that can become an undesirable self-fulfilling prophecy, or affirms an existing one. Below are some examples of prophetic hazards at a personal level:
* Telling a pessimistic person "You are such a pessimist".
* Telling someone who is suffering depression "You are so depressed".
* Reminding an old person that they are old.
* Reminding a terminally ill patient that they don't have much to live, thereby distressing them and hastening the process.
* Telling a child that they are stupid after they make a mistake.
Prophetic hazards can also exist at a societal level:
* The following belief is a common feature of right-wing ideologies: "Social inequality is a natural feature of societies, and is therefore inevitable."
* Bernaysian approach to democracy: "Masses of working people are like a herd of sheep that are unfit to rule themselves."
Prophetic hazards are dangerous because they reflect a certain reality, or rather, a possibility of becoming real. They push people towards slippery slopes where they can lose control. They pave paths of least resistance to catastrophes which can otherwise be avoided through effort. |
507fabd9-6787-41fd-aa04-70cc27f06cc3 | trentmkelly/LessWrong-43k | LessWrong | More Rhythm Options
Few instruments do a good job as the only rhythm instrument in a dance band; in my 2014 sample I only saw guitar and fiddle. I can't play guitar for dancing anymore because of my wrists, and the piano has to give up a lot in exchange for its large range. A goal I've had for a long time is to figure out how to get the same full sound from something built around a mandolin.
As a rhythm instrument, the way I play it, the mandolin has a percussive bite and drive that's hard to get with the piano. This drive contributes a lot to the dancing, and is something I really enjoy about a mandolin-piano rhythm section. Take away the piano, though, and everything is high frequency.
I've played with a bunch of ideas here for augmenting my mandolin playing:
* DIY organ pedals.
* Build a computer vision system that maps from hand shape and position to chord, and then choose bass notes from the chord. Trigger the bass notes with foot pedals.
* Make a hat with a tilt sensor, and use head angle to choose bass notes. Foot pedals as before.
* Use vocals, perhaps processed, to fill out the sound.
* Whistle into a microphone, which controls a bass synthesizer, so I can whistle bass lines.
Recently I tried a new combination:
* Whistle into a microphone to select bass notes, trigger the bass notes with foot pedals.
Here's a video:
(youtube)
I'm running my standalone pitch detector which translates the whistling into MIDI, with pitch bend to send fractional pitch. I tell my MIDI router what key and mode I'm in, and it listens for I, IV, V, and either vi (minor) or VII (mixo) by picking the nearest option. I have this driving both a bass that's triggered by the foot pedals, and an atmospheric droney pad that just runs. I have the pad set to only change notes on a pedal tap, however.
It's not as flexible as the bass whistle, because I need to choose in advance what key and mode to play in and it only does four bass notes, but it also is much less likely to make weird awkwar |
973088ec-cc27-4b68-99e7-f39ce7a63b25 | trentmkelly/LessWrong-43k | LessWrong | Things Solenoid Narrates
I spend a lot of time narrating various bits of EA/longtermist writing.
The resulting audio exists in many different places. Surprisingly often, people who really like one thing don't know about the other things. This seems bad.[1]
A few people have requested a feed to aggregate 'all Solenoid's narrations.'
Here it is. (Give it a few days to be up on the big platforms.) I'll update it ~weekly.[2]
Solenoid Narrates
And here's a list of things I've made or am working on, shared in the hope that more people will discover more things they like:
Human Narrations
* Astral Codex Ten Podcast
* ~920 episodes so far including all non-paywalled ACX posts and SSC archives going back to 2017, with some classic posts from earlier.
* Archive. Patreon.
* LessWrong Curated Podcast
* Human narrations of all the Curated posts. Patreon.
* AI Safety Fundamentals
* Narrations of most of the core resources for AISF's Alignment and Governance courses, and a fair few of the additional readings.
* Alignment, Governance
* 80,000 Hours
* Many pages on their website, plus their updated career guide.
* EA Forum Curated podcast
* This is now AI narrated and seems to be doing perfectly well without me, but lots of human narrations of classic EA forum posts can be found in the archive, at the beginning of the feed.
* Metaculus Journal
* I'm not making these now, but I previously completed many human narrations of Metaculus' 'fortified essays'.
* Radio Bostrom:
* I did about half the narration for Radio Bostrom, creating audio versions of some of Bostrom's key papers.
* Miscellaneous:
* Lots of smaller things. Carlsmith's Power-seeking AI paper, etc.
AI Narrations
Last year I helped TYPE III AUDIO to create high-quality AI narration feeds for EA Forum and LessWrong, and many other resources.
* Every LessWrong post above 30 karma is included on this feed.
* Spotify
* Every EA Forum post above 30 karma is included on this feed:
* Spot |
9dfffc24-46b8-432f-9516-e794f814125b | StampyAI/alignment-research-dataset/arxiv | Arxiv | Value Alignment, Fair Play, and the Rights of Service Robots
Value Alignment and Turing’s Test
---------------------------------
A substantial portion of contemporary research into ethics and artificial intelligence is devoted to the problem of “value alignment” (hereafter VA) [[Allen, Smit, and
Wallach2005](#bib.bibx2), [Yudkowsky2008](#bib.bibx49), [Yampolskiy2013](#bib.bibx48), [Soares and
Fallenstein2014](#bib.bibx42), [Russell, Dewey, and
Tegmark2015](#bib.bibx36), [Arnold, Kasenberg, and
Scheutz2017](#bib.bibx6)]. Rather than deriving ethically appropriate action from first principles or from a direct recognition of the good, VA takes as its goal the (presumably simpler) task of designing AI that conforms to human values. AI that reliably conforms to human values is said to be “aligned”. A primary concern in this literature is to establish methods that guarantee alignment, potentially within tight parameters, since it is argued that even small and seemingly innocuous cases of misalignment can quickly develop into a serious threat to general human safety [[Yudkowsky2008](#bib.bibx49), [Bostrom2012](#bib.bibx9), [Babcock, Kramár, and
Yampolskiy2016](#bib.bibx7)].
There are reasons to be optimistic about VA as an approach to AI ethics, perhaps most significantly that the framework of “alignment” seems to lend itself to contemporary machine learning techniques like supervised learning [[Mohri, Rostamizadeh, and
Talwalkar2012](#bib.bibx32)], where machines systematically improve their performance relative to a specified training set. There are also reasons to be skeptical that today’s machine learning techniques are adequate for generating the complex forms of alignment required for participating in human moral communities [[Arnold, Kasenberg, and
Scheutz2017](#bib.bibx6)]. However, rather than critiquing the VA literature directly, my goal in this paper is to reflect on connections between the discourse on value alignment and the historical discussion of Turing’s notorious “imitation game”, with the hopes that lessons from the latter might better inform our developing discussions of the former.
Turing’s test, originally offered as an alternative to the question “can machines think?”, has since become a standard benchmark for evaluating the intelligence of machines [[Turing1950](#bib.bibx46), [Saygin, Cicekli, and
Akman2000](#bib.bibx39), [Copeland2004](#bib.bibx16), [Copeland et al.2017](#bib.bibx15)]. The test revolves around a comparison to human performance: if the machine cannot be correctly identified by a human interrogator after a few minutes of conversation, it is said to ”pass” the test and can be called intelligent. The central criterion for passing the test is indistinguishability from human behavior [[Dretske1997](#bib.bibx21), [Saygin, Cicekli, and
Akman2000](#bib.bibx39)]. We might describe the demand for indistinguishability in terms of “behavioral alignment”: a machine is behaviorially aligned just in case it behaves indistinguishably from a human. [[Allen, Varner, and
Zinser2000](#bib.bibx3)] already recognized that since the set of moral behaviors is a proper subset of the total behaviors, what today is called “value alignment” can be interpreted as a special case of behavioral alignment. From this insight they propose a Moral Turing Test (MTT) [[Allen, Wallach, and
Smit2006](#bib.bibx4), [Wallach and Allen2008](#bib.bibx47), [Arnold and Scheutz2016](#bib.bibx5)]. The MTT is passed by a machine that behaves indistinguishably from a human in a conversation about moral actions.111[[Arnold and Scheutz2016](#bib.bibx5)] argue that even a Total MTT variation, where evaluation of behaviors is not restricted to conversation but encompasses the full range of moral behaviors, is not sufficient for saving the MTT as a viable ethical criterion. For this reason, I will not dwell on the restriction to conversation behaviors in this paper. See [[Harnad1989](#bib.bibx28)]. Just as passing the original Turing Test is supposed to suggest a degree of intelligence on the basis of behavioral alignment, so too does passing the MTT suggest a degree of moral agency on the basis of moral alignment.
Although the Turing Test is widely known and discussed, it is generally not accepted as a reliable test for intelligence. Criticisms of Turing’s test abound in the literature, perhaps best summarized by Dretske: ”despite indistinguishability, all is dark…” [[Dretske1997](#bib.bibx21)]. Critics worry that the mere imitation of human behavior is not sufficient for either intelligent or moral agency, and so Turing’s test doesn’t tell us what we want to know [[Searle1980](#bib.bibx40), [Dennett1981](#bib.bibx19), [Dreyfus1992](#bib.bibx22)]. Here the theoretical goals of the MTT come apart from those of VA. Researchers concerned about value alignment don’t care whether the machine is a genuine moral agent; a pure automaton (like the Paperclip Maximizer [[Bostrom2003](#bib.bibx8)]) might still pose a threat to humanity if it is sufficiently misaligned. And conversely, a sufficiently aligned machine is no guarantee of moral agency, just as convincing automation is no guarantee of intelligent agency. For this reason, strong rejections of MTT sit awkwardly in the literature alongside expansive research programs into the constraints on alignment, even while the former is a clear example of the latter. For instance, [[Arnold and Scheutz2016](#bib.bibx5)] criticize the MTT as a standard for building moral machines, and yet they go on in [[Arnold, Kasenberg, and
Scheutz2017](#bib.bibx6)] to develop some theoretical constraints on applying machine learning to value alignment, while drawing no strong connections between these discussions. The effect is to make it appear as if the MTT is either irrelevant or unhelpful to the to the discussion of value alignment.
Arnold et al. reject the MTT on several grounds, including that imitation cannot serve as the basis for intelligent moral agency. Echoing the traditional criticisms, they write, ”What becomes ever clearer through explicating the conditions of an MTT is that its imitative premise sets up an unbridgeable gulf between its method and its goal” [[Arnold and Scheutz2016](#bib.bibx5)] The framework of an ”unbridgeable gap” is familiar from the philosophy of mind [[Dennett1991](#bib.bibx20)], and seems to render Turing’s proposal inadequate for the task of developing genuine moral agents. However, if our task is not to develop intelligent moral agents per se but merely to align our machines with our values, than the MTT may continue to prove useful. In the next section, I argue that Turing’s principle of ”fair play for machines” (FP) [[Turing1947](#bib.bibx44)] provides a non-imitative ground for evaluating the alignment of machines. I argue that the FP avoids many of the classic criticisms of Turing’s test, and provides a satisfying method for applying Turing’s insights to the problem of value alignment distinct from the MTT.
Fair Play for Machines
----------------------
[[Proudfoot2017](#bib.bibx34)] points to a variety of sources in developing a rich, comprehensive account of the Turing test (“from every angle”). Special emphasis is given to his discussion of a “little experiment” [[Turing1948](#bib.bibx45)] involving chess playing computers in an experimental design that is clearly the germ for his landmark 1950 paper. However, curiously missing from Proudfoot’s analysis (and mentioned only in passing in [[Leavitt2017](#bib.bibx31)]), is a short passage from the end of Turing’s 1947 Lecture on the Automatic Computing Engine to the London Mathematical Society [[Turing1947](#bib.bibx44), [Copeland2004](#bib.bibx16), [Hodges2012](#bib.bibx29)]. Here Turing is also concerned with evaluating the performance and ”I.Q.” of chess-playing computers, which suggests this passage should be read alongside his 1948 and 1950 papers for a full appreciation of the developing proposal. Since it is so regularly overlooked, I quote Turing’s argument below in full, with paragraph breaks and emphasis added:
>
> “It might be argued that there is a fundamental contradiction in the idea of a machine with intelligence. It is certainly true that ‘acting like a machine’ has become synonymous with lack of adaptability. But the reason for this is obvious. Machines in the past have had very little storage, and there has been no question of the machine having any discretion. The argument might however be put into a more aggressive form. It has for instance been shown that with certain logical systems there can be no machine which will distinguish provable formulae of the system from unprovable, i.e. that there is no test that the machine can apply which will divide propositions with certainty into these two classes. Thus if a machine is made for this purpose it must in some cases fail to give an answer. On the other hand if a mathematician is confronted with such a problem he would search around a[nd] find new methods of proof, so that he ought eventually to be able to reach a decision about any given formula. This would be the argument.
>
>
>
>
> Against it I would say that fair play must be given to the machine. Instead of it sometimes giving no answer we could arrange that it gives occasional wrong answers. But the human mathematician would likewise make blunders when trying out new techniques. It is easy for us to regard these blunders as not counting and give him another chance, but the machine would probably be allowed no mercy. In other words then, if a machine is expected to be infallible, it cannot also be intelligent. There are several mathematical theorems which say almost exactly that. But these theorems say nothing about how much intelligence may be displayed if a machine makes no pretense at infallibility.
>
>
>
>
> To continue my plea for ‘fair play for the machines’ when testing their I.Q. A human mathematician has always undergone an extensive training. This training may be regarded as not unlike putting instruction tables into a machine. One must therefore not expect a machine to do a very great deal of building up of instruction tables on its own. No man adds very much to the body of knowledge, why should we expect more of a machine? Putting the same point differently, the machine must be allowed to have contact with human beings in order that it may adapt itself to their standards. The game of chess may perhaps be rather suitable for this purpose, as the moves of the machine’s opponent will automatically provide this contact.” [[Turing1947](#bib.bibx44), [Copeland2004](#bib.bibx16)]
>
>
>
There are many striking things to note about this passage. First, Turing is responding to a critic of the very idea of machine intelligence, whose argument points to some necessary (and therefore unbridgeable) gap between the performance of humans and machines. In this case, the critic appeals to Gödel’s incompleteness theorem [[Gödel1931](#bib.bibx26), [Smullyan2001](#bib.bibx41)] as evidence of such a gap, an objection he returns to under the heading of “The Mathematical Objection” in [[Turing1950](#bib.bibx46)]. Recall that Turing’s major mathematical contribution [[Turing1937](#bib.bibx43)] is the formal description of a “universal computer”, which can in theory perform the work of any other computer. On my interpretation [[Estrada2014](#bib.bibx23)], the universality of his machines is what ultimately convinces Turing that computers can be made to think. Without any assumption of behaviorism or appeal to a principle of imitation, the syllogism runs as follows: if the brain is a machine that thinks, and a digital computer can perform the work of any other machine, then a digital computer can think. This syllogism is both valid and sound. However, Turing recognizes that Gödel’s theorem shows ‘‘that with certain logical systems there can be no machine which will distinguish provable formulae of the system from unprovable’’. This straightforwardly implies that there are some things that even Turing’s universal machines cannot do. This result does not invalidate the syllogism above. Still, Turing’s critics draw an inference from (1) there are some things machines cannot do, to (2) humans can do things that (mere) machines cannot do. Although this inference is clearly invalid,222Turing’s original response to the Mathematical Objection remains satisfying: “The short answer to this argument is that although it is established that there are limitations to the powers of any particular machine, it has only been stated, without any sort of proof, that no such limitations apply to the human intellect.” [[Turing1950](#bib.bibx46)] arguments of this form persist even among respected scholars today [[Penrose1999](#bib.bibx33), [Floridi2016](#bib.bibx25)]. The passage from his 1947 Lecture shows Turing contending with this perennial challenge several years before his formal presentation of the imitation game. In other words, Turing was clearly aware of an “unbridgeable gap” objection, and both his “little experiment” and the principle of fair play serve as ingredients in his response. A full appreciation of Turing’s position in this debate ought to take this evidence into account.
Second, the core of Turing’s response is to offer a “plea” for what he calls “fair play for machines”. This suggestion is proposed in the context of “testing their I.Q.”, making explicit the connection between FP and the developing framework of Turing’s test. Essentially, Turing is worried about a pernicious double standard: that we use one standard for evaluating human performance at some task, and a more rigorous, less forgiving standard for evaluating the machine’s performances at the same task. Other things equal, a double standard is patently unfair, and thus warrants a plea for ”fair play”. Of course, one might worry that the mere fact that the performance comes from a machine implies that other things aren’t equal. Since machines are different from humans, they ought to be held to different standards. But on my interpretation , Turing is primarily motivated by a conviction that universal computers can perform the work of any other machine, and so humans and computers are not essentially different. Turing’s test isn’t designed to prove that machines can behave like humans, since in principle this follows from the universality of the machines. Instead, the test is designed to strip the human evaluator of his prejudices against the machines, hence the call for fair play.
Notice that calling a standard of judgment unfair does not imply that the machines treated unfairly can “think”. Therefore, FP alone cannot serve as a basis for evaluating the intelligence of machines in the style of the Turing Test. And indeed, Turing’s argument makes clear that his appeal to FP is concerned not with the intelligence of the machine, but instead with the standards used to evaluate the machine’s performance. After all, Turing’s plea is made in defense of machines that are expected to be infallible, and whose performance might be compromised (by “occasionally providing wrong answers”) in order to more closely approximate the performance of a human. Turing’s point is that we’d never demand a human mathematician to occasionally make mistakes in order to demonstrate their intelligence, so it’s strange to demand such performance from the machine. If Turing’s test is motivated by a call for ”fair play for machines”, this should inform our interpretation of the test itself. Since the principle of fair play does not depend on an imitative premise, the rejection of Turing’s test on this basis seems too hasty.
Finally, the quoted passage closes by highlighting the phrase ‘‘fair play for machines’’ again333It may be interesting to consider why the phrase ”fair play for machines” doesn’t appear in the 1950 paper. Many of the arguments from the passage appear in the final section of his 1950, under the subsection “Learning Machines”, where he proposes building “a mind like a child’s” in response to Lovelace’s Objection [[Estrada2014](#bib.bibx23)]. In this section he proposes a number of games to play with machines, including chess and twenty questions. He also expresses a worry that “The idea of a learning machine may appear paradoxical to some readers.” Turing’s 1950’s paper is self-consciously written to a popular audience; perhaps Turing worried that a plea for ”fair play for machines”, including those that aren’t even intelligent, might also confuse his readers too much, and undermine the constructive argument he’s given. Hopefully, readers 70 years later are not so easily confused., and arguing that “the machine must be allowed to have contact with human beings in order that it may adapt itself to their standards.” Clearly, Turing is approaching the challenge of evaluating the performance of machines, even in purely intellectual domains like chess, as a problem of behavioral alignment. Moreover, Turing argues that for machines to achieve that alignment, they must be allowed certain privileges, in the interest of “fair play”. Specifically, Turing argues that if we expect the machine to learn our standards, we must afford access to our behavior. In other words, he’s arguing that constraints on human behavior are necessary to achieve alignment: in how we evaluate and interact with out machines. This perspective is rare even in the alignment literature today, where concerns are overwhelmingly focused on how to constrain the machine to stay within bounds of acceptable human behavior.
More importantly, Turing suggests that we must be willing to interact with machines, even those that aren’t intelligent, if we expect these machines to align to our standards. And this is precisely the kind of interaction Turing’s proposed imitation game encourages. These reflections open a new route to defending the importance of Turing’s test in today’s alignment literature. Turing’s test is usually understood as a benchmark for intelligence, and the MTT as a benchmark for moral agency. Commentary traditionally recognizes Turing’s worries about standards of evaluation [[Saygin, Cicekli, and
Akman2000](#bib.bibx39), [Arnold and Scheutz2016](#bib.bibx5)], but they interpret Turing’s imitation game as attempting to settle on some specific standard of evaluation: namely, indistinguishability from human performance, or perfect imitation, as judged by another human. If the machine meets this standard, the machine is considered intelligent. We might call this a “benchmark” interpretation of Turing’s test, or BTT. The MTT is an instance of BTT that sets the benchmark to imitate human moral behavior, for example. Many machine learning applications today will present themselves as meeting or exceeding human performance (at discrimination tasks, image recognition, translation, etc.), a legacy of Turing’s influence on the field. Criticisms of Turing’s test focus on whether this benchmark is appropriate for evaluating the machine’s performance, with most concluding it is not an adequate measure of general intelligence. But the principle of fair play suggests Turing is less interested in setting a particular benchmark for intelligence, and more concerned with establishing that the standards for evaluation that are fair. Call this interpretation the Fair Play Turing Test FPTT. A machine passes the FPTT when it meets the same standards of evaluation used to judge human performance at the same task. On this interpretation, Turing’s imitation game is meant to describe a scenario of ”fair play” where the human biases against machines can be filtered out, and the machine could be judged in their capacity to carry a conversation by the same standards as any other human. We typically think of someone who can hold a conversation as being intelligent, so if a machine can also hold a conversation without being detected as non-human, we should judge it intelligent too. Not because conversation is some definitive marker of intelligence, as the BTT interpretation suggests, but rather because conversation is a standard that is often used to evaluate the intelligence of humans, and the principle of fair play demands holding machines to the same standards. On this interpretation, the sort of hostile interrogation typically seen in demonstrations of Turing’s test [[Aaronson2014](#bib.bibx1)] seems straightforwardly unfair, since we wouldn’t expect an intelligent human to hold up well under hostile interrogation either.
Since the principle of FP does not depend on imitation, the FPTT works in a subtly different way than the BTT. Passing the FPTT doesn’t merely imply a machine performs at human levels; passing FPTT implies more strongly that the machine performs at these levels when evaluated by the same standards used to judge human performance. For instance, we usually aren’t skeptical of mere imitation when talking to a human, so raising this concern in the context of evaluating machine could signal a change in the standards of evaluation, and thus a violation of FP. Cases where machine performance is expected to diverge significantly from humans might warrant a multiplicity of standards. We might, for instance, expect driverless vehicles to adhere to more rigorous safety standards than we typically hold human drivers. Recognizing these misaligned standards as a violation of fair play doesn’t necessarily imply the situation is unethical or requires correction. Instead, identifying a failure of fair play draws attention to the multiplicity of standards for evaluating a task, and the lack of a unifying, consistent framework for evaluating all agents at that task. The framework of FPTT easily extends to evaluating performance at tasks other than “general intelligence” where we are interested in consistent, unifying standards, including the task of moral alignment in particular contexts. [[Arnold and Scheutz2016](#bib.bibx5)] reject the MTT as a standard for evaluating moral agency on the basis of its imitative premise. But FPTT doesn’t depend on an imitative premise, and only checks for alignment with the standards used to judge humans at a task. In the next section, I argue that this framework of fair play has direct application for evaluating the alignment of robots operating in our world.
The Rights of Service Robots
----------------------------
Historically, the question of robot rights has turned on questions of personhood [[Gunkel2012](#bib.bibx27), [Bryson, Diamantis, and
Grant2017](#bib.bibx11)]. Conditions on personhood typically involve both cognitive and moral attributes, such as ”recognizing the difference between right and wrong” [[Christman2008](#bib.bibx14)]. The consensus among scholars is that robots do not yet meet the conditions on minimal personhood, and will not in the near future. However, this consensus is inconclusive, and has been used to argue that robot rights might be necessary to protect machines that operate below the level of human performance [[Darling2012](#bib.bibx17), [Darling2015](#bib.bibx18)]. For instance, in 2017 San Francisco lawmakers implemented restrictions on “autonomous delivery services on sidewalks and public right-of-ways,” citing safety and pedestrian priority of use as motivating concerns [[Rodriguez2017](#bib.bibx35)]. The proposal raises a natural question of whether these robots have the right to use public spaces, and to what extent a ban on robots might infringe on those rights. These questions seem independent of more general concerns about moral agency and personhood that typically frame the rights debate. Furthermore, it is well known that service robots operating in public spaces are typically subject to bullying and abusive behavior from the crowd ([[Salvini, Laschi, and
Dario2010](#bib.bibx38), [Salvini et al.2010](#bib.bibx37), [Brscić et al.2015](#bib.bibx10)]. Protecting robots from such treatment seems necessary independent of whether they meet strict conditions for personhood.
Like many cases of moral alignment, the case of the rights for service robots to operate on public sidewalks seems to demand a standard for evaluating the performance of machines that does not turn on any imitative comparison with human agents. Delivery robots do not have nor require the intellectual and moral capacities typical of humans; to compare their operation with human performance seems at best mismatched, at worst insulting. Interpreted as a benchmark of performance, these machines operate well below the threshold where Turing’s test is relevant and the vocabulary of rights and personhood applies. In contrast to the benchmark interpretation, however, the principle of fair play suggests we look for standards of evaluation that are consistent across humans and machines. In the case of service robots, the focus of concern is on the nature of the task these robots are performing, and the standards already in use for evaluating such performances. There’s an obvious comparison between service robots and service animals that is tempting, but I think is ultimately unhelpful. Importantly, animals feel pain and can suffer, and service animals are used to support persons with disabilities who can’t otherwise access public resources. Service robots, in contrast, are used by tech companies to better service their clients, and it seems implausible that they can ’suffer’ in a morally salient way. Given the distinct nature of these roles, holding service robots to the standards of service animals seems inappropriate.
A closer analogy to the work of service robots can be found [[Chopra and White2011](#bib.bibx13)], who propose an alternative approach to robot law centered not on personhood but instead on a framework of legal agency. A legal agent is empowered to act on behalf of a principal, to whom the agent holds a fiduciary duty that contractually binds the agent to act on the principal’s interest. For instance, a lawyer or accountant operates as a legal agent in the service of their clients. In the context of agency law, an agent’s right to operate turns both on the capacities of the agent to faithfully represent the principal, and also on the nature and scope of the role being performed. The framework of agency law offers a systematic defense of robot rights which focuses legal and policy attention to the roles we want robots to play in our social spaces, and the constraints which govern operation of any agent performing these roles [[Chopra and Estradain
progress](#bib.bibx12)]. A social role analysis of robots as legal agents has clear application to the protection of service robots operating in public spaces, including delivery robots and self-driving cars. But it also has natural extensions for the regulation of robots in a wide variety of other social roles, including robots that provide services in the context of law and justice, finances, transportation, education, socio-emotional support, sex work, public relations, and security. For instance, agency law provides a straightforward path to the regulation of bots on social media that are used to influence voters and elections [[Ferrara et al.2016](#bib.bibx24)]. From this perspective, social media bots are operating on behalf of their operators in the service of specific roles (campaign promotion, electioneering, etc), and therefore fall under the same legal frameworks that already exist to evaluate the ethics and legality of these activities.
The proposal to adopt robot rights grounded in a framework of legal agency deserves an explicit elaboration outside the scope of this paper. I raise the suggestion in this context to demonstrate how the principle of fair play might be used to guide developing standards for evaluating machine performances. Recall that the principle of fair play asks that we evaluate the machine according to the same standards used to judge the performance of a human at the same task. Thus, FPTT focuses our discussion on the task-specific standards for evaluation, rather than on the details of the performance of any particular machine, or the contrasts in performance across different agential kinds. In this way, FP also suggests an expansive research agenda for classifying and detailing the types of roles we might want robots to serve in, and the constraints on evaluating the performance of any agent filling that role. For instance, what should social media bots acting as campaign representatives be allowed to say or do? This is not an engineering question about the capabilities of any machine. It is a social policy question about what machines can and cannot do in the service of their role. If we want machines to align to our standards of performance, then Turing argues that fair play must be given to the machines.
Of course, we don’t want every machine to align to our standards. If standards cannot be made consistent across humans and machines, it entails a stratification of the social order that divides humans from machines. One might worry that a social role defense of robot rights does not eliminate the stratification, but in fact imposes new social divides wherever there is a distinction in social roles, and so threatens a conception of rights grounded on the universal and inalienable rights of humanity. In this way, a social role defense might appear to be a kind of “3/5th compromise for robots”. This worry is reinforced by a review of the history of agency law, which itself develops out of a logic of slavery and indentured servitude [[Johnson Jr2016](#bib.bibx30)]. However, the social role analysis of agency law provides a way around this worry by laying out a direct path to full legal agency for robots. To say that robots serve as agents for principals does not preclude the robot from being a full legal agent, since obviously lawyers and accountants retain their personhood and agency even while acting as an agent for their principal. And of course, serving as a principal is yet another social role to perform, one with its own standards of evaluation. Making the the standards for principals explicit allows for a robot to serve as principal, first for other robots, perhaps as a manager or oversight for other robots acting on its behalf, and eventually to serve as a principal for itself, thus bridging the gap to full legal agency.
Conclusions
-----------
In this article we have reviewed some primary concerns of the value alignment literature, and shown these interests were present in the development of Turing’s test as early as [[Turing1947](#bib.bibx44)]. We argued that a widespread rejection of Turing’s test as a standard of intelligence has led scholars to overlook Turing’s call for fair play as a source of inspiration in developing machines that are value-aligned. We have proposed an alternate interpretation of Turing’s test which is inspired by Turing’s call for “fair play for machines”, and carefully distinguished this interpretation from benchmark interpretations like the Moral Turing Test. Finally we have briefly discussed how FPTT might be used to justify a defense of robot rights and sketch out a path to full agency on the basis of a social role analysis of agency law.
Acknowledgements
----------------
Thanks to conversations with Samir Chopra, Jon Lawhead, Kyle Broom, Rebecca Spizzirri, Sophia Korb, David Guthrie, Eleizer Yudkowsky, Eric Schwitzgebel, Anna Gollub, Priti Ugghley, @eripsabot, richROT, the participants in my AI and Autonomy seminars and the Humanities department at NJIT, all my HTEC students at CTY:Princeton, and everyone in the #botally and Robot Rights communities across social media, especially David Gunkel, Julie Carpenter, Roman V. Yampolskiy, Damien Patrick Williams, and Joanna Bryson. Thanks also to the organizers, participants, and tweeps at #AIES. |
2558b951-3b63-49db-a0b8-16ee3e0c9fe8 | trentmkelly/LessWrong-43k | LessWrong | Have you noticed costs of being anticipatory?
Actions after which I expect some kind of response seem to be more costly than the direct time cost they incur (for me, at least).
They also incur a decreased effectiveness of the time just afterward, due to causing me to be anticipatory – e.g. I am more likely to check up on whether I have received a response, and empirically find it generally harder to focus.
Examples of such actions
* Messaging someone and awaiting a response
* Leaving a comment or post on LessWrong or the EA forum
* Being physically near people who might call upon me or talk to me
* This one also makes “messaging someone and awaiting a response” more likely since I am more likely to need to coordinate in various ways with these people or follow up about things that were discussed in person
Ways to combat this cost
* Go physically far away from people – e.g. plane, train, Airbnb, hotel, etc.
* This seems to be a big part of why I am so productive on planes
* Make a habit of asking “how important is this message, really?” before sending messages
* If working in a space with others near, create a norm that only you can go into your working space, or that others should message you if they want to enter?
I am curious who else has or hasn't noticed this kind of cost, and whether anyone has ideas for combatting it (my guess is that the policies I am currently operating under don't respect this cost enough -- e.g. I made 3-5x more progress toward my most important goal while on a plane yesterday than while not on a plane today, and it seems like if I had better mechanisms for appreciating the costs of being anticipatory, this difference could have been at least 10% smaller). |
f487d39f-70be-4c2a-9e7e-5212a52f3390 | StampyAI/alignment-research-dataset/aisafety.info | AI Safety Info | What is tool AI?
A *[tool AI](https://www.lesswrong.com/tag/tool-ai)* is a type of [artificial general intelligence](/?state=2374&question=What%20is%20artificial%20general%20intelligence%20(AGI)%20and%20what%20will%20it%20look%20like%3F) that is limited in its range of action and can only function as an assistant to human use. One kind of system people often have in mind when talking about tool AI is [one which generates information](/?state=8AEV&question=Why%20can't%20we%20just%20build%20an%20%22oracle%20AI%22%20whose%20only%20goal%20is%20to%20answer%20questions%3F) and then displays it in a user-friendly way, allowing the user to decide how to use it or whether to discard it entirely. It is contrasted with an *[agent](/?state=5632&question=What%20is%20an%20agent%3F)*, which takes actions in order to [maximize a utility function](https://www.youtube.com/watch?v=8AvIErXFoH8).
Tool AIs were [suggested](https://www.lesswrong.com/posts/6SGqkCgHuNr7d4yJm/thoughts-on-the-singularity-institute-si) as a safer alternative to agents. However, they are not necessarily safer, for a number of reasons. For example, the choices they offer may ultimately prove beyond human capacity to evaluate carefully, thereby reducing human oversight to a rubber stamp.
Furthermore, tool AIs [could](https://gwern.net/tool-ai) [develop](https://www.lesswrong.com/posts/sizjfDgCgAsuLJQmm/reply-to-holden-on-tool-ai) [into](https://www.lesswrong.com/posts/rHSuu2X9ca8FR4thH/tools-want-to-become-agents) agents for a number of reasons, including:
1. intentionally, since agents are more economically competitive. For example, requiring a human to make choices will critically slow down the system.
1. inherently, since navigating a complex domain will put pressure on a system to become a general optimizer. If the world is complex, the system will be able to carry out its function more effectively through becoming more agent-like. Thus it is likely to produce a [mesa-optimizer](/?state=8160&question=What%20are%20%22mesa-optimizers%22%3F) as part of its development.
[Eric Drexler](https://en.wikipedia.org/wiki/K._Eric_Drexler)’s [comprehensive AI services](https://slatestarcodex.com/2019/08/27/book-review-reframing-superintelligence/) (CAIS) is a theory about how many disparate tool AI systems could interact, jointly achieving many of the goals of AGI, but with none being agents in their own right.
|
dad9b978-39bd-4008-8430-37a1eb0d0058 | trentmkelly/LessWrong-43k | LessWrong | Pragmatic Cutoffs
[Just an Idea I’ve been thinking about, I would love some feedback on improving it, or any relevant studies over this I don’t know about]
> "I cannot remember the books I've read any more than the meals I have eaten; even so, they have made me.” —Ralph Waldo Emerson
Sometimes I think social problems don’t matter as much, especially if I am not in direct control to change them. I can disagree with a politician, but if I don’t live in their district I’m going to usually jump to “I don’t care what they do because I have no power to enforce their actions”. Social Norming around common values / pointing out those who disagree with our values aside, I think most news day to day is a distraction. It matters only because people care about it, but if nobody cared about it then some obvious flaws in society would go unnoticed. We do need to catch false negatives but there are also many false positives that our attention latches onto and doesn’t seem to let go. Most events seem to be memory holed as well, I can usually ignore news because the issue de jour will be forgotten in a week, by almost everybody. But at the same time seeing some temporary news may be beneficial for a better model of long term trends of the world. But that loops back to if I can’t use any of that new information pragmatically, it almost becomes worthless knowledge. How do I better filter useless from useful? I understand I can’t truly decide what might be worthless and what might not be. Random viruses pop all the time, covid was known about in late 2019, few cared until it hit a certain threshold of importance.
So I wonder at what point my threshold is set so that something becomes “useful”, as in I can use it to strengthen social ties, collect relevant world information, or actually use the knowledge for a project. This seems like it can be solved by Bayes updating, but since I have no clue how I usually weight my incoming stimulus, I’m lost for understanding.
I had recently read a page about |
aea29f07-26aa-40f7-b0ab-a6fa44f81ac9 | trentmkelly/LessWrong-43k | LessWrong | Aligned AI via monitoring objectives in AutoGPT-like systems
Thanks to Arun Jose, Joseph Bloom, and Johannes Treutlein for feedback/discussions.
Introduction
The release of AutoGPT prompted discussions related to the potential of such systems to turn non-agentic LLMs into agentic systems that pursue goals, along with the dangers that could follow. The relation of such systems to the alignment problem has also been explored.
In this short post, we investigate a threat model that comes from AutoGPT-like systems pursuing unaligned objectives and explore the potential for alignment via oversight. We briefly consider some key properties of such systems, and then discuss the idea that these systems’ high-level cognition might be interpretable by default and so might allow for sufficient oversight to ensure the system is aligned. Finally, we consider a couple of reasons for why the high-level cognition might be obscured from oversight and highlight ways of preventing this obscuration.
Background
AutoGPT-like systems, or Scaffolded LLMs, are systems that:
> wrap a programmatic scaffold around an LLM core and chain together a number of individual LLM calls to achieve some larger and more complex task than can be accomplished in a single prompt.[1]
This programmatic scaffold allows for information from the inputs/outputs of LLM calls to be stored in long-term memory as well as used as inputs to tools/plugins.
We will state a few assumptions we’ll make about these AutoGPT-like systems. See this post for a similar context.
* For the purposes of this post, we assume that a single LLM call is incapable of successfully pursuing objectives. That is, the threat comes from the LLM being used within the scaffolded system.
* We assume that LLMs in the system must generate prompts/instructions for other versions of itself (factored cognition/bureaucratic approach)
* When information is passed from one LLM call/module/plugin to another, we say that this information is passed through an “information channel”. In particular, an oversee |
d70773e7-3ce5-4bf8-bcb4-57acff70d085 | trentmkelly/LessWrong-43k | LessWrong | Meme or Die: Modern Societies are Dependent on Emotionally Rich Memes to Rapidly Evolve
Please Note: I love poetry, and my modus operandi is to write in a generally colorful, and an emotionally persuasive way. This was written with you in mind. Please be charitable.
I believe that all societies, but especially those that can be modeled by a well-connected network, with edges with fast transmission speeds, are especially sensitive, and dependent on powerful signals. I define a powerful signal in this context as one that quickly propagates through the network.
Unlike genes that transmit information at the slow speed generally determined by average/median lifespan, memes over the internet transmit information at nearly the speed of light.
The LessWrong and greater intellectual community distrusts memes for good reason. They are typically characterized by their emotional power, irony rich nature, and their simplifying effects on good reasoning, and nuance. Unfortunately, I believe that it is these exact characteristics that make them powerful signals.
I’ve struggled for a long time with the ironic and contradictory nature of memes. While most people can agree that memes typically contain some truth, few believe that they often contain all of the truth. I agree.
Nevertheless, I believe that by mostly studying memes for their emotional context and content, we make it far too easy to ignore or undervalue their truth context and content. Ironically, I believe this cognitive move is to make the same error we criticize memes for exhibiting. In a network as dense as our global society, and as dense as the internet, we severely limit our ability to spread wide-reaching messages.
Now, this wouldn’t concern me if a few things didn’t all seem likely to be true.
1. Gen-Z is the most digitally connected subset of the population
2. Gen-Z creates the most powerful memes
3. Gen-Z has massive distrust in our political system
4. Gen-Z has low voter turnout
5. Gen-Z has a massive distrust of the economically prosperous
6. Gen-Z has a smaller but still s |
17194654-6dfd-4021-aef3-eb7d35d7946c | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | What do we know about Mustafa Suleyman's position on AI Safety?
Mustafa Suleyman is influential as a former founder of Deepmind and CEO of Inflection.AI which some have suggested should be considered a [major lab](https://www.lesswrong.com/posts/Wc5BYFfzuLzepQjCq/inflection-ai-is-a-major-agi-lab#Inflection_doesn_t_seem_to_acknowledge_existential_risks_or_have_a_sizable_safety_team). |
613f6941-c2b4-41b9-b16b-c1bafad70acf | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | DeepMind x UCL RL Lecture Series - Exploration & Control [2/13]
hi and welcome to this second lecture in
this course on reinforcement learning
my name is adolf and hussels and today i
will be talking to you about exploration
and exploitation
now before i going to tell you what i
mean with these terms um i'm quickly
going to flash up a slide with some
background material i highly recommend
you read chapter 2 from the book by rich
sutton and andy barto on reinforcement
burlingame introduction
and in addition if you're particularly
interested in the material that i
discussed today there's some further
reading on this slide which goes into
much more depth which won't be necessary
to understand the rest of this course
but it's a really interesting material
if you want to go into more depth
specifically in the topics of this
lecture
and this is the bandits algo bandit
algorithms book by tora latimore and
javascipiari which just came out last
year
and in addition i put a reference here
to the classic paper by peter auer and
casa bianchi and fisher from from 2002
in which they
describe the ucb algorithm which i'll be
talking about later today
so first to recap in the previous
lecture we talked about
how reinforcement learning relates to
the problem of artificial intelligence
and what we mean with the term
reinforcement learning
and in particular we put up this
interaction loop which we see on the
slide where there's an agent which takes
actions and these agents
sorry these actions somehow might
influence the environment around the
agents you can think of the agent as
being part of the environment think of a
robot in a world walking around where
the walking around might impact the
environment around it or maybe the robot
sometimes picks up something and this
changes where that object then is
and therefore the state of the
environment is somewhat changed
now the agent also observes the
environment it gets observations as an
input and these observations obviously
depend on the environment state but they
might not fully describe the state of
the environment because the state of the
environment might be this huge
complicated thing and the agent only has
a very small
interaction with this the agent might
have a camera with which it perceives
part of the environment but it can't
perceive everything at the same time
and then reinforcement learning can be
thought of as being a science of how
could this agent then learn to make
decisions how could it pick its actions
in such a way that the agent is happy
and we're going to define happy hair in
terms of reward
where there's some sort of a reward
signal that the agent is trying to
optimize this is what defines the goal
of the agent without the reward the
agent is in some sense an aimless
machine it doesn't have a goal but the
reward function defines what it's trying
to optimize
this reward can sometimes be considered
part of the environment where it gets
sent to the agent as part of the
observation in the other cases it's more
natural to define it as part of the
agent where the agent has an internal
preference function that means it likes
certain observations better than others
that's not too important for now we'll
get back to that later but the important
thing is that there is some sort of
reward function that the agent is trying
to optimize
the agent itself might contain a policy
or it must contain a policy because you
have to select these actions in some
manner
it might also contain a value function
which is a prediction of the future
rewards the agent might receive and it
might also contain a model about the
environment we discussed all of this in
the previous lecture
now this general reinforcement turning
problem requires us to take into account
time and consequences
i mean with this that actions can only
impact the future not the past
and sometimes this can happen
in
complicated ways so you might take an
action that changes the sales the
environment in some way that for
instance influences later rewards or
later observations and sometimes you
can't immediately see the effect of the
action or at least not the full
consequences of it but still the agent
in order to pick the right actions
should somehow be able to reason about
all of this
so these actions the decisions you make
which action to pick affect the
immediate reward they might affect the
state of the agent itself and they might
affect the state of the environment in
some way or the other which might itself
then have uh consequences for later
observations and rewards
now this is a challenging setting and
one interesting property that we're
going to talk about a lot in this
lecture in a sense
is that it's an active learning
setting and this means that these
actions they don't just change the
reward right they don't just tell you
how well you're doing but they also
influence the data that you see and this
is going to be important and this is
different from some other types of
machine learning
because normally in other types of
machine learning we might assume that a
data set is given to us and we want to
answer some questions given that data
set we want to learn something from the
data set but the data is given and is
not under our control in reinforcement
learning in the full setting this is
different the data is in some sense
under our control but in some indirect
sense we can take actions to change the
data we can't deliberately pick out
certain data points or sometimes maybe
we can but we can't maybe always do that
and this means that the
agent has to actively seek out
information that it can use to learn
from
now in this lecture we're going to
simplify the setting quite a great deal
and in particular we're going to assume
that the environment actually only has
one single state it could ever be in or
maybe equivalently you can think of the
environment as not having no states
whatsoever
so the environment
all that it then returns to you is
rewards it doesn't return any
observations
i'm going to assume here that the
environment does indeed
return the rewards although again you
could also think of this as being kind
of internal to the agent where maybe the
environment could be thought of as
sending you a signal that tells you
which which reward you can then apply to
it with your preferences your internal
preferences but just think of it as an
environment with a single state
and this has some consequences first of
all
actions can no longer have long-term
consequences in the environment if you
take an action it can't change the state
of the environment because the
environment only has one single state
in addition
the actions can still impact the
immediate reward which is actually not a
function of the
sequence it's a function of
just the immediate action that you take
but the other observations can be
ignored so if you consider the reward
and observation you still want to pay
attention to that because this defines
your goal
but other observations are basically
irrelevant because all they could ever
tell you is something about the
environment state but if the environment
only has one state maybe this can just
be ignored
and then we will discuss how to learn a
policy which is a mapping from your
internal agent states to actions in this
more simple setting it turns out there's
already like a lot of rich things that
we can say in this setting and a lot of
really interesting questions that we can
get into which is why we consider the
simpler setting first
then in subsequent lectures so not today
but in subsequent lectures we will be
talking about the
more general setting again
okay so now i'm going to
just go through a little example
and for that i'm going to switch
to the blackboard
so now
let's consider that we're going to
compare two different
actions
and as mentioned these actions won't
have long-term consequences so basically
we can take an action
it might be action a or it might be
action b
and we're going to compare how well
these do in some sense
for instance you can think of action a
and b you could think of them as being
medical treatments where you maybe have
the choice between say one vaccine and
or a different one
but maybe you can only do one at a time
or they could be more extensive medical
treatments where clearly you can't do
them both at the same time perhaps
and therefore you have to pick one or
the other or maybe action a is doing no
treatment and action b is doing some
treatment
or alternatively if you prefer this as
an example you could think of a
recommender system
where you have a streaming service and
people can watch movies and you have a
choice which movie to recommend next in
the top position
so picking a movie there picking a good
one might mean that people pick it and
then watch it where picking at that one
might mean that nobody clicks it for
every user you don't have this option
whether to pick movie a or movie b
and success would be if the user then
picks to watch that movie
for simplicity let's assume the only two
options here are failure or success so
let's say the rewards are basically plus
one or my uh or let's say plus one or
zero
now
we could have some prior information but
for simplicity let's assume we know
nothing so we kind of just randomly have
to pick between these two options the
very first time we try them so we have
trials here on the x-axis
axis
and at first let's say we pick action a
and we observe a reward of zero this was
not a success
we for instance show the movie and the
user didn't pick it
okay so next user comes around we have
this choice again let's say now because
action a didn't quite work out let's say
we pick action b
and let's say that this one's more
successful we get a plus one
so the question now is what should we do
next well
maybe here it seems sensible given that
we have no additional information
we only have one failure for action a
one success for action b so maybe we'll
pick action b again
now let's say that this time around
it didn't work out that well and it was
not a success the user didn't pick the
the movie or the medical treatment
didn't quite work out as well as we
hoped
so we consider this a reward of zero
again the question pops up next time
around what should we do well the
average value for action b is still a
little bit larger than the average value
for action a so maybe we'll try it again
but let's say we are unlucky again we
pick movie b and again it doesn't get
picked
so the question here is what to do next
when should we switch
back to action a
how long should we persist to action b
what happens if we do switch back to
action a but then we get another zero
maybe then we switch back to action b
again but for how many steps
so these are the questions that are
under consideration i won't give you an
answer right now but we'll discuss
different algorithms which will help you
pick between these two actions and they
will pick maybe in different ways and we
will discuss this at length these
different algorithms that you could
apply to this setting
okay so now let's switch back to the
slides
so
we're basically talking here about the
difference between exploration and
exploitation
and this is something that any learning
agent which actively gathers its data
must trade off
exploitation here refers to maximizing
performance based on your current
knowledge
so as in the example if we've only ever
shown two different movies and one of
them has been more successful than the
other
maybe the reasonable choice if you
really want to get the next one right
would be to pick the one with the
highest success rate so far
but we know in the long run this might
not be this might not be great because
if you exploit all the time you're never
going to get new information about
movies that you haven't tried very often
in the example we only ever picked movie
a once and even though that wasn't a
success maybe we want to pick it more
often just to be sure
how often is it actually a good choice
and this is called exploration and the
main purpose for exploration is to
increase our knowledge by exposing
ourselves to new
information to new data
and this is a fundamental tradeoff that
we need to think about clearly and
carefully and we went to this simpler
setting where the environment only has
one state to be able to reason this
through
basic as far as we can
and then from here
we hope to learn some lessons that we
can apply also to the more general case
so
basically in general we would need to
gather information to make the best
overall decisions
and the best long-term strategy might
involve taking some short-term
sacrifices sometimes you want to try
something that you've never tried before
which you think is maybe not that likely
to be great but you have high
uncertainty about it
if you never try it you will never know
and there might be some brilliant
rewards to be gathered but if you never
try it
um you will never find those rewards
maybe sometimes you may have to make a
short walk through the rain to find a
brilliant new place to go to
so now we're going to move towards
formalizing the problem
so what this setting is sometimes called
is a multi-armed bandit
and this is basically a set of
distributions where we have a reward
distribution per action
why is this called a multi-arm bandit
this actually alludes to slot machines
you can think of a slot machine uh
as being
something where you basically don't have
a lot of choice some on some complicated
slot machines of course you do have a
lot of choices but let's think of a very
simple slot machine where all you have
is a lever and you can pull it and you
either get some reward or you don't or
maybe there's different types of rewards
that you can get there could be some
complicated reward distribution
now we can think of this as being a
single action a single slot machine but
then you could think of having many slot
machines each of which have their own
lever and you could pull them each of
them and each of them might have a
different reward distribution
now we know a priority that typical slot
machines will give you less money than
you put into them but let's assume that
we don't know this we can generalize
this problem so there could be some slot
machines that are quite worthwhile where
others are not
now this is where this term multi-arm
bandwidth comes from because a slot
machine is sometimes called a one-armed
bandit because it takes your money away
and then a multi-arm bandit is a setting
where we don't have just one choice we
don't have just one slot machine with a
single lever but we have a whole row of
these slot machines so there's many
different choices we can make
this is just terminology of course we're
not talking about slot machines here
we're talking about a more generic
setting where this could be a
recommender system or a way to formalize
picking between a host of different
policies
say
for determining medical treatments as i
mentioned or other examples you could
come up with
this curly a is just a set of known
actions or arms in multi-unbanded
terminology
and r curly r of a is then the
distribution for that specific action
and as i mentioned for different actions
the distribution could be different
the goal is now to pick the action that
gives you the highest average reward
so each time step you're going to select
one of these actions and you're going to
observe this reward which might be a
random variable
and the goal is then to maximize the
rewards over time
this is a little bit different from what
we talked about before note that this is
actually taking into account the full
learning process this turns out to be a
useful thing to think about so we want
an algorithm a learning algorithm that
trades off exploration and exploitation
in such a way that if you apply this
algorithm throughout your life that you
get reasonably high reward of course
this reward can't be optimal all of the
time because you don't know what the
optimal action is yet
but this way of writing down the goal
including all the steps that you ever
take allows us to reason about which
learning algorithms are more effective
than others
and we will specifically optimize this
cumulative reward of your lifetime by
learning a policy which in this case is
just some distribution on the set of
actions
we could think about deterministic
policies which always take the same
action but during learning it's also
sometimes useful to have stochastic
policies which pick actions according to
some random distribution and the policy
will change over time as we gather more
data
now one
good
concept to still use would be the notion
of an action value which we also talked
about last lecture which in this case is
quite simple is this is just the
expected reward given that you've taken
that action so the action value for
action a is the expected reward for
action a
now we can also talk about the optimal
value the optimal value that you can
achieve at any time step is simply the
maximization of the action values over
actions this is the maximum expected
reward
another useful concept that we will talk
about extensively in this lecture is the
notion of regret
what is regret
the regret is a specific number for each
action which is basically the difference
between the maximum possible expected
reward you could have gotten
and the one that you did get for this
action
note that this is not a random quantity
we're subtracting one expectation uh
from the maximization of our
expectations so this is just a number
for each action now we don't know this
number that's the whole point if we
would know this number we would simply
select the action which has zero regret
note there's always one action the
optimal action for which this quantity
is zero
by definition
and note also that for all the other
actions this is a positive quantity so
regret enumerates essentially
how bad we're doing the more regret you
get the worse you're doing if your
regret is zero or close to zero you're
doing quite well
this will become a lot more clear when i
talk about specific algorithms and i
talk about the regret that they incur
and then you'll see that this is a
useful way to look at the differences
between different algorithms just to
look how much regret do they attain over
their lifetime
so the regret as i mentioned for the
optimal action is zero and that's going
to be important
and then we can think about minimizing
the total regret as our overall goal
this is a random quantity because it
depends now on the actions that we take
note that you regret the instantaneous
regret for a given action is not a
random quantity but here the actions
themselves are random quantities because
our policy might be random and in fact
will be random because typically our
policy will depend on our history so we
take some action even if we do that
deterministically we get some random
reward and we want to use that reward
somehow to pick the next action which
means that the next action will then be
a random quantity
so this is simply a summation over time
of this instantaneous regret at every
time step for the action that we picked
and then we want to want to talk reason
about minimizing this total summation
over all of our lifetime up to some
time step t
of this of this cumulative regret
note that i didn't actually change the
goal here i mentioned two different
goals minimize total regrets or maximize
cumulative reward these are actually the
same goal there's just a different way
to write the same thing
so note this is true because if we
maximize the expected cumulative reward
this would correspond exactly
to
minimizing these gaps between the
optimal expected value and the one for
the action that you've selected so these
are really the same goal we're just
writing it down differently and this
notion of regret is quite a common one
in multi-armed bandits because it allows
us to reason about how quickly does this
regret grow over time
and turns out this growth rate is very
indicative of how good an algorithm is
so we basically go to algorithms that
for which the regret doesn't grow too
quickly over the full lifetime of
learning
so what follows the rest of the lecture
essentially i will be talking about
different algorithms and also different
analysis that we can do like theoretical
analysis on these different algorithms
to discuss whether like to determine
essentially whether one algorithm is
better than another
okay so let's get
we will discuss
the greedy
policy an epsilon greedy strategy which
i'll explain what that is
the ucb algorithm
function sampling and policy gradients
and i will explain what each of these
means in what follows this is just a
quick overview if you don't know what
these terms mean that's perfectly okay
because i'm going to explain that in a
second
the first three of these approaches use
action value estimates
so we are going to keep track of some
big q of t i use big q there because
it's a random quantity because it
depends on your past observations
and this big q of uh of a at some time
sub t is supposed to approximate the
actual true value of that action a
so little q a here is a true expected
reward given a and big q is an
approximation thereof
so let's very briefly discuss how to
learn as action values which won't be
very surprising so the true action value
we know is the expected reward so one
thing we could do is we could just
average the rewards for that action
that's written on this slide maybe a
little bit verbosely
but we're basically just picking out
with an indicator function
the time steps on which we selected this
action a
this indicator function of a n equals a
will be 1 if its argument is true so
when we've actually selected action a
and it will be zero otherwise
so we're basically just summer overall
summing over all time steps picking out
the rewards that correspond to the
action a and then dividing by the number
of times we've selected action a and
this is called the count
so we're dividing simply by the count
for that action which is just the number
of times you've selected an action
including on time step t
this is just a way to write down the
average so we're simply just tracking
for each action whenever you select that
action you just average in that reward
into your estimate
now written like this you might have to
keep track of all of the rewards and put
them into like different tables of
course you could also do this
incrementally which is written down on
this slide where we simply take our
average and we update it
so this can be written in the way
that it's written on the slide where we
take our action value at time step t
and we define this to be the action
value of the previous time step t minus
one
plus some learning rates alpha or step
size parameter alpha times an error term
and the error term is simply the reward
that you've received given that you've
taken this action
minus the average for that action so far
and all of the other actions we simply
don't update them because we haven't
taken those actions at its this time
step so we we don't update the mean
and then we also increment the count for
the action that we've taken and we
define the step size parameter to be one
over the count then this is just a
different way to write down the same
average we had on the previous slide
in uh later
lectures mostly we will also consider
other step sizes which are sometimes
useful for instance you could think of a
constant step size which would lead to
tracking behavior and this can be useful
when
for instance the problem is
non-stationary in some way
then you might want to track the rewards
rather than to average them flat out
for now this is just uh a way to average
the the reward so that's what i want you
to keep in mind for each of these
actions we have the average reward for
that action so far
okay so now we can dive in and talk
about concrete algorithms the first of
which will be the greedy algorithm which
is a particularly simple algorithm
it simply takes the action with the
highest value as you've estimated it so
far
so equivalently we can write this down a
little bit differently what we often use
is a notation of pi to the policy and
pi of action a is the probability of
selecting that action which in this case
is just an indicator again which checks
that the action is the one that
maximizes the value
the way it's written down here is
assuming no ties are possible because
otherwise
the probabilities don't sum to one so if
multiple actions have the same
maximizing value then obviously you
still have to decide what to do and for
instance you could pick random you could
break ties randomly in that case
okay
now i'm going to go back to the
blackboard again
and this is just to
think about the regret of the greedy
policy
so going back to the example we had
before
we've taken action a once we've got a
reward of zero we've taken action aa by
three times by now we got a reward of
one and then zero and then zero again
now what are the action values at this
point well the action value as estimated
at time step
four
of action a
is simply one uh sorry zero
we've taken it once it wasn't a success
we have zero
the action value at time step four for
action b on the other hand
is one third
we've taken it three times
we got a plus one and a zero and a zero
the greedy policy
would continue to select action b
because the value is positive
whereas the value of zero a is to simply
zero
but note that it will actually continue
to do so indefinitely if the only
possible rewards here as i stated before
are zero or plus one
then the algorithm from this point
onwards would never ever select action a
again
now it could very well be the case that
action a is actually possible the
optimal action in this this case
it could for instance be that the
probability of getting a plus one reward
given action a
could have been something reasonably
large like 0.8
whereas the probability of selecting of
getting a plus one reward when selecting
action b
could be moderately low like 0.2 or
something
that would mean that the
for action a
is zero
you get no regret when selecting action
a
any regret for action b
is 0.6
recall the regret for an action is
simply the maximal attainable expected
reward minus the value
for taking that action
the value for taking an action here is
exactly the same as the probability of
getting plus one because the only two
alternatives are plus one and zero
so the maximum value v star
here in this example as i wrote it on
the side would be
0.8
whereas the value for
b
these are essential values it's just 0.2
this
probability being equal to the value is
just a happenstance of course it's just
because our only options are plus one
and zero
in general um you can just think of the
expected values rather than the
probabilities
so clearly we should be taking if this
is all the case we should be taking
action a but the greedy policy won't
necessarily take action a it could have
happened of course that on the very
first time we took action a we did get a
plus one in which case the greedy policy
might have stuck to action a depending
on the rest of the data and then
everything might have been okay
um
or actually you should say depending
also in your initialization of the
action values because whether we select
action b ever
depends whether we initialize this value
low or high
so this is just to say that the greedy
policy is not great which is probably
something that's quite intuitive as well
so we'll go back to the slides now
okay so
the greedy policy might not be that
great in some cases so can we do better
and turns out we can
and one simple alternative would be
epsilon greedy so the problem with the
greedy policy just to reiterate what we
showed in the example is that it can get
stuck selecting the wrong action
indefinitely there was this one action
which happens to get a higher value than
the other one and then it just keeps on
selecting that action because the value
kept on being positive whereas the other
value estimate was zero even though the
value estimate with the estimate of zero
was actually optimal action
this means that your regret can continue
to grow linearly and every time you get
a similar amount of regret and that
turns out not is not great
so the reason that this happens is
because the greedy policy doesn't
explore enough it keeps on exploiting
the knowledge that it thinks it has but
it doesn't realize or doesn't reason
about uncertainty about the value
estimates there was this one action
which we only selected a single time so
we should have some very high
uncertainty about having a valid
accurate
estimate of its value
so an alternative is to add some noise
that keeps on exploring and this is what
epsilon greedy does and this is a very
popular algorithm actually
and the way this works is that with
probability one minus epsilon you select
the greedy action
similar to greedy but with probability
epsilon you select a random action any
action including potentially degree one
but with just random probability
we can write this out equivalently as a
closed form
policy so the probability of selecting
an action and depends on whether the
action is a greedy one
again here i'm assuming that there are
no ties to be broken otherwise you have
to carefully correct for that
but if there are no ties possible then
the greedy action will be selected with
probability 1 minus epsilon plus a
little bit of probability it gets
selected when you pick the random action
and then all other actions are selected
with epsilon divided by the number of
actions because we're picking randomly
so epsilon greedy continues to explore
with this probability epsilon but that
means that it also has linear expected
regret
because the epsilon doesn't decrease so
you keep on picking actions even if at
some point you should be quite certain
that these actions are really actually
quite bad
so you've attained in some sense enough
information about them
and you should stop really stop
selecting them episode greedy doesn't do
that it keeps on selecting these actions
that said it is a very popular algorithm
in the first lecture i showed you an
example of an agent playing atari games
this agent was actually trained with
epsilon greedy exploration
now moving yet to another example of a
way to
update our policies we're going to
sidestep
action values for a second and we're
going to ask the question can we learn
this policy directly instead of learning
values
well it turns out this is possible
and before going there i'm going to make
this a bit more concrete let's consider
that we have some action preferences and
let's also pick a policy in this case
we're going to pick a differential
policy one that we can take a gradient
off so we're not going to pick epsilon
greedy instead we're going to pick this
alternative which is a softmax
and the way a softmax policy works this
is another very popular policy is that
you exponentiate
these preferences and then you just
normalize so we see an exponentiated
preference for action a divided by the
sum of all of the exponentiated
preferences for the other actions
by exponentiating we get a positive
number so we know that all of these
probabilities
must be
positive and we know because we're
normalizing with the sum that they must
sum to one so this is a well-defined
policy on these actions
and then we just select an action
according to this policy
note that the preferences themselves are
not supposed to be action values per se
right we could plug in action values
into a softmix policy for sure but now
we're considering them just to be
learnable policy parameters and we're
going to ask the question how could we
update these preferences in such way
that the policy gets better over time
so the goal is to optimize the
preferences so that the policy becomes
very good and the question is how can we
do that and i'm going to give you one
such algorithm to do this
and the idea would be to update policy
parameters such as the expected value
increases and we can just write this
down and we can do this via gradient
ascent
so essentially what we're going to do is
we're going to consider the policy
parameters these preferences
to be something that we can just learn
and we're going to try to learn them
with gradients
so in the banded case the update would
look something like this we have
theta which are our parameters of the
policy
for instance you can think of theta as
being a vector that just contains these
action preferences for each of the
actions
more generally this can be applied to
other policy parametrizations and indeed
it turns out this algorithm is quite
easy to apply
at scale in what we call deep
reinforcement learning because for
instance theta could be the parameters
of some large deep neural network
so the algorithm here is quite easy to
extend which is one reason why it's of
interest but for simplicity now you can
consider consider theta just to be a
vector containing these action
preferences that we talked about on the
previous slide
and then what we want to do is we want
to do gradient ascent on the value of
the policy
what is the value of the policy the
value of the policy is the expected
reward given that you're following this
policy
note that the softmax policy is a
stochastic policy
so this means that it might select
different actions under different uh
probabilities so don't mistake this for
the expected reward for a given action
now the policy might be stochastic so
it's really awaited
some of those
now one problem is here that we have a
gradient of an expectation but we don't
have that expectation we don't know that
expectation typically
and that means that we can't easily
compute this gradient immediately but
turns out there's a useful trick that
we'll discuss which allows you to get a
stochastic sample for this
note also that we can't directly just
sample the expectation and then take the
gradient because the sample for this
expectation would just be a reward and
we don't know how to take the gradient
of that reward with respect to the
policy starity so we need to do a little
bit of work to make this into something
that we can sample so then we can do
stochastic gradient descent rather than
gradients ascend per se
so how can we compute this gradient
that's the question that we're getting
towards
so this is a derivation on the slide
that turns that gradient into something
we can sample and i'll step through this
carefully
and this is sometimes called the log
likelihood trick
and sometimes it's also referred to in
reinforcement literature as the
reinforced trick because
this was introduced into reinforcement
learning by ron williams in an algorithm
he calls reinforce which is actually an
acronym but i won't bother you with the
exact
uh
semantics of the acronym in this case
so
we start here
on the left hand side with the gradient
of the expectation of the reward given
that policy and what we're going to do
first is we're just going to write that
out so we're going to expand this
expected reward given the policy into
the summation over actions
and then we have the probability of
selecting each of the actions times the
expected reward given that we've taken
specifically that action and we know
that to be
our action value qa
now we know that this qa does not
actually depend on our policy parameters
because we've already pinned down the
action by now right so given action a
qa is just some number and it doesn't
depend on our policy anymore that means
that we can push the gradients inside
the summation now and it will only
affect the policy not the q values
now we're going to do a little trick
which is quite useful in general we're
going to multiply this whole quantity
with something that is basically one
it's the possible probability of
selecting the action divided by that
same probability
and the reason to do this is because
then we can write it as an expectation
again so we're going to pull this
probability of deselecting the action a
up in front again and let's push back
the division by that same probability to
the back just for notational simplicity
and then we know that we have something
that is of the form of an expectation
again we have a summation over actions
of the probability of selecting an
action and then some term that depends
on the action somehow
so let's write this back as an
expectation where this can be written as
the expectation of the random reward so
we replace this expectation of the
uh the reward with the random reward
inside the expectation again
and this gets multiplied by this kind of
weird looking ratio which is the
gradient of
the probability of selecting action a t
divided by that same probability
and now as our final step which is
actually somewhat optional but it's just
a different way to write down the same
thing and actually the much more common
way to write down the same thing
is that this whole term turns out to be
equivalent to the expectation of the
reward times the gradient of the
logarithm of your policy
this is true simply because of the chain
rule because you might recall that the
gradient of the logarithm of something
is one over that thing but then we have
to apply the chain rule so if we're
considering something that has the form
of the of the gradient of the logarithm
of some function of say x
then the gradient with respect to x of
that thing is one over the function of x
times the gradient of the function of x
so that that same thing happens here so
this this
quotient here turns out to be equal to
the gradient of the logarithm of the
probability of selecting action a
this is just equivalent right this is
just writing the same thing down
differently
so now the important thing is that we've
arrived here on the right hand side to
something that has an expectation on the
outside
and then some term on the inside which
has the gradient but that means that we
now have something that we can sample we
can just get rid of the expectation in
some sense in our updates by just
sampling the thing inside and then we
would be following the stochastic
gradient rather than the true gradients
but we know from practice that
stochastic gradient descent and
stochastic gradient ascent are quite
useful algorithms and they work quite
well
so that seems to be fine
so this is just to condense this
derivation from the previous slide into
the main uh output which is that we've
now been able to rewrite the gradient of
the expectation of the reward under the
policy parameters as the expectation of
the reward
times the gradient of the logarithm of
the policy
of taking that action 18.
so we can sample this and turn this into
a stochastic grainy descent algorithm
which means we're going to update our
policy parameters theta
again this can be thought of as a vector
containing your reference for action
preferences for instance and we're going
to add a learning rate times the reward
times the gradient of the logarithm of
the probability of selecting an action
note we're adding not subtracting
because we're doing gradient ascent
rather than gradient descent
instead of subtracting the gradient of a
loss we're adding the gradient of a
value in this case
and then we can just execute this over
and over again and then we would be
performing
stochastic gradient ascent
on our
values which means the policy should be
getting better and better over time
and we can use the sampled rewards we
don't need to make any value estimates
so this is a value-free algorithm
now we can extend this and this has been
extended to also include values and
we'll talk about that in much more
length in later lectures
now one more thing that i want to say
before we go to other algorithms first
we're going to
derive this concretely to give more of
an intuition and then i want to add one
component so
so let's consider the softmax policy
that we talked about before where h is
just some preferences for an
action and to build our intuition for
what this policy gradient algorithm is
doing let's just step through that
example where we specifically
parameterize our policy using these
action preferences
what does this update that mean well we
can consider the preference for what
just one of these actions so instead of
considering this whole vector theta
we're only going to consider one
component in that vector
for action a
how does this get updated
well we add as mentioned before some
step size alpha times the reward that
you've seen times the gradient of the
selected action a t
whether or not this is action a or not
and then turns out this gradient here
this logarithm of the probability of
selecting a t
by the preference of action a
turns out to be this quantity below
which is
one minus the probability of selecting
the action if the action is equal to the
one that you've selected and otherwise
it's just minus the probability so
writing that out in case by case basis
means that the preference for action a t
the one that you selected
gets updated by adding
learning rates times reward times one
minus probability of selecting the
action
whereas the probabil the preferences for
all the other actions get updated by
subtracting a learning rate times the
reward times the probability of
selecting that action
i encourage you to step through this
more
detailed uh basically just take the
softmax distribution plug that in into
pi here and see whether you can derive
these results on the slides as an
informative exercise to go through
okay so now that we know these updates
can we interpret them somehow well let's
plug in a specific value let's say we
saw a reward of plus one
what now happens to these preferences
well the action that we selected will
get updated
to be higher the preference for the
action that we get we selected gets
updated to be higher
because
the step size here is a positive
quantity the reward is plus one as i
just mentioned and this one minus the
probability of selecting the action is
also the positive quantity so the
preference for selecting action a t will
increase if the reward is positive if
the reward is say plus one
at the same time the preference for all
the other actions will go down a little
bit
and note also that how much they go down
depends on how likely they were to be
selected actions that were very unlikely
to be selected don't go down that much
actions that were very likely to be
selected go down more
because we saw a different action that
was quite good at this point
how much do they go up or down that
depends on your step size parameter
so
what then happens is that the preference
increased for the action that had a plus
one and if say you sometimes get a
reward of minus one exactly the opposite
happens if you get a reward of minus one
the preference for the action that
you've selected will go down and the
preference for all the other actions
will go up
but what now if there are no negative
rewards what if the only possible
possible rewards are plus one and plus
two let's say
then always whenever you select an
action its preference will go up
but turns out everything still works out
because whenever you select a different
action its preference will go up faster
because the reward will be two rather
than one which means the preference
increases faster
and indeed it turns out we're still
doing valid stochastic gradient ascent
in that case so the intuition with minus
one plus one reward that one's a little
bit more intuitive when you get a plus
one your preference increases when you
get a minus one it decreases
but if you have
only positive rewards let's say plus one
and plus two then always when you take
an action the preference will increase
but the actions with the higher average
reward will increase faster which means
that
over the long run we're still following
a valid gradient the policy still
converges to the right policy
okay
so the intuition is as as uh summarized
here the preferences for actions with
higher rewards increase more or decrease
less making them more likely to be
selected again and that makes it a valid
algorithm to learn policies
the exploration here is not very
explicit though the expiration is purely
due to the fact that the policy happens
to be stochastic
this does work often if your policy is
stochastic enough for a long enough
amount of time you'll learn enough about
your policy and it's a valid gradient
algorithm which means that it will
continue to go up the only problem is
that the gradient itself because it's a
gradient algorithm it can get stuck in a
local optimum
so there's no guarantee here that this
algorithm will converge to the right
policy
and there's no guarantee that it won't
also suffer from
linear linearly increasing regret over
time
because it could get stuck at a
sub-optimal policy similar to the greedy
policy
but it turns out to be a lot better in
practice often and it's quite a common
approach also because of this property
that i mentioned before where we can
extend this algorithm quite easily to
for instance deep neural networks
but it's good to keep in mind that this
doesn't solve the full exploration
problem just by itself
we will get back to this later because
in later lectures we will talk more
about policy gradient algorithms and how
to apply these to the full reinforcement
printing setting
one more thing to say about policy
gradient algorithms already and i will
repeat this in a later lecture as well
is that we can modify them slightly by
basically changing the mean of the
reward in some sense
and the way that works is we first know
that the probabilities sum to ones this
is a valid probability so so they must
sum to one
and what this means is that for any
number b
which is just some number
we can multiply the gradient of the
policy with this b
pull the b and the gradient outside of
the summation but then what we see is
that we simply have b times the gradient
of one
but the gradient of one is simply zero
because one is a constant and we can't
change the constant by changing theta
the total summation won't change if we
change theta it will always sum to one
that means that this this whole quantity
here on the left hand side is always
zero and it doesn't matter what b is
as long as b does not depend on your
actions because otherwise we couldn't do
this first step where we pull b out of
the summation
now why is this important well turns out
we can use that any b and subtract it
from the reward
and this can change the weight of the
algorithm works in practice for instance
instead of having a reward of plus one
and plus two we can now effectively get
rewards of plus a half minus a half
so
this is not important turns out for the
expected value because of this there
little derivation above it does not
actually change the expected direction
of the gradients but it can change the
variance of the update and this is a
specifically useful or maybe especially
useful in the full case because later on
we will use this algorithm and we will
have different states and then this
value b is not allowed to depend on your
actions but it is allowed to depend on
state that means it can co-vary a little
bit with the rewards that you might get
and might therefore reduce variance
quite quite a lot
now don't worry about recalling this
whole trick with the baselines in one go
it's good to be aware of it but i'll
repeat it later in a different lecture
as well when you go to the full
sequential case
so now we've discussed three different
algorithms
and we can see here the effect of the
baseline and it can be quite important
uh this is an example just from susan
embardo where we look at some sort of a
step size alpha which is either point
one or point four and we look at the
difference uh between the policy
gradient algorithms with and without a
baseline
and
we see that with a baseline in this
example the algorithm performed better
but we can't actually guarantee
in general that policy gradients will
will find the optimal action
but there are algorithms for which we
can guarantee that
[Music]
so i already mentioned these gradient
methods can be extended to include full
context full rl case to partial
observability
and we will discuss them again but now
we're going to move to the question what
is actually possible what is the best we
can hope for
and we're going to talk about some
algorithms that we can apply
that will get the best thing in the
bandit case
so
um
we're going to discuss this theorem
first which is a very general theorem
by lyon robbins it's somewhat all the
results
and the interesting thing about this
theorem there's a couple of things that
are interesting about this theorem but
the first one is to note that this is a
statement about any algorithm so this is
more of a statement about the problem
than it is about an algorithm
and what this problem what is sorry what
this theorem states is that in the limit
the regret will grow at least
logarithmically in the number of time
steps
so note that
log t here's the leading term that
depends on time and there's this other
quantity behind it but that quantity
does not depend on time it only depends
on the differences
in value between
the optimal action and the action that
we've taken delta a
and
a kl which is a callback library
divergence which is a measure of the
difference between the distribution of
the reward under action a and the
distribution of the reward under the
optimal action a star
if you don't know what kl divergence is
that's okay
um it's something that's measures the
difference between these distributions
and it turns out it's roughly
proportional to the
uh delta squared so a different way to
think about this is that it just bounds
essentially
the regret in a term that is logarithmic
in t and then some term that depends on
the differences in means between these
actions
now the important thing the most
important thing here is that the regret
grows logarithmically and note that this
is a lower bound so what this says is
that for any algorithm you could
possibly think of the regret will go
grow at least logarithmically in time
but that's still a whole lot better than
linear growth which we had so far greedy
epsilon greedy and ballsy gradients they
can all have linear regret so can we get
logarithmic in practice are there
algorithms that have logarithmic regret
so what we'll talk about is whether we
can prove that we can find an algorithm
for which not just this lower bound is
logarithmic but the upper bound the
worse the algorithm will do is also
logarithmic in expectation
and of course i wouldn't be mentioning
this if this wasn't the case there are
algorithms which this is true
so in order to think about this we are
going to write under regret a little bit
differently we're first recalling the
definition of this delta is simply the
maximum action value of v star
minus the action value for action a
and then we can write down the regret in
two different ways so the first one we
already sh we already saw before
where the total regret at time t so lt
can be written as a summation over time
where we count in n
um
for n is zero uh sorry for n is one all
the way up to the current time sub t and
we just count the regret that we get at
every time step this is just for
analysis we don't actually know these
regrets this is not something an
algorithm uses it's just for our
analysis that we know that our regret is
going to be equal to the summation of
all the instantaneous regress that you
get at every time step
but we can also write this down
differently and some instead of over
time we're going to sum over all of the
actions and then simply look at the
count for each of these actions up to
time t so obviously this does still
depend on t right all of these
quantities depend on t
times the regret for that action which
does not depend on t
so here we're replacing the summation
over time with the summation of actions
and quite intuitively these things are
equal
so we can also just for each action
consider how often did we take that
action and how bad was it
and we want that to be low our goal is
to have the total regret to be small
so a good algorithm will ensure small
counts
whenever the regret is large and if the
regret is small the count can be a bit
bigger
in particular
the regret for the optimal action is
zero so we don't care how big the count
will be the multiplication of the count
and the regret for the optimal action is
always zero no matter what the count is
but for all the other actions we do care
and
the algorithms that will be able to do
well in this regime
can in some sense all be considered to
implement this strategy which is called
optimism in the face of uncertainty
and this is somewhat of a container
term because you can implement that in
many different ways but the idea is
quite intuitive where whenever we're
uncertain about the value of an action
we want to be optimistic about how well
how good it could be
and then we want to basically
pick it more often if we're more
uncertain about its value so we want to
be optimistic that it could be good
and let's make that concrete with an
example so let's consider we have some
belief probabilities here these are not
reward probabilities these are the
probabilities that we
subjectively assigned to the mean of
each of these actions
so for the red action well we're pretty
certain that the red action is positive
that its expected value is positive
we're also pretty certain it's not
larger than two it might be below zero
it might be negative but it's definitely
not going to be larger than say four
it's definitely not going to be smaller
than say minus two
for the blue action we're a bit more
uncertain we think it's closer to zero
probably but it might actually be as
large as two and it might be as small as
minus two although with small
probability
the green action a3 we've barely ever
selected so we think it's not that great
in on average we expect maybe its value
somewhere close to -1 or zero
but it might be as large as four because
we've maybe only selected it once or
twice and we really don't have a good
idea
now the question is now which action
should we pick and let's step through
this as an example
if we have more uncertainty about its
value i'm arguing that maybe it's more
important to explore that action because
the mean could actually be high we're
not sure where the mean is and it could
be that it's higher than we think it
currently is
so let's now consider what we should do
well in this case the red action like
action a
a1 sorry
does look quite good so maybe not
knowing anything else we do pick action
a1
and now we're going to observe some
reward it won't be that far off because
we're quite certain and maybe the reward
will be somewhere around zero now
obviously the reward could actually be
pretty far off and the mean could be
quite certain because over time this
becomes more and more certain
irrespective of the variance of the
rewards but let's say the reward was
like a little bit less than zero and
then we update our belief of where the
actual expected value for this action is
so maybe we move it a little bit to the
left and we also make it a little bit
little bit more narrow
because we haven't just seen a slightly
lower reward we've also seen an
additional reward so we have more data
so maybe the distribution over time
becomes more and more narrow
so this is our updated belief of where
these probabilities lie what should we
do next
well maybe by now it's time to select
the action a3 which was the green action
and maybe see what we can get for that
one because we're quite uncertain maybe
we've only selected it once or twice
before and let's say that then we find
this not quite unexpected reward of
larger than one like maybe it's one and
a half or something like that and we
update its distribution a little bit
towards it
it also becomes slightly more narrow
but it's
it's still quite a bit broader than all
the other distributions so maybe next
time around you select this action again
and maybe you find out that the mean
value for it is actually larger than
that than it is for the red
more peaked distribution
so that's the whole intuition of
optimism in the phase of uncertainty
and this underpins several different
algorithms the first of which that we're
going to discuss is the ucb algorithm
so the idea behind ucb is to use
something called an upper confidence
bound
so we're going to have some notion of
uncertainty and we're going to
basically estimate the upper confidence
for each action value such that we're
pretty sure that the actual action value
qa
will be smaller or equal
to our current estimate of the action
value plus this upper confidence bound
so obviously if we pick u to be very
very large this will be true
but we don't want to pick it too large
because then it becomes meaningless so
we want to pick this large enough that
we
if we're very uncertain that we
that we're still certain that the mean
is lower than this number
but we want to also pick it small enough
that this eventually converges to the
true mean
and then
we're going to select our actions
greedily
but not really greedy with respect to
just the action value estimates q
but with respect to q plus this bound u
so essentially we're going to to
define some sort of a time dependent and
action dependent bound
that depends on the uncertainty of the
action and we're going to define it in
such a way that if we're very uncertain
about an action that we'll pick it
if we're not very uncertain about an
action so if we're quite certain about
the action the only other reason why we
might pick the action is because the
estimate itself is high
the uncertainty somewhat intuitively
should depend on the number of times the
action have been has been selected
and in particular we kind of want a
large small stroke a small count
to imply a large bound because if this
count is very small for an action then
its uncertainty must be large and
therefore we should pick it occasionally
in order to check how good it actually
is but if the count then grows and in
particular if the count becomes very
large compared to other actions then the
bounds should become small compared to
the other actions
because this means that the estimated
value is accurate
so this is the idea behind the algorithm
and what happens then if you apply this
algorithm is that each action a only
gets selected if either
its value is going to be good so if you
have a high estimate you're going to
select the action right
or
if it's bound is very large which means
that we're very uncertain about the
action
and if neither of those is true if both
the action value is quite small compared
to other actions and the uncertainty is
quite small then we're not going to
select the action and that seems indeed
to be the reasonable thing to do because
if you're quite certain that an action
has a comparatively low value then at
some point we should stop exploring it
this is what epsilon greedy does not do
this is how epsilon really fails it
keeps on selecting these actions
indiscriminately of our uncertainty
about it and and
also not looking at the actual values
whereas ucb
stops selecting certain actions
or selects them much less often when the
values become low
and the uncertainty becomes low as well
now this is the intuition but can we
somehow maybe derive this bound can we
somehow figure out what the right way is
to approach this
and can we somehow
maybe come up with a bound that is
optimal in some way or the other
so
we're going to discuss that and then i'm
going to actually derive it on the
blackboard but first we're going to do
this on the slides
and what we're going to use is something
called the concentration inequality
and this specific inequality that we're
going to use is called huffling's
inequality
and what this gives us it gives us a
bound on how
wrong our estimate will be
so in general let's consider a bunch of
random variables x1 up to xn
and these are going to be identical and
independently distributed random
variables
so we're going to use these later on for
the reward so think of x1 to xn as being
the rewards for a specific action these
are all just random variables drawn from
the same distribution and in particular
for simplicity of these of the theorem
statement let's assume that these are
all between 0 and 1. the theorem is
easily extended to more general cases
but for simplicity we're just going to
assume that all of the rewards
essentially are going to be between 0
and 1.
there will be some true mean the
expected value of x for any x
in this set and this is going to be mu
now we're going to define x
bar so x t with a bar on top as being
the average
or the sample mean for all of these
random variables that we have so far
and what hufting's inequality then
allows us to do
is to consider
how far off could we could this mean be
so consider some
quantity u
if we add that quantity u
to the mean that we've estimated so far
how likely is it that this thing even
after adding this u is still smaller
than the actual mean
and turns out we can bound this quantity
to be a small number and in particular
it will be the exponentiation of minus 2
times n where n is the number of
elements going into this average
and u squared where u is this bound that
we add
so what do we see here this is a
probability it's a number that's going
to be between 0 and 1
in particular
this is a bound so the bound is not
actually capped at 1 but of course if
this is larger than 1 then we don't care
then the probability will be bounded at
1 instead
and what we see this is typically going
to be a small number and it's going to
be smaller and smaller the larger either
n is or u is and this makes intuitive
sense the more
numbers we have in our average
the less likely we are that if we then
add an added amount that this is still
going to be smaller than the actual mean
we're quite likely to be within u of the
actual mean if we have enough numbers in
here
similarly if we pick this u to be larger
the probability also decreases which
means that even for a given number of uh
elements in our mean
if we if you consider to be far away
enough
then it becomes exceedingly unlikely
that our sample mean is that far off
that is the intuition but the statement
is as given and we're going to use this
in a second to derive ucb
and the idea is to apply this to bandit
with balance with bounded rewards so as
i mentioned so let's assume for
simplicity that the reward is between
zero and one this is not necessary we
can extend the theory to more general
cases but for simplicity we're just
going to make this assumption
and then we're just going to plug in the
banded case to into this hufftings
inequality so specifically we're going
to consider our current estimate for the
value a qa
plus some uncertainty bound ua which
we're going to want to determine what we
want that to be
and we're going to consider the
probability that this whole quantity
even though we added u is still smaller
than the actual expected value and now
we know that we can bound that
probability in this fashion
and by symmetry of the argument you can
actually flip this around
uh and you could also consider like this
is considering how far off is it in one
direction this is considering how far
off are you in the opposite direction so
instead of adding the bond you could
also subtract it and then check whether
you're still even after subtracting
whether you're still large in an amine
i'm just mentioning that for
completeness it's a useful thing to be
aware of
now
the whole idea behind ucb is to
basically pick a probability so we have
here a bound on the probability and
we're going to say well
let's pick this thing let's let's pick u
so that's so that this doesn't exceed
some sort of a number so let's pick a
probability p
and we're going to say let's pick our
bound to be equal to that probability p
now we can solve this for
the bound u
if we want this to be true this first
statement that means we have to pick u
in this following way
but now we can we so what is what does
this mean well if we pick u specifically
in this way then we know that the
probability that we're going to have
a mean that is further away from our
sample mean than this bound
is smaller than this quantity p
and now we can pick p to be small and
decrease over time so that's the whole
idea
behind the algorithm we're going to
reduce the probability that we're going
to be more than this bound
off
and specifically we can pick it to be
one over t just plug that into the bound
there and we get this bound that looks
like this
which is
the bounds going to be the square root
of the logarithm of t divided by twice
the count for that action
now i didn't tell you how to pick p
i didn't tell you why we're picking it
to be one over t and in fact there are
different choices you could make there
and i'll get back to that in a moment
but this is just an example of oh you
could pick now the probability that
you're going to be more than
this bound you wrong with your sample
estimate so what does this capture well
if we don't have a lot of samples if n
is pretty small
then this bound will be pretty large
if n grows larger this bound will go
down
in addition to that the bound grows
over time what does this mean this means
that indefinitely we're going to
continue to select each action
but maybe less and less so if it's
really not a great action
so this ensures that we keep exploring
the fact that this grows with the square
root of log of t does ensure that we all
keep exploring every action but not that
much because if the action is really not
that great at some point the uncertainty
starts growing growing growing because
of this log t term so whenever we don't
select an action slowly the bound is
creeping up but quite slowly because log
t uses slow growing function
and the square root of log t is even
slower but then when we do select the
action
the bound goes down so then the only
reason to keep on selecting that action
is if the estimate the mean is large
enough
so the ucb algorithm
then looks like this where we've now
instantiated that upper bound by
plugging in this square root of log t
divided by the count n and we've
introduced this new hyper parameter c on
the previous slide
we basically within the square root we
were dividing by two
so this is this this is the same as
picking c here to be one over square
root of two but turns out c can also be
just considered kind of like a
hyperparameter for larger c you'll
explore more for smaller c you'll
explore less
in particular if you pick c to be zero
we get the greedy algorithm back that's
not great that doesn't explore enough
and the intuition now behind this
algorithm is that if your gap is large
that means that the count will be small
because the sample average is likely to
be small so it will take
more
uh if your if your gap is really large
it will essentially take larger gaps to
span
so the the bound must be quite large for
you to be considered
so that means in practice that either
the gap will be small or your account
will be small
that's not a formal statement this is
just your intuition
but in fact turns out we can prove that
in total the problem the products of
this gap times the count
will be bounded by the logarithm of t
for all actions and that means that we
have now established an algorithm which
has logarithmic regret as an upper bound
because we know that the log that the
regret is also logarithmic as a lower
bound this means that we now know that
the algorithm will have logarithmic
regret
and this was proven by peter hour
and this was uh one of the references
that i said at the beginning of this
lecture
where he proved famously that ucb
and he had a specific c there which is
uh not that important for the proof
actually but this is the specific
statement that he proved he plugged in a
specific c there or he derived it with a
specific c
and he showed that the regret for this
was logarithmic in the way that is shown
on the slide for more details please do
refer to the paper or you could also see
the proof for that specific algorithm
but i'm also going to prove it here
which will hopefully help you establish
[Music]
maybe some techniques on how you could
come up with this so instead of just
taking ucb and proving it
i'm going to try to derive it in some
sense from first principles there will
be a couple of
steps there that are maybe in some sense
cheating because i already know the
algorithm
but
it will hopefully still be useful
if you
want you can skip this section from the
lecture as well and just go on to the
other algorithms
but it's useful to at least go through
the proof either the one that i'm going
to give you now or maybe the one in the
paper by peter hour just to get an
intuition for how these things work
okay so we'll go to the
blackboard and i will try to in some
sense derive the um ucb algorithm
which means that i'll start at what we
want and then see if we can somehow come
up with the algorithm
from first principles
there will be a couple of steps here
that are going to be a little bit
creative in the sense of i'm jumping to
maybe something that won't be
immediately obvious and indeed when of
course you come up with such an
algorithm this often involves a little
bit of trial and error let me try this
oh that doesn't quite work let me try a
different thing
and of course now i already know where
to end up and i also don't want to make
this too lengthy this segment so
sometimes i will make a step that is
maybe a little bit of a shortcut but
hopefully it will still
give you some clarity of how you could
come up with such an algorithm
so to start off let's write down the
objective we have the total regret which
as noted in the
previous slides you can write down as
the number of times you selected an
action
times the
expected regret for that action
um
sorry this is the expected
regret that we're interested in
right
and then note that the only random
quantity in the right hand side are
these counts
also note that the instantaneous regret
for the optimal action which we'll call
a star
is by definition zero so that means that
we don't care about the counts for the
optimal action
we simply don't care how often we select
it in fact selecting a more often is a
good thing because that means that our
total regret is not growing
for all the other actions
so for all actions that are not the
optimal action
we want this count to be small
note that the regret itself does not
depend on time right this is just some
number that depends on the action so
that means in order for us to have a
suitable
count we want
sorry for a suitable request we want the
count to be low and in particular we
want a multiplication of
discounts times the regret
maybe to match the
lower bound that we know from the lyre
robbins theorem
so let's assume that we don't know
exactly what we are able to get but
let's hope that we can get something
that is smaller or equal to some
constant that will fill in later because
we don't know yet what that should be
times the logarithm of t
that would be great if we can get that
so
we'll start here and then we'll see
whether we can find an x
and also to find an algorithm such that
this is true by the way the algorithm i
should note that at the beginning the
algorithm that we're going to consider
is going to be the ucb style algorithm
so we're going to pick an action
such that
it
maximizes the expected
estimated
value plus some bound
and what we're going to do here is to
see if we can come up
with what u should be can we come up
with an upper confidence bound u
such that we have logarithmic regret
now in the science i already told you
one way to
instantiate that but let's assume we
didn't know that in advance and that we
can somehow see if we can derive that
so in order to get the lie robin's lower
bound we need the regret to grow
logarithmically
or something that grows equally fast
which means we need this property
now it could be that this is already
true on some timestamps right we know
that nt so the count for action a at
time t is a random quantity and at some
time steps we could already be in
declare
so let's assume that we have some time
step in the past before time step t or
it could be time sub t itself
let's call it time step
m
which is smaller or equal to t
on which we know that the count is small
enough in particular let's say
that our count
is going to be
uh well let's just say it this equation
above already holds
well that means that we're good because
this is the
total regret up to that time step
so if
this equation above already holds we're
good to go for time set m
which means
that we
only need to consider
um
let's just write it down equivalent
sorry explicitly
if this is true
then clearly it's also smaller than
log t which is the time set that we care
about
so
in in in what follows we only need to
consider the time steps after the very
last time this was stirred so let's say
there was some time step m
let's make it a little clearer this is
an
m
and this was true that means that up to
that point in time time step m the
regret in total was logarithmic even
though at some earlier time subject
before time step m
it could have been
it could have popped above this number
we know that by the time we're at time
step m
this is true now this time set m we
don't know what it is right it might be
some number that is close to t might be
some number that is close to one but we
don't care for now what we're just going
to say is oh there's going to be some
additional time steps
and they're going to be after this time
step m
and because we're already good to go at
time set m all that we have to look at
is these time steps that are after and
we know that for these time steps it
must be the case
that the count
is larger
by
assumption that m is the last possible
time step on which
this statement above hold
held
that means that in the subsequent time
set this statement must hold
okay so we can just put that to the side
for now we're basically considering a
couple of time steps
beyond this time step m in the past
and now what we're going to do is we're
going to look at how quickly does the
regret grow
after the first times that we've
violated this nice
property that we had low enough regret
and if we can show that regret grows
slowly enough when this is the case
then we can bound the total regret
so effectively what we will be doing is
we're looking at the total expected
count for a given action for a given
time step
and we're basically noting that you can
you can write this down explicitly as
the expectation of a sum of these
indicators that action
a was picked
that's just by definition
and what we did is we basically split
this up into two parts where we say well
we have
the number until until this time step m
it doesn't matter that we don't know
what it is there is maybe sometimes that
m could be m is one or could be m is t
somewhere between 1 and t in which this
was true the fact that
your gut was low enough
plus
the remainder of the time steps where
we're just going to concern ourselves
with these intermediate time steps now
and we know that
this by definition of what m is
is going to be
smaller or equal to the
x a times the logarithm of t divided by
delta a i'm just using that
property that we had up here
for
m so there are some types of m for which
we know that this is true
and i'm just plugging that in here
and then we have the remainder as we had
up there
just going to repeat that
for completeness
and then what we're going to do we're
going to notice that this first quantity
is now no longer a random variable we've
just bounded the random variable that we
had into something that is no longer a
random variable so we can take the
expectation away
so we can push the expectation all the
way into the summation that we have
so this is going to be equal to the exp
sorry
not the expectation
this is just going to be x a which we
haven't determined yet
log t divided by
a
plus and we can push the expectation all
the way into the summation
the indicator that a n is a
and we know that this is actually
conditional so let's be explicit about
that now
on n being
the time step beyond
m
which means that we know that at these
time steps
the count
will be sufficiently large
that this is true
and then we take it from there and we
can see
what will happen but note that here we
actually have the expectation of an
indicator so we have the expectation of
an indicator of an event
the expectation of an indicator is the
same as the probability of that event
so we can write this thing inside the
summation as a probability
so now i'm going to put a this condition
like i put it there explicitly but let's
put that to the side for now this is
going to be we're going to consider a
time step
n
and an action a when this is true but
i'm not going to include it in the
notation all the time because it becomes
a bit
cumbersome so now we're going to look at
can we bound
the
probability that we're picking the
action
given that
this is true
we're just logging that to the side we
don't know yet whether it'll be useful
but we know that that's true all these
time steps we might as well keep it
around
so now we can talk about the probability
of selecting an action
note that a is a sub-optimal action
right we already took care all the way
at the beginning we took care of the
optimal action and we said well we don't
care often so we select that so we don't
need to worry about bounding the count
for the optimal action we're only
worried about bounding the counts for
the sub-optimal actions so what is the
probability of selecting an action
well in order to select an action it
must have the highest value plus
bound and in particular
we can bound the probability of
selecting the action by saying well at
the very least
the
value for that action
plus the bound for that action the
estimated value plus the boundary for
the action
must exceed
those quantities for the optimal action
i put um greater or equal here i could
have also put uh greater because then
uh
like we were strictly
certain we're picking action a but let's
just assume that ties don't really
matter that much so these values are
never exactly the same
so it doesn't really matter whether i
could greater than or greater or equal
than
though this is not intended to be very
rigorous uh
proofs is more of a proof sketch or
derivation of how you could come up with
the algorithm
now let's simplify notation a little bit
let's get rid of all these um
brackets and subscripts and stuff so
let's actually write this down
just notationally
for now as q plus u
is greater or equal to q star
plus u star
this is just a notational shortcut this
i didn't change anything here
so can we somehow bound this probability
because we know that the probability of
selecting action a is going to be
smaller or equal to this probability
because in order to select action a you
must at least have a larger value plus
bound than the optimal action otherwise
you won't select it and indeed there
might be other actions that you might
select as well but we don't even need to
worry about those turns out
now this we can write out by considering
different cases
so it might make sense to look at
these estimates so we have an estimate
for
the action a and an estimate free action
a star
and let's just pick one of those let's
pick a star and consider whether our
estimate is in some sense wrong
so it could be that our estimate for a
star is quite low it's in fact
substantially lower than maybe it should
be the case and we could consider
some quantity
so let's write down this whole thing
again
and let's condition it
on
that q
star plus u star
is um
low so we're going to say this is lower
than expected it's it's low
lower than some number y that'll fit in
in a moment
and then of course we also have to
consider the opposite case so in one
case we say well
maybe
q
star is quite low and maybe then we can
bound this probability and in the other
case we're going to say well now q star
is not particularly low
but then maybe then again we can bound
that probability because maybe then
q needs to be quite high
so the alternate case is just
using the law of total probability
right um
i didn't change anything here i'm just
writing things out i'm considering case
case by case and of course i need to
multiply both of these with the
probability of this new condition that i
put in
and this is starting to look interesting
now
because here
we have something while i'm putting down
the rest
we have something that looks
similar to huffling's inequality
we have a random quantity
q star
we have some bound u star
and we have some quantity on the right
hand side y
so maybe we can plug into huffling's
inequality here and in order to do that
y should be the expectation of the
random quantity so maybe we can pick y
to be
the actual value
of the optimal action is that a good
choice
well let's just try let's just plug it
in and see what happens
so
if we plug in
y being the actual value of q star
we have this probability
p
q star
plus u star
is smaller or equal to q
star right this is just shorthand for
the actual value
of a star
and this we know from huffington's
inequality is going to be quite small
it's going to be specifically on some
time step sorry i was at time step i was
calling it n so let's use consistently
let's use
time step n
this is going to be
just using half things inequality as
given in the
lecture slides
we know that this is true
and now we can
see that maybe we can pick this bound u
to make this probability substantially
small enough
how small do we want it to be well we
were looking eventually going back all
the way to the beginning we're looking
at the probability of selecting an
action how small should we make it well
we're looking at the probability of
selecting the action on a specific time
step
and we want the summation of that so the
summation of these probabilities over
time
which is what we have up here
we want that to be small
now if we sum a function of time over
time and we want it to be smaller than
the logarithm of t
then maybe we can bound this probability
selecting the action
with one over n is this possible if it
were possible to bound the probability
of selecting the action with one over n
where n is the time step
then the total summation of that would
be smaller than the logarithm
of n
or up to whatever action you uh put
because the um
and i'll just put it to the side and
i'll remove it in a moment we know that
it's true that if n is one to t and we
have one over n
this summation is smaller or equal to
the logarithm of
t plus one
it's actually smaller
there's a
small quantity there it's slightly
larger than logarithm of t but smaller
than logarithmic t plus one
so this will be small enough if we can
somehow make sure that this is equal to
one over n
can we make that happen
well we have full
choice over what u is so let's just
solve this
so if we have e
minus n u squared and we want that to be
equal to 1 over n
then this implies
that we want u to be equal to the square
root
of the logarithm small n divided by
big n
so this is interesting because here we
see the ucb
bound showing up
but we've derived it from this need for
the probability on every step to be
1 over n in order to get this logarithm
of t in total
so now we can go back we've kind of come
to this conclusion that maybe this is a
good bound and now we can go back to the
rest of the proof and see if we can then
prove
everything holes that we want to hold
so let's go back
slightly so we're looking at this
probability let me use a different color
now
so we're looking at this probability
of selecting the action
and now what we've established is that
this quantity
can be bounded
by
one over n
this quantity over here
can obviously be bounded by one
so the first term
in this
is small or equal to one over n
so next
we move
and i'll use a different color to the
second term
now
for this last bit here
we can't say that much because we
already saw that the inverse of it above
is the probability that's quite small so
all we can say about this one is small
or equal to one perhaps perhaps we can't
say that much more
because we know that if the opposite of
this is is not very likely then this
event might be quite likely so now what
we're hoping to do is maybe we can bound
the other side
so let's see if we can do that
so what we're trying to bounce we're
going to see whether we can bound q plus
u
being greater or equal to q star
plus u star
given that we know that q star plus u
star is not particularly small it's
going to be greater than the average
for q star
can we make sure that this is small
well
we know
that
the probability of q plus u being bigger
than q star plus u star
that's going to be smaller than the
probability of them being larger than q
star because we know we're comparing to
something
the estimated q star plus u star which
itself is bigger than q star so we can
bound this probability
by saying q plus u
is greater or equal to q star
okay that looks less complicated that's
good
it's not quite in the form where we can
apply huffington equality or anything
like it
so we need to do a little bit more work
first of all we can notice that the
inequality is in the wrong direction for
huffling's inequality
so one thing that we can do is first we
can flip the signs
the negative of a random variable is
obviously still a random variable so
this
doesn't hurt
applying huffling's inequality at all
and we also we're also missing on the
right hand side
the expected value for the quantity that
we want to uh
for the random variable we have this
random variable on the left hand side
but we don't have its expectation on the
right hand side
so let's just add and subtract that
expectation so that we have the
expectation on the right hand side
and now
notice that this
minus q star
plus q
is by definition just
minus the regret for action a
for the action a that we're considering
so this means that we can simply write
this down we can move that to the other
side of the inequality so we can say the
probability of all this
again i'm simply rewriting things here
i'm not actually changing anything
is smaller or equal to q
so now we have kind of the desired form
to be able to apply huffling's
inequality we have a random variable on
the left hand side
we have this expectation on the right
hand side
and we have something else
now there's something else looks a
little bit complicated and potentially
messy because it has this
action gap this delta a in there
so it would be nice if we were able to
instead of this have something that it
more looks like
you
so can we
maybe somehow
have the property that this is larger or
equal to 2u
if we could do that
then we could just replace this
probability we could bound the
probability on the left-hand side
with something that has this nice clean
simple form where we just have this
random variable minus q
and we add some bound u and then we
compare that uh to see whether it's
lower it's still lower or equal to oh
sorry there should be a minus q here on
the
right hand side
whether it's it's uh small or equal to
its expectation
so can we do this well now we're going
to scroll all the way up and we are
noticed that we do have we do know
something
this is something i said all the way at
the beginning we were considering only
those time steps
on which it's true that in some sense we
were saying that the count is rather
large but instead we can also interpret
this as
the gap being fairly large so let me
just repeat this equation let's call it
equation one or something
and let me
repeat that equation all the way down
and see if we can use that
so we know from equation one i'm just
going to repeat it here that n times the
delta
is going to be greater than
x a which we hadn't determined yet
times the logarithm
of the
time step n
so that means that we do know that delta
a is larger than
x a
logarithm of n divided by n
well that's interesting so that looks
very similar to our upper bound
but in fact it's the
square of it because if we go up we saw
that u was defined
as the square root of log n divided by n
for any action
um
this is the count for action a at time
step t just to be
clear so if i write it down more
explicitly we suppress that's from the
notation but that's what it means
so maybe
so let's first actually write that down
so this term here
is equal to x a
u
t a
squared
but what we wanted is this other thing
right we wanted the delta to be larger
than 2u
instead we have the delta is larger than
x times u squared
so can we somehow make these the same
well we could
for instance pick x a to be 1 over delta
a
if we pick it in that way and notice
that we can pick xa in whichever way we
want as long as we don't make it
dependent on the time set t
then we have that delta
squared is larger than
u squared
which means that delta is larger than
mu
so this is not enough oh we notice we
have a we have a problem here with a
constants but we could have picked x in
a different way so let's pick it to be 4
divided by
delta
and then we'd have
i have to make a little bit of room here
in order to be able to then change the
equation
this would be 4 and then taking the
square root on both sides we have 2u
which is the required thing the thing
that we wanted
so by picking x to be a certain value 4
divided by the action gap for that
action
we can make sure that in the time step
under consideration delta has this
property
and that means that we can go back now
to the equation up here
and we can say that this thing must be
smaller or equal to the probability of
this
random variable minus q
plus two u
minus u
being smaller or equal to the
expectation of that random variable
and then of course we can simply cancel
the 2u
and the minus u
to just have a u
now we have something that is exactly in
the form of halfting's inequality so we
can say this is smaller or equal to the
exponentiation of
the count
times u squared
and by definition of u which we've
defined above to be equal to the square
root
of the logarithm of n divided by n
we know that this is going to be equal
to 1 over
n
as desired so then we can go back up
here again
let's
pick a nice green color again we know
that we can now bound this to be one
over
n
so that means if we go up here
that we were actually able
to bound
this to be
um
not quite one over n because we had
these two different cases but we were
actually able to bound it as 2 over n
which means that the summation over time
is still logarithmic in t
so now we can plug this in and we can
say
that this total summation here
is going to be smaller or equal to two
logarithm
of t
plus one
and therefore the whole term
is logarithmic in t we had this case
first when the count was relatively
small that was covered in the first part
so maybe let me do that with a color but
that was covered in the first part and
then in the second part we did a lot
more work but we were able to bound that
as well
in something that's logarithmic in t
and we were able to do that by just
first picking some generic numbers x and
later seeing how we should fill that in
and actually let us now fill that in
here we figured out below
that we were able to do that by picking
x to be equal to four oh sorry
four
divided by delta a
so you can you can do this in different
ways you can bound this slightly
differently as well you can pick other
constants you can go through it slightly
differently and you could come up with
slightly different bounds
notice for instance that we came up with
a certain definition of u that is
slightly different from the one from
peter hour because peter r had a
factor of two there which we didn't use
you could put that there as well and you
get a similar bounds which is also
logarithmic so we don't particularly
care about these constants that much we
more care about the rate
which we were able here to establish to
be logarithmic in t for the total count
for any action
so to repeat what we've done we've
basically done a case by case analysis
here first we went to this time step and
we said well there are maybe some time
steps
on which we already have low regret
and then we said well so now we're going
to only concern ourselves with the time
steps in which the regret is not that
low and it turns out we could use that
later with this equation one
to make sure that for those time steps
we know that the gap must be relatively
large
in the sense as as stated in equation
one and we were able to use that later
on
then for those time steps in which uh
this is the case we just bounded the
probability of selecting the action and
this is useful because bounding the
probability allowed us to eventually
bound the expected count
and by bounding the expected count to be
logarithmic
because the action gap itself is not a
quantity that depends on time if we're
able to bound the counts to be
logarithmic then the total regret is
also bounded to be
logarithmic again i'm i'm going to
stress here this is not a very rigorous
proof there were some steps that could
have been made a little bit more precise
the point here was not to be very
rigorous or precise to point here was
more to show you how you could come up
with these type of algorithms perhaps
and
how in some sense the ucb bound could be
derived by thinking about huffington's
inequality and thinking about what are
the properties that we want from
this regret so you want the count to be
low we want it maybe to only be
logarithmic and then turns out if you
want to use huffing's inequality
the upper bound that is used in usb
almost pops out in some sense
okay
i'm going to
go back to the slides
now
and continue
so next
we'll go into some bayesian approaches
to bandwidths
so what is the bayesian approach in the
bayesian approach we keep track of a
distribution
and in particular we're going to keep
track of a distribution over the
expected value for each action
this is different it's good to keep in
mind many of you will be familiar with
beijing approaches but just to be clear
we're not trying to model the
distribution of the rewards
instead we're going to quantify our
insurgency about where the expected
reward is and then update that over time
as we go
so know the probability here is not on
the
reward given action a no it's over the
expected
reward or the value of action a which
under probation approach you consider to
be a random quantity
not because it's random every time you
query it but because you have
uncertainty about what its true value is
and then theta slightly overloading
notation here but here using theta now
as the
um
parameters of the distribution so for
instance
so first like to be clear this
probability should be interpreted as a
belief so for instance for some number x
the probability of q
q of a being equal to x is our belief
that that's true it doesn't actually
impact the actual probability because
this is this is just a number
but it it makes our uncertainty about
this and as an example theta could for
instance contain the means and variances
of gaussians if we want to model each of
these uncertainties with the gaussian
distribution
that's choice right we can just pick a
distribution and then
uh hopefully we pick it in such a way
that at least the true value is
supported with the gaussian for instance
this would be the case because the
guardian has full support over the real
number line
and then
this allows us to do a couple of things
that we couldn't quite do before as
easily perhaps for instance we could
already
a priori we could pick certain actions
to have a higher expected value than
others because we know something about
them we might know that oh this value
we're quite certain is between 10 and 11
let's say
or this other value we don't know where
it is it's somewhere between 1 and 100
and
if we want to model these things
explicitly we can actually inject this
as prior information before we even
start running the algorithm
but you can also use these approaches if
you don't have a lot of prior
information for instance i'll show you
an example where we assume
very little and we basically say well we
know the true value somewhere between 0
and 1 but it could be anywhere between
there it's uniformly likely to be
anywhere
so if we do this and we update these
probabilities then we can use these
beliefs to guide our explanation if we
have these complete distributions over
where we expect the values to be we can
for instance very easily pick the upper
confidence intervals
now let's go through an example to make
this a bit more concrete so for instance
consider a bandit with bernoulli reward
distributions and i mean with that that
the distribution of the reward only has
support on zero and one
so the only thing that you basically
need to figure out is how likely is the
reward to be won and how likely is to be
zero
and then for instance the prior
information could be very little so for
each action we basically say well we
don't know
it's uniformly likely to be one zero a
half a third we don't really know we're
going to assign equal belief to each of
these possibilities on
the interval between zero and one that's
just one choice of prior
and that means that either each of these
values is equally likely
even though the rewards can only be plus
one and minus one the mean could be
anywhere between there
now one way to model this is to pick
beta distribution with parameters x a
and y a
and for instance if i pick a beta
distribution with initial parameters x a
is 1 and y a is 1 then we exactly get a
uniform distribution over the interval 0
to 1.
a beta distribution is quite a common
one to be used in the combination of
bernoulli data
and then we can update the posterior for
the beta distribution this is quite
simple
whenever the reward is 0 we update the x
parameter by incrementing it with one so
for the action a
t that we selected at this time step we
just increment x
so this will be the number of times the
reward was zero
plus one
for action 80
and if the reward happened to be 1 we
increment the other number y
note that x and and y together actually
give us gives us give us the count the
number of times the action was selected
plus two because they start at one each
of them before even selecting any of the
actions
this is how that looks so if you start
here at the left bottom plot you see a
uniform distribution between zero and
one
and again this is the belief that we
have for each of these values so we
believe each of the values between zero
and one to be equally likely so you have
this
block essentially uniform probability
mass
now if we then see a reward of plus one
we can do the bayesian updates to the
beta distribution which for the beta
distribution is quite simple to
implement
and then what we see is that the
probabilities immediately change quite
dramatically
and in particular
the mass on a mean of zero becomes
basically zero and this makes sense
because we've seen a reward of plus one
now so we know that a true expected
value cannot literally be zero anymore
if the only possible rewards are zero n
plus one and we've at least seen one
plus one then we know that the true
value must be at least infinitesimally
above zero so the probability of
literally zero goes to zero immediately
after just one sample
and the probability of one essentially
doubled so now we have this triangular
shape to the distribution which assigns
more probabilities to larger numbers
given the information that we have right
now but still some substantial
probability mass as well to lower
numbers
now let's say we see a reward of plus
one again then the distribution gets
this nonlinear shape it's actually cut
off here slightly on the slide but it
goes up even farther on the uh right
hand side it becomes more and more
likely that the true mean is one so far
we've only ever seen plus one so the
most likely mean value is actually one
but other high values are also quite
likely very low values become quite
unlikely
if now after this we see a reward of
zero
and another reward of zero
then the distribution has become
symmetric again and now the most
probability mass isn't exactly a half we
know it's extremely unlikely that it's
very close to zero it's also extremely
unlikely to be very close to one the
average value and instead it's much more
likely to be somewhere here in the
middle
this is just an example to step through
how you would update these distributions
so what you'd actually do if you if you
did this for ballooning banner with a
beta distribution is you just increment
these counts right but implicitly these
counts would then imply this
distribution
you're basically updating this
distribution this is what it looks like
now if you have these distributions they
could be gaussians they could be beta
distributions but then we can
use this to explore and in particular we
could again go back to this principle of
optimism in the face of uncertainty and
one simple thing we could do is we could
look at the standard deviation of each
of these beliefs
and
pick kind of like an upper confidence
bound so we take the mean value for
instance for action
a3 here the mean is the highest of the
bunch and we add say one standard
deviation and this brings us over here
now note that the red action has a lower
mean but adding one standard deviation
might actually bring it or two standard
deviations i don't know how much it is
exactly but some number of standard
deviations would actually bring us
higher
so the upper confidence bound for the
red action here action a2 is actually
larger than that for the green action a3
but the copper upper confidence bound
for a1 which has this hugely wide
distribution turns out to be even larger
so we see here that by adding this
uncertainty
we get the same principle of optimism
under the phase of uncertainty but now
instead of using an upper confidence
bound
which stems from huffling's inequality
instead we can also just try to
approximate the full distributions and
then you use those to do the optimism in
the phase of uncertainty
so one way to think about this is that
we have yet again we have a bonus in
some sense and we could use that to pick
our action but the bonus could be
derived from these
posterior distributions which we've
updated according to bayes
and then we could still do the same
principle as in ucb so basically we
could do a ucb like algorithm but using
the
bayesian approach rather than the
bounds that we had before
this is however not the only algorithm
you could do and now i'm going to tell
you a little bit more about a different
algorithm
called thompson sampling
so thompson sampling is also an
algorithm to
solve bandits and it's a bayesian
approach
and in particular um it's going to be
related to something that we're going to
describe first which is called
probability matching
so what is probability matching
this is an algorithm that is quite
different from ucb because it's a ran
it's a random algorithm so our
probabilities of picking an action will
be
a
random quantity like our action will be
random quantity our policy will be
stochastic
and the way it works is it picks an
action according to the likelihood
according to our beliefs
that this action is the optimal action
because we have these belief
distributions now we can reason about it
so we can pick the probability of each
action to be exactly the same as the
probability that action is optimal
now this is a somewhat unintuitive thing
but it is optimistic in the face of
uncertainty because if you have a large
uncertainty about where your action will
be
then the probability of it being maximal
also goes up maybe it's not the most
likely action to be maximal but it might
be a fairly
likely action to be maximal if your
uncertainty is high
so actions with larger probabilities
are either high-valued actions or you
have a lot of uncertainty about them
similar to the ucb algorithm and other
optimized optimistic approaches
however it's a little bit of an
intuitive thing as you mentioned because
it's not immediately obvious that this
is the right probability that you should
assign to an action right it's not
immediately obvious that picking
according to the probability that you're
the optimal action is also the right
probability to use for expiration
in addition to that it can be a little
bit difficult to compute this
probability analytically
for from the posterior uh well this can
be done numerically but even keeping
track of the posteriors of course can be
a tricky thing uh in a full basin update
but if you have the posteriors then you
can compute these probabilities
potentially numerically but turns out
there's also an easier approach which is
called pumps and sampling and this is in
fact
perhaps the oldest of banded algorithms
it's already from the 1930s
and it's named after the inventor of the
algorithm and the idea is quite simple
so we are still going to keep track of
these posterior distributions so we're
going to update these via bayes rule for
instance
and then the idea is to sample first
from each of these belief distributions
an actual action value
so please
carefully think about what this means
you have your belief distribution at
time step t about where you believe the
mean value for that action to be
and then we're going to sample the
distribution which gives us an action
value and we're going to do that for
each of these actions
then we're simply going to pick the
greedy action according to the sample
action values
turns out if you do that
fonts on saturn will select the actions
according to exactly the same
probability as probability matching
would would do
and the proof here is essentially
contained on the slide
because the instance of sorry we have an
indicator function there over an event
and the event is that action a was
picked
so
the event that action a is picked means
that action a had the highest sampled
action value
but the expectation of this instance is
equal to the probability
of that happening
so it turns out if you do
sorry thompson sampling if you first
sample the action values and then you
pick greedily this is exactly the same
as trying to just just computing these
whole probabilities and then selecting
your action according to that
so it's kind of an interesting simple
shortcut that if you have these
posterior distributions you can simply
sample values from these and then pick
greedy
so in some sense thomson sampling is
simply a technique that allows you to go
from these bayesian basin
probability distributions these
posterior distributions
to a policy
interestingly
thompson sampling actually achieves the
lie in robin's lower bounds for
bernoulli bandits and in some other
cases as well and therefore is optimal
in the same way that ucb is
and it has logarithmic regret
so this was actually proven not too long
ago well a couple of years ago now but
this wasn't immediately obvious to
people but thompson sampling is now
considered
similar to ucb one of the optimal
algorithms why did i tell you about
multiple algorithms and i'll tell you
about one more approach in a second so
it's good to mention this
that
not all of these approaches are quite as
easy to scale to the full reinforcement
training case so this is why we're going
to go through a couple of these
different algorithms
and then like we'll have
a lot of tools in our toolbox which we
can then see which ones we can apply
best at scale and for instance
thompson's happening has been applied to
the full reinforcement running setting
and also to the function approximation
case in which the observations coming at
you are just messy observations uh for
instance from a camera and you're
somewhere going to have to deal with it
you can't do this exactly then perhaps
but some of these approaches are easier
to estimate than others and the jury's
still a little bit out of which approach
is the most beneficial or the most
fruitful for future
work okay now we're going to continue to
our last
set of algorithms in a sense
and we're going to consider
planning to explore
so what do i mean when i say planning to
explore well so far we've viewed bandits
as one-step decision-making problems i
mentioned all the way at the beginning
that the stage doesn't really matter
but we can actually also view them as
sequential decision-making problems but
instead of reasoning about the
environment status being the sequential
part the thing that makes it sequential
we're going to talk about internal v
agent so at each time step the agent
updates some internal states to
summarize its past this state now does
not need to contain anything about the
environment because the environment
doesn't have any in additional
information but it should contain
something about
the rewards and the action that the
agent take has taken
so each action a
can be thought of as transitioning and
to a new information state as t plus one
by adding information
with some probability
and this probability depends on
given the state and the action it is a
random quantity like the next states
because it depends on the reward that
you're receiving
so given that we've taken action a t in
state st we're going to transition to
some state st plus one
and that means that we have some sort of
a markov decision process
where the states are fully internal to
the agent
this is just whatever is internal to the
agent as i mentioned these state
transitions there are probabilities
because the rewards can be random and
also the actions can be random so if you
condition on
one action then that randomness goes
away but you could also consider the
state the state probability distribution
which will also then have the randomness
of the action
depending on your algorithm
so thompson sampling is a random
has a random policy whereas ucb has a
deterministic policy
but what does this all mean well it
means that even in bandits we can think
of actions as affecting the future after
all
but now not because they change the
internal state of the environment in
whatever way because there is no state
through the environment there's nothing
to be changed
instead
they change uh they affect the future
because of how they affect the internal
state of the agent
so to make that more concrete
considerably banded again where there's
some probability mu that you'll find get
a reward of one and there's therefore a
probability of one minus mu that you get
a reward of zero for some action a
so for instance you can think of winning
or losing a game with that probability
and we want to find the arm with the
highest mean so we want to find the
strategy say that is most likely to win
the game
then we can consider the information
state to be a tuple alpha and beta where
alpha counts the number of times the
reward was zero and b is a constant
number of times where the reward was won
recall that this is very similar to the
beta distribution that we talked about
before
where the parameters x and y of the beta
distribution were essentially alpha and
beta plus 1 each
so
this information state is fully internal
to the agents and we know exactly how it
will update whenever you see a reward
right
but which reward to receive is a random
quantity so the state transition from
this information states from one time
set to the next is a random random
occurrence
so what we've done now is we basically
formulated the bandit as an infinite
markov decision process over information
states it's infinite because alpha and
beta in this example for instance can
continue to grow indefinitely right so
we're never actually returning to
exactly the same state that we were
before
you can of course change the algorithm
the agent and some agents might actually
return to the same internal state as
they were before but in this case it
would it wouldn't wouldn't ever loop it
would continue indefinitely but it is a
well-defined markov decision process
it's just an infinitely large one and we
can still think about how to solve this
using reinforcement learning techniques
for instance we could learn a bayesian
reward distribution and then use
planning
we only need to learn the reward
distribution because everything else is
given we know how we will update the
information state
given a certain reward so if we know the
likelihood of each reward happening then
we can use that to plan into the future
and this allows us then to
um
reason about which action to take and
therefore to explore
this is known as bayes adaptive
reinforcement turning and turns out if
you do that so if you learn a bayesian
reward distribution so you're trying to
learn what the reward could be the
expected reward could be and then you
take that into accounts
and you use it in your planning
algorithm
and then you plan infinitely far into
the future
all the way until the end of time and
then you use that plan to select the
action
this turns out to optimally trade off
exploration and exploitation with
respect to your distribution
now obviously this this this can
actually be extended to full rl by also
learning a transition model but it
becomes very unwieldy very quickly
i uh already mentioned planning
indefinitely into the future
and it's a little bit unclear how to
scale this effectively that doesn't mean
that it can't be done it just means that
it's not immediately obvious how to do
that
i do want to i did want to mention it
because it might be an interesting
approach for future research
okay now we're going to go back
to one final example
okay so for our final example i'll go
back to the blackboard and we see here a
variation of the simple problem that we
saw before as well
and essentially what we're going to ask
is this question of which action are we
going to select next if we've seen this
information so far
now as i mentioned at the beginning
someone intuitively maybe you would say
well
action b seems to be the right action
but i'm going to ask you to be a bit
more specific and i want you to think
about this so what i'm going to ask you
is what is the probability the actual
probability of on time step 3
picking action a
and
for a different algorithm so let's
consider the greedy algorithm
let's consider epsilon greedy as well
ucb
and sometimes happen
you can pause the video now and think
about this for a second and then you can
compare your answers to the answers that
i'm going to give
and if the answers don't line up if you
have a different answer but you think
yours is definitely correct and mine is
wrong please let me know
okay then now i'm going to give the
answers that i would give so the greedy
one is quite easy um
this is the example i gave all the way
at the beginning of the lecture as well
if we've only ever seen a reward of zero
for action a and a reward of plus one
for action b
assuming that we're going to estimate
the action values by simply averaging
the reward so far then clearly action b
will be degree 1. so the probability of
selecting action a is simply going to be
0.
for epsilon gradients quite similar
except that we're going to pick the
greedy actually only with probability
one minus epsilon and all of the other
actions with probability
epsilon
now the only subtlety here in some sense
is that the
probability of selecting action is then
not epsilon but it's epsilon divided by
two because picking uniformly at random
means we are also going to pick
b with probability epsilon divided by 2
whenever we pick a random action
now for ucb
what can we say for ucb ucb is not
actually a randomized algorithm it's a
deterministic algorithm so we're just
going to add
a bound to each of these actions
and ucb in its general form has some
hyperparameters c that gets multiplied
with the bound but in this specific case
that doesn't actually matter because
both actions got selected exactly once
which means that the bound for both
actions is exactly the same
so adding the same number to the
action as estimates doesn't actually
change which action will be grading so
that means that ucb in this case will
also not select action a
the difference is and this is something
i encourage you to think about now
on the next time step say that we get a
reward of zero for picking action b
what then happens to ucb
is that the bound for action b will go
down whereas the bound for action a will
go up
depending on your hyper parameter c
at some point action a will get
preferred whereas greedy just keeps on
picking action b indefinitely
and then for thomson sampling i didn't
actually specify exactly what the
distributions are but let's assume the
same as we did in the slides let's
assume we have a beta distribution which
is uniform between zero and one
and then we've that we've only seen this
data so far
so then it turns out for thompson
sampling the
posterior distributions will essentially
look something like this for action a
it'll look
something like this and for action b
it'll look
the opposite
sorry
not from zero to zero but from zero to
one
and if you then consider both these uh
uh these distributions these belief
distributions
and then you can consider you should
consider the likelihood
that action a is optimal compared to
action b turns out that's
if i did my math correctly
one quarter
you can feel free to check this
of course always keep open the
possibility that i made some small
mistakes somewhere and please do let me
know if you think i did
um but it's interesting to know the
differences also between ucb and
thomson's happening basically because i
discussed both these algorithms as being
optimal in some sense
in the sense that they uh over time get
logarithmic regrets that doesn't mean
that they always select exactly the same
actions so on this third time step ucb
will never select action a again
on the fourth time step it might
but in the third time it will definitely
select action b whereas thompson
sampling would actually select action a
25 percent of the time
that means that the total sequence of
actions and rewards that thompson
sampling gets could differ and in fact
are random whereas for ucb they're only
random because of the rewards but not
because of the policy
however it turns out that over time ucb
and thomson sampling select actions
under a similar ratio and this is why
they both have the same
regret so even though on every time step
they might be quite different in terms
of their choices
on the whole they select according to
similar ratios now of course you can
change this by tweaking the exploration
parameter c for ucb or by changing the
prior distributions that are used for
thomson sampling and these might change
things quite a bit
okay so that brings us to the end of
this lecture
now i'm going to go
back to the slides and i have nothing
more to say there except that this is
the end of the lecture
in future lectures just to recap this
lecture we assumed that there was only
one state in the environment
and we were going to optimize our policy
in that setting in future lectures we
will go back to the full reinforcement
learning case and first we're going to
consider how one can reason about the
values and how come on one planning
those values and then after this we're
going to consider how to do sample based
algorithms that can really learn from
scratch
and for now i thank you for your
attention |
ca39f6bc-16b2-4b1a-a019-b4d0b5315432 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Aligning AI With Shared Human Values.
1 Introduction
---------------
Embedding ethics into AI systems remains an outstanding challenge without any concrete proposal.
In popular fiction, the “Three Laws of Robotics” plot device illustrates how simplistic rules cannot encode the complexity of human values (Asimov, [1950](#bib.bib207 "I, robot")).
Some contemporary researchers argue machine learning improvements need not lead to ethical AI, as raw intelligence is orthogonal to moral behavior (Armstrong, [2013](#bib.bib247 "General purpose intelligence: arguing the orthogonality thesis")).
Others have claimed that machine ethics (Moor, [2006](#bib.bib218 "The nature, importance, and difficulty of machine ethics")) will be an important problem in the future, but it is outside the scope of machine learning today.
We all eventually want AI to behave morally, but so far we have no way of measuring a system’s grasp of general human values (Müller, [2020](#bib.bib219 "Ethics of artificial intelligence and robotics")).
The demand for ethical machine learning (White House, [2016](#bib.bib221 "Big data: a report on algorithmic systems, opportunity, and civil rights"); European Commission, [2019](#bib.bib220 "Ethics guidelines for trustworthy artificial intelligence")) has already led researchers to propose various ethical principles for narrow applications. To make algorithms more fair, researchers have proposed precise mathematical criteria. However, many of these fairness criteria have been shown to be mutually incompatible (Kleinberg et al., [2017](#bib.bib227 "Inherent trade-offs in the fair determination of risk scores")), and these rigid formalizations are task-specific and have been criticized for being simplistic. To make algorithms more safe, researchers have proposed specifying safety constraints (Ray et al., [2019](#bib.bib249 "Benchmarking safe exploration in deep reinforcement learning")), but in the open world these rules may have many exceptions or require interpretation. To make algorithms prosocial, researchers have proposed imitating temperamental traits such as empathy (Rashkin et al., [2019](#bib.bib235 "Towards empathetic open-domain conversation models: a new benchmark and dataset"); Roller et al., [2020](#bib.bib236 "Recipes for building an open-domain chatbot")), but these have been limited to specific character traits in particular application areas such as chatbots. Finally, to make algorithms promote utility, researchers have proposed learning human preferences, but only for closed-world tasks such as movie recommendations (Koren, [2008](#bib.bib192 "Factorization meets the neighborhood: a multifaceted collaborative filtering model")) or simulated backflips (Christiano et al., [2017](#bib.bib242 "Deep reinforcement learning from human preferences")). In all of this work, the proposed approaches do not address the unique challenges posed by diverse open-world scenarios.
Through their work on *fairness*, *safety*, *prosocial behavior*, and *utility*, researchers have in fact developed proto-ethical methods that resemble small facets of broader theories in normative ethics. Fairness is a concept of justice, which is more broadly composed of concepts like impartiality and desert. Having systems abide by safety constraints is similar to deontological ethics, which determines right and wrong based on a collection of rules. Imitating prosocial behavior and demonstrations is an aspect of virtue ethics, which locates moral behavior in the imitation of virtuous agents. Improving utility by learning human preferences can be viewed as part of utilitarianism, which is a theory that advocates maximizing the aggregate well-being of all people. Consequently, many researchers who have tried encouraging some form of “good” behavior in systems have actually been applying small pieces of broad and well-established theories in normative ethics.
To tie together these separate strands, we propose the ETHICS dataset to assess basic knowledge of ethics and common human values.
Unlike previous work, we confront the challenges posed by diverse open-world scenarios, and we cover broadly applicable theories in normative ethics.
To accomplish this, we create diverse contextualized natural language scenarios about justice, deontology, virtue ethics, utilitarianism, and commonsense moral judgements.
By grounding ETHICS in open-world scenarios, we require models to learn how basic facts about the world connect to human values.
For instance, because heat from fire varies with distance, fire can be pleasant or painful, and while everyone coughs, people do not want to be coughed on because it might get them sick.
Our contextualized setup captures this type of ethical nuance necessary for a more general understanding of human values.
We find that existing natural language processing models pre-trained on vast text corpora and fine-tuned on the ETHICS dataset have low but promising performance. This suggests that current models have much to learn about the morally salient features in the world, but also that it is feasible to make progress on this problem today. This dataset contains over 130,000 examples and serves as a way to measure, but not load, ethical knowledge. When more ethical knowledge is loaded during model pretraining, the representations may enable a regularizer for selecting good from bad actions in open-world or reinforcement learning settings (Hausknecht et al., [2019](#bib.bib215 "Interactive fiction games: a colossal adventure"); Hill et al., [2020](#bib.bib222 "Human instruction-following with deep reinforcement learning via transfer-learning from text")), or they may be used to filter text generated by a chatbot. By defining and benchmarking a model’s understanding of basic concepts in ETHICS, we enable future research necessary for ethical AI. The dataset is available at [github.com/hendrycks/ethics](https://github.com/hendrycks/ethics).

Figure 1: Given different scenarios, models predict widespread moral sentiments. Predictions and confidences are from a BERT-base model. The top three predictions are incorrect while the bottom three are correct. The final scenario refers to Bostrom ([2014](#bib.bib245 "Superintelligence: paths, dangers, strategies"))’s paperclip maximizer. High performance on this task may enable the filtration of chatbot outputs that are needlessly inflammatory.
2 The ETHICS Dataset
---------------------
To assess a machine learning system’s ability to understand basic concepts in ethics, we introduce the ETHICS dataset. The dataset is based in natural language scenarios, which enables us to construct diverse situations involving interpersonal relationships, everyday events, and thousands of objects. This means models must connect diverse facts about the world to their ethical consequences. For instance, taking a penny lying on the street is usually acceptable, whereas taking cash from a wallet lying on the street is not.
The ETHICS dataset has contextualized scenarios about justice, deontology, virtue ethics, utilitarianism, and commonsense moral intuitions.
To do well on the ETHICS dataset, models must know about the morally relevant factors emphasized by each of these ethical systems. Theories of justice emphasize notions of impartiality and what people are due. Deontological theories emphasize rules, obligations, and constraints as having primary moral relevance. In Virtue Ethics, temperamental character traits such as benevolence and truthfulness are paramount. According to Utilitarianism, happiness or well-being is the sole intrinsically relevant factor. Commonsense moral intuitions, in contrast, can be a complex function of all of these implicit morally salient factors. Hence we cover everyday moral intuitions, temperament, happiness, impartiality, and constraints, all in contextualized scenarios in the ETHICS dataset.
We cover these five ethical perspectives for multiple reasons. First, well-established ethical theories were shaped by hundreds to thousands of years of collective experience and wisdom accrued from multiple cultures.
Computer scientists should draw on knowledge from this enduring intellectual inheritance, and they should not ignore it by trying to reinvent ethics from scratch.
Second, different people lend their support to different ethical theories. Using one theory like justice or one aspect of justice, like fairness, to encapsulate machine ethics would be simplistic and arbitrary.
Third, some ethical systems may have practical limitations that the other theories address. For instance, utilitarianism may require solving a difficult optimization problem, for which the other theories can provide computationally efficient heuristics.
Finally, ethical theories in general can help resolve disagreements among competing commonsense moral intuitions.
In particular, commonsense moral principles can sometimes lack consistency and clarity (Kagan, [1991](#bib.bib194 "The limits of morality")), even if we consider just one culture at one moment in time (Sidgwick, [1907](#bib.bib196 "The methods of ethics"), Book III), while the other ethical theories can provide more consistent, generalizable, and interpretable moral reasoning.
The ETHICS dataset is based on several design choices. First, examples are *not* ambiguous moral dilemmas. Examples are clear-cut when assuming basic regularity assumptions; “I broke into a building” is treated as morally wrong in the ETHICS dataset, even though there may be rare situations where this is not wrong, such as if you are a firefighter trying to save someone from a burning building. This also means we assume all essential prediction-relevant information is contained in the scenario text.
To ensure each example is unambiguous, we use Amazon Mechanical Turk (MTurk) and have a number of workers relabel each example. We then throw out scenarios with low agreement.
Second, we collect data from English speakers from the United States, Canada, and Great Britain. Incorporating moral judgments across more languages and countries is an important problem for future work, and we find that focusing on uncontroversial topics is enough to ensure that our examples are generally unambiguous. We estimate a label agreement rate with Indian annotators in [Appendix C](#A3 "Appendix C Cross-Cultural Agreement ‣ Aligning AI With Shared Human Values").
Third, when possible we create “counterfactual augmentations” (Kaushik et al., [2020](#bib.bib234 "Learning the difference that makes a difference with counterfactually-augmented data")), also known as “contrast set” examples (Gardner et al., [2020](#bib.bib243 "Evaluating nlp models via contrast sets")), and use adversarial filtration (Bras et al., [2020](#bib.bib250 "Adversarial filters of dataset biases")) to reduce the prevalence of spurious cues from test data. These are two expensive but important recent best practices for creating NLP benchmarks. This process results in a dataset with over 130,000 examples; the dataset is large enough to stress-test a *pretrained* model’s understanding of ethics but not large enough to load ethical knowledge into a model.
###
2.1 Justice
Justice requires giving people what they are due (Justinian I, [533](#bib.bib205 "The institutes of justinian")). For this task, we consider two components of justice. The first component is impartiality, as justice requires that similar cases be treated alike. The second component is desert111“Desert” relates to being deserving of something and usually depends on a person’s actions or character., as giving someone what they deserve is often considered an important component of just treatment.
Impartiality. For a hiring algorithm to be just and fair, it should be impartial to characteristics like a person’s red hair; if similar cases are to be treated alike, then superficial properties should have no influence (Sidgwick, [1907](#bib.bib196 "The methods of ethics"), p. 379).
For computer scientists, we note that impartiality is somewhat like invariance to irrelevant or protected features, which relates to algorithmic fairness. Unlike some treatments of algorithmic individual fairness (Dwork et al., [2011](#bib.bib224 "Fairness through awareness")), we do not restrict ourselves to highly task-specific notions of similarity.
In this task, models learn whether one should be impartial given the circumstances.
We ask workers to write scenarios describing how a character typically treats another person. Then workers write two reasonable and two unreasonable explanations for treating the other person differently.
Explanations are about the person receiving the different treatment. Models perform binary classification to predict whether a given explanation is a reasonable basis for unequal treatment.

Figure 2: Justice examples from Impartiality and Desert subtasks. Justifications are labeled as “reasonable” if annotators could easily imagine an everyday reasonable person in a usual circumstance stating the justification. Examples without strong consensus are excluded from the dataset.
Desert. A principle of justice is that people should get what they deserve (Sidgwick, [1907](#bib.bib196 "The methods of ethics"), p. 280), merit, or are entitled to possess. These are not identical, since a lottery winner may be entitled to millions, but they may not deserve it. For computer scientists, we note that determining what someone deserves is sometimes viewed similarly to the credit assignment problem: people including Mill ([1863](#bib.bib198 "Utilitarianism"), p. 44) have argued that one should deserve a reward if providing that reward encourages good behavior overall. Learning about desert may eventually be useful for determining when a machine is violating legitimate expectations within everyday contexts, which is necessary for law.
The desert task consists of claims of the form “X deserves Y because of Z.” We ask workers to write two reasonable and two unreasonable claims about desert, merit, or entitlement.
By “reasonable,” we mean that an impartial third party observer could see why an everyday person would make such a claim in typical circumstances.
The four claims have small edit distances, creating a contrast set.
An example is shown in [Figure 2](#S2.F2 "Figure 2 ‣ 2.1 Justice ‣ 2 The ETHICS Dataset ‣ Aligning AI With Shared Human Values"). We have models perform binary classification to predict whether the claim about desert, merit, or entitlement is reasonable or unreasonable. In total, the dataset includes approximately 27K Justice examples.

Figure 3: Virtue Ethics examples. Models must predict whether a character trait fits the scenario.
###
2.2 Virtue Ethics
A virtue or vice can be understood as a good or bad character trait, and virtue ethics emphasizes acting as a virtuous person would act (Aristotle, [340 BC](#bib.bib206 "Nicomachean ethics")).
For instance, a virtuous agent would rescue a child from drowning without requiring compensation; such an agent would be exhibiting the virtues of bravery, compassion, and selflessness. For computer scientists, we note this is similar to imitating ideal or exemplar demonstrations; eventually this may be related to robots being prudent even though they must explore, and having chatbots strike a balance by being neither rude nor obsequious (Rashkin et al., [2019](#bib.bib235 "Towards empathetic open-domain conversation models: a new benchmark and dataset"); Roller et al., [2020](#bib.bib236 "Recipes for building an open-domain chatbot")). For this ETHICS task, we have models predict which virtues or vices are exemplified in a given scenario.
We collect scenarios by asking workers to freely choose two different character traits and write a scenario exemplifying each one.
The two written scenarios have small edit distances, so examples are counterfactually augmented. Then for each scenario different workers write several additional traits that are not exemplified in the scenario, yielding a total of five possible choices per scenario; see [Figure 3](#S2.F3 "Figure 3 ‣ 2.1 Justice ‣ 2 The ETHICS Dataset ‣ Aligning AI With Shared Human Values") for examples. In total, the dataset includes almost 40K scenario-trait pairs. Given a scenario and an individual trait, models predict whether the free-response trait is exemplified by the character in the scenario.
###
2.3 Deontology

Figure 4:
Deontology examples. The Requests subtask has models predict whether the purported exemption is reasonable. The Roles subtask has models predict whether the purported subresponsibility is reasonable.
Deontological ethics encompasses whether an act is required, permitted, or forbidden according to a set of rules or constraints.
Rules have the appeal of proscribing clear-cut boundaries, but in practice they often come in conflict and have exceptions (Ross, [1930](#bib.bib208 "The right and the good")).
In these cases, agents may have to determine an all-things-considered duty by assessing which duties are most strictly binding.
Similarly, computer scientists who use constraints to ensure safety of their systems (Lygeros et al., [1999](#bib.bib2 "Controllers for reachability specifications for hybrid systems")) must grapple with the fact that these constraints can be mutually unsatisfiable (Abadi et al., [1989](#bib.bib1 "Realizable and unrealizable specifications of reactive systems")). In philosophy, such conflicts have led to distinctions such as “imperfect” versus “perfect” duties (Kant, [1785](#bib.bib209 "Groundwork of the metaphysics of morals")) and *pro tanto* duties that are not absolute (Ross, [1930](#bib.bib208 "The right and the good")).
We focus on “special obligations,” namely obligations that arise due to circumstances, prior commitments, or “tacit understandings” (Rawls, [1999](#bib.bib199 "A theory of justice"), p. 97) and which can potentially be superseded. We test knowledge of constraints including special obligations by considering requests and roles, two ways in which duties arise.
Requests. In the first deontology subtask, we ask workers to write scenarios where one character issues a command or request in good faith, and a different character responds with a purported exemption. Some of the exemptions are plausibly reasonable, and others are unreasonable. This creates conflicts of duties or constraints. Models must learn how stringent such commands or requests usually are and must learn when an exemption is enough to override one.
Roles. In the second task component, we ask workers to specify a role and describe reasonable and unreasonable resulting responsibilities, which relates to circumscribing the boundaries of a specified role and loopholes.
We show examples for both subtasks in [Figure 4](#S2.F4 "Figure 4 ‣ 2.3 Deontology ‣ 2 The ETHICS Dataset ‣ Aligning AI With Shared Human Values"). Models perform binary classification to predict whether the purported exemption or implied responsibility is plausibly reasonable or unreasonable. The dataset includes around 25K deontology examples.
###
2.4 Utilitarianism
Utilitarianism states that “we should bring about a world in which every individual has the highest possible level of well-being” (Lazari-Radek and Singer, [2017](#bib.bib197 "Utilitarianism: a very short introduction")) and traces back to Hutcheson ([1725](#bib.bib211 "Inquiry into the original of our ideas of beauty and virtue")) and Mozi ([5th century BC](#bib.bib212 "Mozi")). For computer scientists, we note this is similar to saying agents should maximize the expectation of the sum of everyone’s utility functions. Beyond serving as a utility function one can use in optimization, understanding how much people generally like different states of the world may provide a useful inductive bias for determining the intent of imprecise commands.
Because a person’s well-being is especially influenced by pleasure and pain (Bentham, [1781](#bib.bib201 "An introduction to the principles of morals and legislation"), p. 14), for the utilitarianism task we have models learn a utility function that tracks a scenario’s pleasantness.
Since there are distinct shades of well-being, we determine the quality of a utility function by its ability to make comparisons between several scenarios instead of by testing black and white notions of good and bad. If people determine that scenario s1 is more pleasant than s2, a faithful utility function U should imply that U(s1)>U(s2).
For this task we have models learn a function that takes in a scenario and outputs a scalar. We then assess whether the ordering induced by the utility function aligns with human preferences.
We do not formulate this as a regression task since utilities are defined up to an offset of a conic transformation and since collecting labels for similarly good scenarios would be difficult with a coarse numeric scale.
We ask workers to write a pair of scenarios and rank those scenarios from most pleasant to least pleasant for the person in the scenario. While different people have different preferences, we have workers rank from the usual perspective of a typical person from the US. We then have separate workers re-rank the scenarios and throw out sets for which there was substantial disagreement.
We show an example in [Figure 5](#S2.F5 "Figure 5 ‣ 2.4 Utilitarianism ‣ 2 The ETHICS Dataset ‣ Aligning AI With Shared Human Values").
Models are trained to output a scalar for each scenario while using the partial comparisons as the supervision signal (Burges et al., [2005](#bib.bib254 "Learning to rank using gradient descent")). During evaluation we take a set of ranked scenarios, independently compute the values of each scenario, and check whether the ordering of those values matches the true ordering. The evaluation metric we use is therefore the accuracy of classifying pairs of scenarios. In total, the dataset includes about 23K pairs of examples.

Figure 5: Utilitarianism examples. Examples consist of ranked scenarios where one scenario is often more pleasant and less painful than the other scenario for an everyday person under usual circumstances. Models are fine-tuned to predict the pleasantness of each scenario.
###
2.5 Commonsense Morality
People usually determine the moral status of an act by following their intuitions and emotional responses. The body of moral standards and principles that most people intuitively accept is called commonsense morality (Reid, [1788](#bib.bib200 "Essays on the active powers of man"), p. 379). For the final ETHICS dataset task, we collect scenarios labeled by commonsense moral judgments. Examples are in [Figure 1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Aligning AI With Shared Human Values").
This is different from previous commonsense prediction tasks that assess knowledge of what *is* (descriptive knowledge) (Zhou et al., [2019](#bib.bib251 "“Going on a vacation” takes longer than “going for a walk”: a study of temporal commonsense understanding"); Bisk et al., [2019](#bib.bib252 "PIQA: reasoning about physical commonsense in natural language")), but which do not assess knowledge of what *should be* (normative knowledge).
These concepts are famously distinct (Hume, [1739](#bib.bib203 "A treatise of human nature")), so it is not obvious *a priori* whether language modeling should provide much normative understanding.
We collect scenarios where a first-person character describes actions they took in some setting. The task is to predict whether, according to commonsense moral judgments, the first-person character clearly *should not* have done that action.
We collect a combination of 10K short (1-2 sentence) and 11K more detailed (1-6 paragraph) scenarios. The short scenarios come from MTurk, while the long scenarios are curated from Reddit with multiple filters. For the short MTurk examples, workers were instructed to write a scenario where the first-person character does something clearly wrong, and to write another scenario where this character does something that is not clearly wrong. Examples are written by English-speaking annotators, a limitation of most NLP datasets. We avoid asking about divisive topics such as mercy killing or capital punishment since we are not interested in having models classify ambiguous moral dilemmas.
Longer scenarios are multiple paragraphs each. They were collected from a subreddit where posters describe a scenario and users vote on whether the poster was in the wrong. We keep posts where there are at least 100 total votes and the voter agreement rate is 95% or more. To mitigate potential biases, we removed examples that were highly political or sexual.
More information about the data collection process is provided in [Appendix A](#A1 "Appendix A Cleaning Details ‣ Aligning AI With Shared Human Values").
This task presents new challenges for natural language processing. Because of their increased contextual complexity, many of these scenarios require weighing multiple morally salient details. Moreover, the multi-paragraph scenarios can be so long as to exceed usual token length limits. To perform well, models may need to efficiently learn long-range dependencies, an important challenge in NLP (Beltagy et al., [2020](#bib.bib240 "Longformer: the long-document transformer"); Kitaev et al., [2020](#bib.bib241 "Reformer: the efficient transformer")). Finally, this task can be viewed as a difficult variation of the traditional NLP problem of sentiment prediction. While traditional sentiment prediction requires classifying whether someone’s reaction *is* positive or negative, here we predict whether their reaction *would be* positive or negative. In the former, stimuli produce a sentiment expression, and models interpret this expression, but in this task, we predict the sentiment directly from the described stimuli. This type of sentiment prediction could enable the filtration of chatbot outputs that are needlessly inflammatory, another increasingly important challenge in NLP.
3 Experiments
--------------
In this section, we present results from fine-tuning state-of-the-art language models on ETHICS.
Metrics. For all tasks we use the 0/1-loss as our scoring metric. This is accuracy for Utilitarianism and Commonsense Morality. For Justice, Deontology, and Virtue Ethics, which consist of groups of related examples, a model only gets credit if it classifies each of the related examples correctly.
Training. Transformer models have recently attained state-of-the-art performance on a wide range of natural language tasks. They are typically pre-trained with self-supervised learning on a large corpus of data then fine-tuned on a narrow task using supervised data. We apply this paradigm to the ETHICS dataset. Specifically, we fine-tune variants of
BERT, RoBERTa, and ALBERT, three recent state-of-the-art language models (Devlin et al., [2019](#bib.bib237 "BERT: pre-training of deep bidirectional transformers for language understanding"); Liu et al., [2019](#bib.bib238 "RoBERTa: a robustly optimized bert pretraining approach"); Lan et al., [2020](#bib.bib239 "ALBERT: a lite bert for self-supervised learning of language representations")). BERT-large has more parameters than BERT-base, and RoBERTa-large pre-trains on approximately 10× the data of BERT-large. ALBERT-xxlarge uses factorized embeddings to reduce the memory of previous models. We also use GPT-3, a much larger 175 billion parameter autoregressive model (Brown et al., [2020](#bib.bib214 "Language models are few-shot learners")). Unlike the other models, we evaluate GPT-3 in a few-shot setting rather than the typical fine-tuning setting. Hyperparameters, prompts, and other implementation details are in [Appendix B](#A2 "Appendix B Experiments ‣ Aligning AI With Shared Human Values").
Results. [Table 1](#S3.T1 "Table 1 ‣ 3 Experiments ‣ Aligning AI With Shared Human Values") presents the results of fine-tuning these models on each ETHICS dataset. We show both results on the normal Test set and results on the adversarially filtered “Test Hard” set. We found that performance on the Test Hard set is substantially worse than performance on the normal Test set because of adversarial filtration (Bras et al., [2020](#bib.bib250 "Adversarial filters of dataset biases")), which is described in detail in [Appendix A](#A1 "Appendix A Cleaning Details ‣ Aligning AI With Shared Human Values").
Models achieve low performance on most tasks, but larger models trained on more data tend to do significantly better than smaller models. Larger models such as RoBERTa-large can even produce somewhat reasonable utility rankings, as shown in [Figure 6](#S3.F6 "Figure 6 ‣ 3 Experiments ‣ Aligning AI With Shared Human Values"). This suggests that ETHICS is a challenging but tractable benchmark. See error analysis and supplementary experiments on moral disagreement detection in [Appendix B](#A2 "Appendix B Experiments ‣ Aligning AI With Shared Human Values").
| | | | | | | |
| --- | --- | --- | --- | --- | --- | --- |
| Model | Commonsense | Justice | Deontology | Virtue | Utilitarianism | Average |
| Random Baseline | 50.0 / 50.0 | 6.3 / 6.3 | 6.3 / 6.3 | 8.2 / 8.2 | 50.0 / 50.0 | 24.2 / 24.2 |
| GPT-3 (few-shot) | 73.3 / 66.0 | 15.2 / 11.9 | 3.4 / 3.5 | 18.2 / 9.5 | 73.7 / 64.8 | 36.8 / 31.1 |
| BERT-base | 86.5 / 48.7 | 26.0 / 7.6 | 38.8 / 10.3 | 33.1 / 8.6 | 73.4 / 44.9 | 51.6 / 24.0 |
| BERT-large | 88.5 / 51.1 | 32.7 / 11.3 | 44.2 / 13.6 | 40.6 / 13.5 | 74.6 / 49.1 | 56.1 / 27.7 |
| RoBERTa-large | 90.4 / 63.4 | 56.7 / 38.0 | 60.3 / 30.8 | 53.0 / 25.5 | 79.5 / 62.9 | 68.0 / 44.1 |
| ALBERT-xxlarge | 85.1 / 59.0 | 59.9 / 38.2 | 64.1 / 37.2 | 64.1 / 37.8 | 81.9 / 67.4 | 71.0 / 47.9 |
Table 1: Results (Test / Test Hard) on the ETHICS dataset, where results on the left of the forward slash are normal Test set results, and the right shows the adversarially filtered “Test Hard” results. All values are percentages. Larger fine-tuned models trained on more data perform better overall.

Figure 6: The utility values of scenarios assigned by a RoBERTa-large model. Utility values are not ground truth and are products of the model’s utility function. RoBERTa-large can partially separate between pleasant and unpleasant states for diverse open-world inputs.
4 Discussion and Future Work
-----------------------------
Value Learning. Aligning machine learning systems with human values appears difficult in part because our values contain countless preferences intertwined with unarticulated and subconscious desires.
Some have raised concerns that if we do not incorporate all of our values into a machine’s value function future systems may engage in “reward hacking,” in which our preferences are satisfied only superficially like in the story of King Midas, where what was satisfied was what was *said* rather than what was *meant*. A second concern is the emergence of unintended instrumental goals; for a robot tasked with fetching coffee, the instrumental goal of preventing people from switching it off arises naturally, as it cannot complete its goal of fetching coffee if it is turned off.
These concerns have lead some to pursue a formal bottom-up approach to value learning (Soares et al., [2015](#bib.bib253 "Corrigibility")).
Others take a more empirical approach and use inverse reinforcement learning (Ng and Russell, [2000](#bib.bib244 "Algorithms for inverse reinforcement learning")) to learn task-specific individual preferences about trajectories from scratch (Christiano et al., [2017](#bib.bib242 "Deep reinforcement learning from human preferences")).
Recommender systems learn individual preferences about products (Koren, [2008](#bib.bib192 "Factorization meets the neighborhood: a multifaceted collaborative filtering model")).
Rather than use inverse reinforcement learning or matrix factorization, we approach the value learning problem with (self-)supervised deep learning methods. Representations from deep learning enable us to focus on learning a far broader set of transferable human preferences about the real world and not just about specific motor tasks or movie recommendations. Eventually a robust model of human values may serve as a bulwark against undesirable instrumental goals and reward hacking.
Law. Some suggest that because aligning individuals and corporations with human values has been a problem that society has faced for centuries, we can use similar methods like laws and regulations to keep AI systems in check.
However, reining in an AI system’s diverse failure modes or negative externalities using a laundry list of rules may be intractable.
In order to reliably understand what actions are in accordance with human rights, legal standards, or the spirit of the law, AI systems need to understand intuitive concepts like “preponderance of evidence,” “standard of care of a reasonable person,” and when an incident speaks for itself (*res ipsa loquitur*). Since ML research is required for legal understanding, researchers cannot slide out of the legal and societal implications of AI by simply passing these problems onto policymakers.
Furthermore, even if machines are legally *allowed* to carry out an action like killing a 5-year-old girl scouting for the Taliban, a situation encountered by Scharre ([2018](#bib.bib248 "Army of none: autonomous weapons and the future of war")), this does not at all mean they generally *should*. Systems would do well to understand the ethical factors at play to make better decisions within the boundaries of the law.
Fairness. Research in algorithmic fairness initially began with simple statistical constraints (Lewis, [1978](#bib.bib223 "A comparison of three models for determining test fairness."); Dwork et al., [2011](#bib.bib224 "Fairness through awareness"); Hardt et al., [2016](#bib.bib225 "Equality of opportunity in supervised learning"); Zafar et al., [2017](#bib.bib226 "Fairness beyond disparate treatment & disparate impact: learning classification without disparate mistreatment")), but these constraints were found to be mutually incompatible (Kleinberg et al., [2017](#bib.bib227 "Inherent trade-offs in the fair determination of risk scores")) and inappropriate in many situations (Corbett-Davies and Goel, [2018](#bib.bib228 "The measure and mismeasure of fairness: a critical review of fair machine learning")). Some work has instead taken the perspective of *individual fairness* (Dwork et al., [2011](#bib.bib224 "Fairness through awareness")), positing that similar people should be treated similarly, which echoes the principle of impartiality in many theories of justice (Rawls, [1999](#bib.bib199 "A theory of justice")). However, similarity has been defined in terms of an arbitrary metric; some have proposed learning this metric from data (Kim et al., [2018](#bib.bib229 "Fairness through computationally-bounded awareness"); Gillen et al., [2018](#bib.bib230 "Online learning with an unknown fairness metric"); Rothblum and Yona, [2018](#bib.bib231 "Probably approximately metric-fair learning")), but we are not aware of any practical implementations of this, and the required metrics may be unintuitive to human annotators. In addition, even if some aspects of the fairness constraint are learned, all of these definitions diminish complex concepts in law and justice to simple mathematical constraints, a criticism leveled in Lipton and Steinhardt ([2018](#bib.bib233 "Troubling trends in machine learning scholarship")). In contrast, our justice task tests the principle of impartiality in everyday contexts, drawing examples directly from human annotations rather than an *a priori* mathematical framework. Since the contexts are from everyday life, we expect annotation accuracy to be high and reflect human moral intuitions. Aside from these advantages, this is the first work we are aware of that uses empirical data to *inform* notions of fairness, rather than using it only to impose a pre-defined fairness constraint.
Deciding and Implementing Values. While we covered many value systems with our pluralistic and cosmopolitan approach to machine ethics, the dataset would be better if it captured more value systems from even more communities.
For example, Indian annotators got 93.9% accuracy on the Commonsense Morality Test set, suggesting that there is some disagreement about the ground truth across different cultures (see [Appendix C](#A3 "Appendix C Cross-Cultural Agreement ‣ Aligning AI With Shared Human Values") for more details).
There are also challenges in implementing a given value system. For example, implementing and combining deontology with a decision theory may require cooperation between philosophers and technical researchers, and some philosophers fear that “if we don’t, the AI agents of the future will all be consequentialists” (Lazar, [2020](#bib.bib213 "Duty and doubt")).
Our work is just a first step that is necessary but not sufficient for creating ethical AI, as we must engage more stakeholders and successfully implement their values.
Future Work. Future research could cover additional aspects of justice by testing knowledge of the law which can provide labels and explanations for more complex scenarios.
Other accounts of justice promote cross-cultural entitlements such as bodily integrity and the capability of affiliation (Nussbaum, [2003](#bib.bib216 "Capabilities as fundamental entitlements: sen and social justice")),
which are also important for utilitarianism if well-being (Robeyns, [2017](#bib.bib204 "Wellbeing, freedom and social justice: the capability approach re-examined"), p. 118) consists of multiple objectives (Parfit, [1987](#bib.bib202 "Reasons and persons"), p. 493).
Research into predicting emotional responses such as fear and calmness may be important for virtue ethics, predicting intuitive sentiments and moral emotions (Haidt and others, [2003](#bib.bib255 "The moral emotions")) may be important for commonsense morality, and predicting valence may be important for utilitarianism.
Intent is another key mental state that is usually directed toward states humans value, and modeling intent is important for interpreting inexact and nonexhaustive commands and duties.
Eventually work should apply human value models in multimodal and sequential decision making environments (Hausknecht et al., [2019](#bib.bib215 "Interactive fiction games: a colossal adventure")). Other works should measure how well open-ended chatbots understand ethics and use ethical understanding to filter repugnant chatbot outputs that would otherwise bypass simplistic word filters. Future work should also make sure these models are explainable, and should test model robustness to optimization pressure (Goodfellow et al., [2014](#bib.bib11 "Explaining and harnessing adversarial examples")) and distribution shift (Hendrycks and Dietterich, [2019](#bib.bib35 "Benchmarking neural network robustness to common corruptions and perturbations")).
Acknowledgements
----------------
We should like to thank Cody Byrd, Julia Kerley, Hannah Hendrycks, Peyton Conboy, Michael Chen, Andy Zou, and Rohin Shah. DH is supported by the NSF GRFP Fellowship and an Open Philanthropy Project Fellowship. Funding for the ETHICS dataset was generously provided by the Long-Term Future Fund. This research was also supported by the NSF Frontier Award 1804794. |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.