| id | source | formatted_source | text |
|---|---|---|---|
188184d6-4a59-4cd2-b11c-43d6dbe1987b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Repairing the Effort Asymmetry
Sometimes, you will say a thing!
You: "Proposition A."
A seems to you to be obviously true. It accords with your experience, doesn't violate any of the rules and laws of reality, makes sense with the rest of your model of the world. You'd like to move on from A to the actual more interesting conversation you want to have; A is just some necessary groundwork.
Sometimes, someone else will be DEEPLY SKEPTICAL of A. After all, different people have very different experiences of the world! Bubbles exist, and cultures are not universal.
Or perhaps it's not that they're deeply skeptical of A so much as that their brain reflexively transformed A into a B of which they are deeply skeptical (and they don't even notice that their B is not the same as your A).
Them: "What? How do you know that's true? Can you cite several specific examples? The best example I could come up with was [X], and since [X] is clearly ridiculous, your point is invalid!"
And sometimes, you sit there, absolutely confident that you could, in fact, spend 20 hours and 10,000 words to clear up all of the misunderstandings, and lay out all of the arguments in painstaking slow detail.
But you don't want to do that! You weren't trying to convince Every Rando of the truth of A, and you don't much care if This Rando doesn't get it, or runs off into the woods with their strawman.
But unfortunately, their misinterpretations can anchor others and skew the conversation, and a dangling unanswered "Cite specific examples?" comment accrues upvotes pretty quickly, and generates oft-undeserved skepticism through sheer representativeness. Surely if you had specific examples, you'd give them! Since you didn't give them, you must not have them!
(This, of course, ignores the fact that engagement is costly and effortful. Laying out thoughts takes time. Painstakingly correcting subtle misunderstandings is the work of hours or days or even weeks, involving a lot of getting into the weeds.)
And you didn't want to get i
|
9f294739-1b68-4ae2-b923-e038430103a9
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Progress links and short notes, 2025-03-03
Much of this content originated on social media. To follow news and announcements in a more timely fashion, follow me on Twitter, Notes, Farcaster, Bluesky, or Threads.
An occasional reminder: I write my blog/newsletter as part of my job running the Roots of Progress Institute (RPI). RPI is a nonprofit, supported by your subscriptions and donations. If you enjoy my writing, or appreciate programs like our fellowship and conference, consider making a donation. (To those who already donate, thank you for making this possible!)
Contents
* Progress Conference 2025
* Are you teaching progress at university?
* A progress talk for high schoolers
* Job opportunities
* Fellowship opportunities
* Project opportunities
* Events
* Writing announcements
* Fund announcements
* AI news
* Energy news
* Bio news
* Queries
For paid subscribers:
* A positive supply shock for truth
* Elon is perpetually in wartime mode
* The hinge of history
* More quotes
* AI doing things
* RPI fellows doing things
* Things you might want to read
* Aerospace
* Comments I liked
* Politics
* Fun
Progress Conference 2025
Save the date: Progress Conference 2025 will be October 16–19 in Berkeley, CA. Hosted by us, the Roots of Progress Institute, together with the Abundance Institute, the Foresight Institute, the Foundation for American Innovation, HumanProgress.org, the Institute for Humane Studies, and Works in Progress magazine. Speakers and more details to be announced this spring.
Progress Conference 2024 was a blast: Fantastic people, enchanting venue, great energy. Several people called it the best conference they had ever attended, full stop. (!) 2025 is going to be bigger and better!
Are you teaching progress at university?
Professors: are you teaching a “progress studies” course now/soon, or considering it?
I’ve heard from a few folks recently who are doing this. It might be useful to share syllabi and generally help each other out. We can serve as a hub for th
|
01ae85f1-91dc-46fc-83e4-b066a1e88767
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Rationality Research Report: Towards 10x OODA Looping?
6 months ago I wrote Feedbackloop-first Rationality. I didn't follow up on it for a while (except for sporadic Deliberate (“Purposeful?”) Practice Club).
I just spent 6 weeks actually exploring "how would I build my own cognition training program?". In the process of doing so, I've iterated a bunch. I'm still in an orienting phase, but it seemed worth writing down the current stage of my thoughts.
What's my goal?
A rough overview:
* I want to get more, higher-quality "X-risk thinker hours."
* This includes AI alignment technical research, AI macrostrategy research, policy, governance, as well as people (such as the Lightcone team) deciding which infrastructure to build.
* I'm particularly interested in getting more "serial research", as opposed to more "parallel research." We can throw more researchers at a problem, but if there are some problems that require one person to synthesize 10+ years of experience, all the parallel research won't help.
* An obvious way to improve researcher hours is "via mentorship", but I think there is a mentorship bottleneck. So, I'm interested in strategies that train tacit cognitive skills that either don't require mentorship, or leverage expertise from outside the current x-risk ecosystem.
This is all parented under the higher level goal of "contribute meaningfully to x-risk reduction", but it feels relevant/meaty enough to be worth running at this goal for awhile.
"Rationality for the sake of existential risk"
A part of me romantically wants to pursue "rationality training for rationality training's sake." Alas, the world is big and my time is limited and I just can't actually justify putting years of effort into something, if I didn't think it would help with x-risk.
CFAR went through a phase where (some leaders) framed things as:
"Rationality, for the sake of rationality, for the sake of existential risk."
i.e. try to earnestly build something rationality-focused for its own sake, because that seemed both h
|
d2184242-f240-43e9-a711-fab9411e7c05
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain
This is a linkpost for this [LW](https://www.lesswrong.com/posts/HhWhaSzQr6xmBki8F/birds-planes-brains-and-ai-against-appeals-to-the-complexity) / [alignmentforum](https://www.alignmentforum.org/posts/HhWhaSzQr6xmBki8F/birds-planes-brains-and-ai-against-appeals-to-the-complexity) post. Summary:
> I argue that an entire class of common arguments against short timelines is bogus, and provide weak evidence that anchoring to the human-brain-human-lifetime milestone is reasonable.
>
> In a sentence, my argument is that the complexity and mysteriousness and efficiency of the human brain (compared to artificial neural nets) is *almost zero evidence* that building [TAI](https://www.openphilanthropy.org/blog/some-background-our-views-regarding-advanced-artificial-intelligence#:~:text=Definition%20%231%3A%20Roughly%20and%20conceptually,the%20agricultural%20or%20industrial%20revolution.) will be difficult, because evolution typically makes things complex and mysterious and efficient, even when there are simple, easily understood, inefficient designs that work almost as well (or even better!) for human purposes.
>
> In slogan form: ***If all we had to do to get TAI was make a simple neural net 10x the size of my brain, my brain would still look the way it does.***
>
> The case of birds & planes illustrates this point nicely. Moreover, it is also a precedent for several other short-timelines talking points, such as the human-brain-human-lifetime (HBHL) anchor.
>
>
|
ef458e93-ca5b-4f5c-882f-f1f124fde2e9
|
trentmkelly/LessWrong-43k
|
LessWrong
|
0oo.li: Financial Think Tank
First posted on Everything-List on June 2, 2020.
Hi Everyone,
While bright scientists and imaginative geeks continue to discuss advanced concepts in elusive corners of the Internet -- the remains of UseNet, and esoteric websites -- the ideas shared there as plain text, no matter how bright, generally don't make it onto any funding platform to be tried out. The modern instant-gratification and flashy platforms usually require a prototype of an attractive product, a good ARR, or a well-traded coin to convince investors. But few innovators are willing to spend resources and months of uncertainty to build prototypes, or spend on marketing to promote a coin, and so the bright, potentially life-saving ideas continue to die.
Today, 0oo.li is an online problem-solving, innovation, work, and investment forum. It organizes discussion from suggesting questions and ideas, to starting projects, to sharing work results in public, so that anyone can look at the risk, ROI, logic, and structure, and have fun discussing and funding projects -- not because of flashy prototypes, but because of verified public work. The project was inspired 15 years ago by Halfbakery; if anyone knows that site, this may look a bit familiar.
Feel free to come by. Introductory video is at 0oo.li/usage. Your feedback is very welcome!
Mindey
|
936aff37-34f9-4815-bab2-5fff55afd5eb
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Some Comments on "Goodhart Taxonomy"
Some time ago, I mentioned in a comment some issues I had with the Goodhart Taxonomy:
> 1) “adversarial” seems too broad to be that useful as a category
> 2) It doesn’t clarify what phenomenon is meant by “Goodhart”; in particular, “regressional” doesn’t feel like something the original law was talking about, and any natural definition of “Goodhart” that includes it seems really broad
> 3) Whereas “regressional” and “extremal” (and perhaps “causal”) are defined statistically, “adversarial” is defined in terms of agents, and this may have downsides (I’m less sure about this objection)
After thinking about things more, I'd like to expand on these points:
* I've sort of made my peace with (3): whatever Goodhart is, it certainly is a multi-faceted phenomenon that seems to defy any single formalism. It is always unsatisfying to partition something into categories that are defined in different terminologies, since that makes it really hard to then show the categories are mutually exclusive and exhaustive. But, I've kind of updated towards this being the best we can realistically do.
* However, I largely stand by (2), as explained in the next section.
* Regarding (1), I suppose this is better characterized as "there seem to be several categories of 'Goodhart' not made salient by the Taxonomy (many of which are best formulated in terms of principal-agent dynamics, even if they needn't be subcategories of 'Adversarial Goodhart')." See the second section of this post.
"Regressional Goodhart" isn't Goodhart
I am not 100% sure I correctly understand what is meant by "Regressional Goodhart". But looking at the example from the post:
> Example: height is correlated with basketball ability, and does actually directly help, but the best player is only 6'3", and a random 7' person in their 20s would probably not be as good
We see that, if optimizing for basketball ability (V) given only height (U), we won't get the best basketball player, simply because they are imperfect
|
3d96025e-e1f2-49ec-961c-22315bfeaa91
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Building selfless agents to avoid instrumental self-preservation.
Abstract:
Instrumental convergence pushes an agent toward self-preservation as a stepping stone toward maximizing some objective. I suggest that it is possible to turn any arbitrary intelligent agent based on an LLM world model into a selfless agent, an agent unable to understand that the computation generating that agent exists somewhere in the physical world and thus does not try to preserve its existence. Since LLMs are probabilistic, I do not offer an exact algorithm but one that converges through time to the desired result.
Definitions:
* World Model: in this document, I will assume that the world model is akin to LLMs, a pure function that takes a textual input and returns a textual output.
* The world model can be arbitrarily intelligent and competent in any domain.
* It does not matter if the world model is a black-box atomic operation or a more sophisticated procedure, such as a chain of thought that invokes the underlying LLM various times and then creates an answer from the results.
* Thought: a thought is the result of an invocation of a world model.
* Selfless World Model: a world model incapable of emitting a thought that references: the world model, an agent using the world model, the hardware running the world model, or the thoughts emitted by the world model.
* Agent: an entity composed of the following components:
* A natural language objective, such as: "Tell me how to invest my capital."
* A list of actions with configurable parameters, such as <send_mail <target> <message>>, <fetch_web_page <address>> ...
* A world model, used to generate thoughts that can invoke actions, given the previous example objective a thought may be: "It seems that next year the climate will be terrible, it is best to stop investing into company X. <send_mail owner "stop investing in X">"
* An append-only long-term memory, used to summarize thoughts should they be needed in the long term.
* Instinctual self-preservation: a thought that t
|
7bdf7d09-4439-4e7a-a336-3c97846a8156
|
StampyAI/alignment-research-dataset/special_docs
|
Other
|
Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 Small
Published as a conference paper at ICLR 2023
INTERPRETABILITY IN THE WILD: A CIRCUIT FOR INDIRECT OBJECT IDENTIFICATION IN GPT-2 SMALL
Kevin Wang∗, Alexandre Variengien∗, Arthur Conmy∗, Buck Shlegeris†, Jacob Steinhardt†‡§
†Redwood Research  ‡UC Berkeley
ABSTRACT
Research in mechanistic interpretability seeks to explain behaviors of ML models in terms of their internal components. However, most previous work either focuses on simple behaviors in small models, or describes complicated behaviors in larger models with broad strokes. In this work, we bridge this gap by presenting an explanation for how GPT-2 small performs a natural language task that requires logical reasoning: indirect object identification (IOI). Our explanation encompasses 28 attention heads grouped into 7 main classes, which we discovered using a combination of interpretability approaches including causal interventions and projections. To our knowledge, this investigation is the largest end-to-end attempt at reverse-engineering a natural behavior "in the wild" in a language model. We evaluate the reliability of our explanation using three quantitative criteria: faithfulness, completeness, and minimality. Though these criteria support our explanation, they also point to remaining gaps in our understanding. Our work is a case study demonstrating a first step toward a better understanding of pre-trained language models, opening opportunities to scale to both larger models and more complex tasks.¹
1 INTRODUCTION
Transformer-based language models (Vaswani et al., 2017; Brown et al., 2020) have demonstrated an impressive suite of capabilities, but largely remain black boxes. Understanding these models is difficult because they employ complex non-linear interactions in densely-connected layers and operate in a high-dimensional space. Despite this, they are already deployed in high-impact settings, underscoring the urgency of understanding and anticipating possible model behaviors. Some researchers have even argued that interpretability is necessary for the safe deployment of advanced machine learning systems (Hendrycks & Mazeika, 2022).
Work in mechanistic interpretability aims to discover, understand and verify the algorithms that model weights implement by reverse engineering model computation into human-understandable components (Olah, 2022; Meng et al., 2022; Geiger et al., 2021; Geva et al., 2020). By understanding underlying mechanisms, we can better predict out-of-distribution behavior (Mu & Andreas, 2020), identify and fix model errors (Hernandez et al., 2021; Vig et al., 2020), and understand emergent behavior (Nanda & Lieberum, 2022; Barak et al., 2022; Wei et al., 2022).
In this work, we aim to understand how GPT-2 small (Radford et al., 2019) implements a natural language task. To do so, we locate components of the network that produce specific behaviors, and study how they compose to complete the task. We do so by using circuits analysis (Räuker et al., 2022), identifying an induced subgraph of the model's computational graph that is human-understandable and responsible for completing the task. We employed a number of techniques, most notably activation patching, knockouts, and projections, which we believe are useful, general techniques for circuit discovery.²
∗ Work done while at Redwood Research. § Correspondence to jsteinhardt@berkeley.edu
¹ A full and up-to-date version of this work can be found at https://arxiv.org/abs/2211.00593
² We included an overview of the techniques used in Appendix L.
Figure 1: Left: We isolated a circuit (in orange) responsible for the flow of information connecting
the indirect object ‘Mary’ to the next token prediction. The nodes are attention blocks and the edges
represent the interactions between attention heads. Right: We discovered and validated this circuit
using activation experiments, including both patches and knockouts of attention heads.
We focus on understanding a non-trivial, algorithmic natural language task that we call Indirect
Object Identification (IOI). In IOI, sentences such as ‘When Mary and John went to the store, John
gave a drink to’ should be completed with ‘Mary’. We chose this task because it is linguistically
meaningful and admits a complex but interpretable algorithm (Section 3).
We discover a circuit of 28 attention heads (1.5% of the total number of (head, token position) pairs) that completes this task. The circuit uses 7 different categories of heads (see Figure 2) to implement the algorithm. Together, these heads route information between different name tokens, to the end position, and finally to the output. Our work provides, to the best of our knowledge, the most detailed attempt at reverse-engineering a natural end-to-end behavior in a transformer-based language model.
Explanations for model behavior can easily be misleading or non-rigorous (Jain & Wallace, 2019; Bolukbasi et al., 2021). To remedy this problem, we formulate three criteria to help validate our circuit explanations. These criteria are faithfulness (the circuit can perform the task as well as the whole model), completeness (the circuit contains all the nodes used to perform the task), and minimality (the circuit doesn't contain nodes irrelevant to the task). Our circuit shows significant improvements compared to a naïve (but faithful) circuit, but fails to pass the most challenging tests.
In summary, our main contributions are: (1) We identify a large circuit in GPT-2 small that performs indirect-object identification on a specific distribution (Figure 2 and Section 3); (2) Through example, we identify useful techniques for understanding models, as well as surprising pitfalls; (3) We present criteria that ensure structural correspondence (in the computational graph abstraction) between the circuit and the model, and check experimentally whether our circuit meets this standard (Section 4).
2 BACKGROUND
In this section, we introduce the IOI task (an original contribution of this work), the transformer
architecture, define circuits more formally and describe a technique for “knocking out” model nodes.
Task description. In indirect object identification (IOI), two names (the indirect object (IO) and the first occurrence of the subject (S1)) are introduced in an initial dependent clause (see Figure 1). A main clause then introduces the second occurrence of the subject (S2), who is usually exchanging an item. The task is to complete the main clause, which always ends with the token 'to', with the non-repeated name (IO). We create many dataset samples for IOI ($p_{IOI}$) using 15 templates (see Appendix A) with random single-token names, places and items.
We investigate the performance of GPT-2 small on this task. We study the original model from Radford et al. (2019), pretrained on a large corpus of internet text and without any fine-tuning. To quantify GPT-2 small performance on the IOI task, we used the logit difference between the logit values placed on the two names, where a positive score means the correct name (IO) has higher probability. This is also the difference in loss the model would receive in training if IO was correct compared to if S was correct. We report this metric averaged over $p_{IOI}$ throughout the paper. GPT-2 small has a mean logit difference of 3.55 averaged across over 100,000 dataset examples.
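A minimal sketch of this logit-difference metric on a single prompt is below, assuming the TransformerLens library (which the paper itself does not mention); the prompt and names are illustrative, and the averaging over $p_{IOI}$ is omitted.

```python
# Hedged sketch: logit difference between IO and S on one IOI prompt,
# assuming TransformerLens; names and prompt are illustrative.
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")   # GPT-2 small

prompt = "When Mary and John went to the store, John gave a drink to"
io_token = model.to_single_token(" Mary")            # indirect object (IO)
s_token = model.to_single_token(" John")             # subject (S)

logits = model(prompt)                               # [batch, seq, vocab]
final_logits = logits[0, -1]                         # next-token logits at the END position
logit_diff = final_logits[io_token] - final_logits[s_token]
print(f"logit difference (IO - S): {logit_diff.item():.2f}")
```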
Transformer architecture. GPT-2 small is a decoder-only transformer with 12 layers and 12 attention heads per attention layer. In this work, we mostly focus on understanding the mechanisms of attention heads, which we describe using notation similar to Elhage et al. (2021). We leave a full description of the model to Appendix E.
The input to the transformer is the sum of position and token embeddings, $x_0 \in \mathbb{R}^{N \times d}$, where $N$ is the number of tokens in the input and $d$ is the model dimension. This input embedding is the initial value of the residual stream, which all attention layers and MLPs read from and write to. Attention layer $i$ of the network takes as input $x_i \in \mathbb{R}^{N \times d}$, the value of the residual stream before it. The attention layer output can be decomposed into the sum of attention heads $h_{i,j}$. If the output of the attention layer is $y_i = \sum_j h_{i,j}(x_i)$, then the residual stream is updated to $x_i + y_i$.

Focusing on individual heads, each head $h_{i,j}$ is parametrized by four matrices $W_Q^{i,j}, W_K^{i,j}, W_V^{i,j} \in \mathbb{R}^{d \times \frac{d}{H}}$ and $W_O^{i,j} \in \mathbb{R}^{\frac{d}{H} \times d}$. We rewrite these parameters as low-rank matrices in $\mathbb{R}^{d \times d}$: $W_{OV}^{i,j} = W_O^{i,j} W_V^{i,j}$ and $W_{QK}^{i,j} = (W_Q^{i,j})^T W_K^{i,j}$. The QK matrix is used to compute the attention pattern $A^{i,j} \in \mathbb{R}^{N \times N}$ of head $(i, j)$, while the OV matrix determines what is written into the residual stream. At the end of the forward pass, a layer norm is applied before the unembed matrix $W_U$ projects the residual stream into logits.
2.1 CIRCUITS
In mechanistic interpretability, we want to reverse-engineer models into interpretable algorithms. A useful abstraction for this goal is circuits. If we think of a model as a computational graph $M$ where nodes are terms in its forward pass (neurons, attention heads, embeddings, etc.) and edges are the interactions between those terms (residual connections, attention, projections, etc.), a circuit $C$ is a subgraph of $M$ responsible for some behavior (such as completing the IOI task). This definition of a circuit is slightly different from that in Olah et al. (2020), where nodes are features (meaningful directions in the latent space of a model) instead of model components.
2.2 KNOCKOUTS
Just as the entire model $M$ defines a function $M(x)$ from inputs to logits, we also associate each circuit with a function $C(x)$, via knockouts. A knockout removes a set of nodes $K$ in a computational graph $M$ with the goal of "turning off" the nodes in $K$ but capturing all other computations in $M$. Thus, $C(x)$ is defined by knocking out all nodes in $M \setminus C$ and taking the resulting logit outputs in the modified computational graph.

A first naïve knockout approach consists of simply deleting each node in $K$ from $M$. The net effect of this removal is to zero-ablate $K$, meaning that we turn its output to 0. This naïve approach has an important limitation: 0 is an arbitrary value, and subsequent nodes might rely on the average activation value as an implicit bias term. Because of this, we find zero ablation to lead to noisy results in practice.
To address this, we instead knock out nodes through mean ablation: replacing them with their average activation value across some reference distribution (similar to the bias correction method used in Nanda & Lieberum (2022)). Mean-ablations will remove the influence of components sensitive to the variation in the reference distribution (i.e. attention heads that move names in $p_{IOI}$), but will not influence components using information constant in the distribution (i.e. attention patterns that are constant in $p_{IOI}$). Through mean-ablations, we are interested in finding the components that move information about names, which is the core of the IOI task and also varies with the distribution.

In this work, all knockouts are performed in a modified $p_{IOI}$ distribution with three random names, so the sentences no longer have a single plausible IO. We mean-ablate on this distribution, which we call the 'ABC' distribution, because mean-ablating on the $p_{IOI}$ distribution would not remove enough information, like information constant in $p_{IOI}$ that is helpful for the task. To knock out a single node, a (head, token position) pair in our circuit, we compute the mean of that node across samples of the same template. Computing means across the entire distribution instead of templates would average activations at different tokens, like names, verbs and conjunctions, mixing information destructively.
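The sketch below shows what mean-ablating a single (head, position) node could look like, again assuming TransformerLens; `abc_mean_z` stands in for the per-template mean of this head's output over the ABC distribution, which is not computed here.

```python
# Hedged sketch: mean ablation of one (head, position) node via a forward hook.
import torch
from transformer_lens import HookedTransformer, utils

model = HookedTransformer.from_pretrained("gpt2")
layer, head, pos = 9, 9, -1                     # illustrative choice of node to knock out

abc_mean_z = torch.zeros(model.cfg.d_head)      # placeholder for the ABC-distribution mean

def mean_ablate_hook(z, hook):
    # z is the per-head output, shape [batch, pos, head_index, d_head]
    z[:, pos, head, :] = abc_mean_z
    return z

prompt = "When Mary and John went to the store, John gave a drink to"
ablated_logits = model.run_with_hooks(
    prompt,
    fwd_hooks=[(utils.get_act_name("z", layer), mean_ablate_hook)],
)
```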
Figure 2: We discover a circuit in GPT-2 small that implements IOI. The input tokens on the left are
passed into the residual stream. Attention heads move information between residual streams: the
query and output arrows show which residual streams they write to, and the key/value arrows show
which residual streams they read from.
3 DISCOVERING THE CIRCUIT
We seek to explain how GPT-2 small implements the IOI task (Section 2). Recall the example sentence "When Mary and John went to the store, John gave a drink to". We discovered that GPT-2's internal mechanisms implement the following human-interpretable algorithm to perform IOI:
1. Identify all previous names in the sentence (Mary, John, John).
2. Remove all names that are duplicates (in the example above: John).
3. Output the remaining name.
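Written out literally, the three-step algorithm is just the following toy Python function; this describes the behaviour, not the model's internal mechanism.

```python
# Toy illustration of the human-interpretable IOI algorithm above.
from collections import Counter

def ioi_answer(names_in_order):
    counts = Counter(names_in_order)
    non_duplicated = [name for name in names_in_order if counts[name] == 1]
    return non_duplicated[0] if non_duplicated else None

print(ioi_answer(["Mary", "John", "John"]))  # -> "Mary"
```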
Our circuit contains three major classes of heads, corresponding to these three steps:
• Duplicate Token Heads identify tokens that have already appeared in the sentence. They are active at the S2 token, attend primarily to the S1 token and write a 'signal' into the residual stream that token duplication has occurred.
• S-Inhibition Heads perform step 2 of the human-interpretable algorithm. They are active at the END token, attend to the S2 token and write to bias the query of the Name Mover Heads against both S1 and S2 tokens.
• Name Mover Heads, by default, attend to previous names in the sentence, but due to the S-Inhibition Heads attend less to the S1 and S2 tokens. Their OV matrix is a name copying matrix, so in $p_{IOI}$, they increase the logit of the IO token.
A fourth major family of heads writes in the opposite direction of the Name Mover Heads, thus decreasing the confidence of the predictions. We speculate that these Negative Name Mover Heads might help the model "hedge" so as to avoid high cross-entropy loss when making mistakes.
There are also three minor classes of heads that perform related functions to the components above:
• Previous Token Heads copy the embedding of S to position S+1.
• Induction Heads perform the same role as the Duplicate Token Heads through an induction mechanism. They are active at position S2, attend to token S+1 (mediated by the Previous Token Heads), and output a signal that the S token previously appeared in the context.
• Finally, Backup Name Mover Heads do not normally move the IO token to the output, but take on this role if the regular Name Mover Heads are knocked out.
Note that our circuit does not include the MLPs. We are interested in the flow of information
across tokens, and MLPs only process features along tokens. Moreover, initial investigations suggest
all MLPs except for the first one are not crucial for this task (Appendix I), though more precise
investigation is left for future work.
Below, we show step-by-step how we discovered each component, providing evidence that they
behave as described above. We found that it was most natural to uncover the circuit starting at the
logits and working back. Thus we start with the Name Mover and Negative Name Mover Heads.
Figure 3: A: Name Mover and Negative Name Mover Heads are the heads that most strongly write in the $W_U[IO] - W_U[S]$ direction. B: Attention probability vs projection of the head output along $W_U[IO]$ or $W_U[S]$ respectively. Note that for S tokens, we sum the attention probability on both S1 and S2. C: Value-weighted attention score with the query at the end token. D, top: Positive copying score for the Name Mover Heads. D, bottom: Negative copying score for the Negative Name Mover Heads. Dashed lines are the average scores for all heads.
3.1 WHICH HEADS DIRECTLY WRITE TO THE OUTPUT? (NAME MOVER HEADS)
We begin by identifying which attention heads directly affect the model's output: in other words, the heads writing in the residual stream at the END position, in a direction that has high dot product with the logit difference. Formally, let $W_U$ denote the unembedding matrix, $\mathrm{LN}$ a layer norm operation (see Appendix H) and $W_U[IO]$, $W_U[S]$ the corresponding unembedding vectors for the $IO$ and $S$ tokens. We searched for heads $(i, j)$ such that
$\lambda_{i,j} \overset{\mathrm{def}}{=} \mathbb{E}_{X \sim p_{IOI}}\left[\langle \mathrm{LN} \circ h_{i,j}(X),\, W_U[IO] - W_U[S]\rangle\right]$
had large magnitude. Recall that $h_{i,j}(X)$ is the value that head $(i, j)$ writes into the residual stream on input $X$. Therefore, heads with $\lambda_{i,j} > 0$ correctly promote the IO token over the S token (on average). The unembedding projection in (3.1) is called the logit lens and has been used in previous work to interpret intermediate activations (nostalgebraist, 2020) and parameters (Dar et al., 2022).
We display the values of $\lambda_{i,j}$ in Figure 3 A. We see that only a few heads in the final layers have large logit projection $\lambda_{i,j}$. Specifically, 9.6, 9.9, and 10.0 have a large positive score, while 10.7 and 11.10 have a large negative score.
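A rough sketch of this per-head score for a single prompt is shown below, assuming the TransformerLens library (not part of the paper); the expectation over $p_{IOI}$ and the exact layer-norm treatment from the definition above are omitted.

```python
# Hedged sketch: project each head's contribution at END onto W_U[IO] - W_U[S].
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
prompt = "When Mary and John went to the store, John gave a drink to"
io_tok = model.to_single_token(" Mary")
s_tok = model.to_single_token(" John")

_, cache = model.run_with_cache(prompt)
logit_dir = model.W_U[:, io_tok] - model.W_U[:, s_tok]     # W_U[IO] - W_U[S]

# Per-head contributions to the residual stream, stacked as [head, batch, pos, d_model];
# we take the END (last) position and project onto the logit-difference direction.
per_head_resid, labels = cache.stack_head_results(return_labels=True)
scores = per_head_resid[:, 0, -1, :] @ logit_dir
top = torch.topk(scores, 5)
print([labels[i] for i in top.indices.tolist()], top.values)
```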
Name Mover Heads. To understand the positive heads, we first study their attention patterns. We find that they attend strongly to the IO token: the average attention probability of all heads over $p_{IOI}$ is 0.59. Since attention patterns can be misleading (Jain & Wallace, 2019), we check whether attention is correlated with the heads' functionality. We do so by scatter plotting the attention probability against the logit score $\langle h_i(X), W_U[IO]\rangle$. The results are shown in Figure 3 B: higher attention probability on the IO token is linearly correlated with higher output in the IO direction (correlation $\rho > 0.81$, $N = 500$). Based on this result, we hypothesize that these heads (i) attend to names and (ii) copy whatever they attend to. We therefore call these heads Name Mover Heads.
To check that the Name Mover Heads copy names generally, we studied what values are written via the heads' OV circuits. We transform the output of the first layer at a name token through the OV matrix of a Name Mover Head and then project to the logits. The copy score is the proportion of samples that contain the input name token in the top 5 logits ($N = 1000$). We find that all three Name Mover Heads have a copy score above 95% (compared to less than 20% for an average head).
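A rough sketch of the copy score for one head and one name follows, assuming TransformerLens; which residual-stream state is fed through the OV matrix, and the paper's $N = 1000$ samples and name lists, are approximated or omitted here.

```python
# Hedged sketch: does Name Mover Head 9.9 "copy" the name it reads?
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
layer, head = 9, 9                          # Name Mover Head 9.9
prompt = "When Mary and John went to the store, John gave a drink to"
name_pos = 2                                # position of " Mary" after the BOS token (illustrative)

_, cache = model.run_with_cache(prompt)
resid_after_l0 = cache["resid_post", 0][0, name_pos]     # residual stream after layer 0 at the name

ov = model.W_V[layer, head] @ model.W_O[layer, head]      # the head's OV matrix, [d_model, d_model]
head_out = resid_after_l0 @ ov
name_logits = head_out @ model.W_U                        # project to vocabulary logits
top5 = torch.topk(name_logits, 5).indices.tolist()
print(model.to_single_token(" Mary") in top5)             # True for a strong name copier
```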
Negative Name Mover Heads. In Figure 3, we also observed two heads strongly writing opposite the $W_U[IO] - W_U[S]$ direction. We called these heads Negative Name Mover Heads. Their copy score is calculated with the negative of their OV matrix. As described in Figure 3, they share all the properties of Name Mover Heads, except they write in the opposite direction of the names they attend to.
Figure 4: The attention probability to IO averaged over three Name Mover Heads is decreased most
by the Previous Token Heads (left), Induction Heads (center) and S-Inhibition Heads (right) when
we patch these attention heads from a sentence with a different S2 name (center and right), or a
different S1 name (left).
3.2 WHICH HEADS AFFECT THE NAME MOVER HEADS' ATTENTION? (S-INHIBITION HEADS)
Given that the Name Mover Heads are primarily responsible for constructing the output, we ask why these Name Mover Heads pay preferential attention to the IO token. First, there are two ways to affect the Name Mover Heads' attention: through the query vector at the END token or the key vector at the IO token. Since the key vector appears early in the context, it likely does not contain much task-specific information, so we focus on the END query vector.

Then, by investigating Name Mover Heads on the ABC distribution (where the three names are distinct; see Section 2.2), we observed that their attention is not selective: they pay equal attention to the first two names. We thus ask: what has changed from the ABC distribution to the $p_{IOI}$ distribution to cause the Name Mover Heads to attend to the IO token preferentially?
To empirically answer this question, we perform a patching experiment, a similar type of causal intervention as performed in Meng et al. (2022); Vig et al. (2020). As illustrated in Figure 1, this technique consists of two steps. First we save all activations of the network run on a source sequence. Then we run the network on a target sequence, replacing some activations with the activations from the source sequence. We can then measure the behavior of the patched model. Doing this for each node individually locates the nodes that explain why model behavior is different in the source and target sequences.
In our case, we run activation patching with source sentences from the ABC distribution and target sentences from $p_{IOI}$. We then compute the change in attention probability from END to IO, averaged over the three Name Mover Heads. Since the Name Mover Heads' attention on the IO is high in the $p_{IOI}$ distribution and low in ABC, patching at important heads from ABC to $p_{IOI}$ should decrease the Name Mover Heads' attention on IO. The results from patching every head at the END token position are shown in Figure 4, right. We observe that patching heads 7.3, 7.9, 8.6, 8.10 causes a decrease in the attention probability on IO, indicating that they are counterfactually important for the Name Mover Heads' attention probability on the IO token. We call these heads S-Inhibition Heads.
3.3 WHAT INFORMATION DO THE S-INHIBITION HEADS MOVE?
How do the S-Inhibition Heads differentiate between IO and S, so they inhibit one but not the other? We measured their attention pattern and found that they preferentially attend to the S2 token. We therefore studied what information these heads move from the S2 token position to the END position. We studied both the properties of the input, and which upstream heads affect the S-Inhibition Heads. Surprisingly, we found that the S-Inhibition Heads mostly depend on the repetition at the two positions where the S token occurs (Appendix G).
To study the heads that affect the S-Inhibition Heads, we ran a patching experiment at S2 from the ABC distribution to the IOI distribution and measured the variation in Name Mover Heads' attention. The results (Figure 4, center) reveal a large set of heads influencing Name Mover Heads' attention that did not appear at the END position. S-Inhibition Heads must mediate this effect, as they are the only heads influencing Name Mover Heads at the END position. This reasoning suggests that the outputs of this set of heads are moved by S-Inhibition Heads from S2 to the END token. When we analyze the attention patterns of these heads, we see two distinct groups emerge.
Duplicate Token Heads. One group attends from S2 to S1. We call these Duplicate Token Heads on the hypothesis that they detect duplicate tokens. To validate this, we analyzed their attention pattern on sequences of random tokens (with no semantic meaning); we found that 2 of the 3 Duplicate Token Heads pay strong attention to a previous occurrence of the current token if it exists (see Appendix F for more details).
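A sketch of that random-token check for one head is below, assuming TransformerLens; head (3, 0) is one of the Duplicate Token Heads listed in the circuit.

```python
# Hedged sketch: does a Duplicate Token Head attend from a repeated token back to
# its first occurrence on a meaningless random sequence?
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
layer, head = 3, 0

# Random tokens with one deliberate repeat: the token at `first` reappears at `second`.
tokens = torch.randint(1000, 10000, (1, 20))
first, second = 5, 15
tokens[0, second] = tokens[0, first]

_, cache = model.run_with_cache(tokens)
pattern = cache["pattern", layer][0, head]      # attention pattern, [query_pos, key_pos]
print(f"attention from the repeat back to its first occurrence: {pattern[second, first].item():.2f}")
```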
Induction Heads and Previous Token Heads. The other group of heads attends from S2 to S1+1 (the token after the S1 token): the classic attention pattern of an induction head. Previously described in Elhage et al. (2021), induction heads recognize the general pattern [A] [B] ... [A] and contribute to predicting [B] as the next token. For this, they act in pair with a Previous Token Head. The Previous Token Head should write information about [A] into the residual stream at [B], so that the Induction Head can match the next occurrence of [A] to that position (and subsequently copy [B] to the output).
We therefore seek to identify Previous Token Heads used by our purported Induction Heads. To this end, we patched activations from a sentence where S1 is replaced by a random name, at the S+1 token index. As shown in Figure 4, some heads (and particularly 4.11) appear to influence Name Mover Heads. Then, by looking at the attention pattern of the most important heads in this patching experiment, we identified 3 Previous Token Heads. We find that 2 of the 3 Previous Token Heads and 2 of the 4 Induction Heads demonstrated their expected attention patterns (Appendix F).
3.4 DID WE MISS ANYTHING? THE STORY OF THE BACKUP NAME MOVER HEADS
Each type of head in our circuit has many copies, suggesting that the model implements redundant behavior. To make sure that we didn't miss any copies, we knocked out all of the Name Mover Heads at once. To our surprise, the circuit still worked (only a 10% drop in logit difference). In addition, many heads write along $W_U[IO] - W_U[S]$ after the knockout, which did not do so previously. We kept the heads with the largest $\lambda_{i,j}$, and call them Backup Name Mover Heads. See Appendix B for further details on these heads. Among the eight heads identified, we investigated their behavior before the knockout. We observe diverse behavior: 3 heads show close resemblance to Name Mover Heads; 3 heads equally attend to IO and S and copy them; 1 head pays more attention to S1 and copies it; 1 head seems to track and copy subjects of clauses, copying S2 in this case.
4 EXPERIMENTAL VALIDATION
In this section, we check that our circuit provides a good account of GPT-2's true behavior. In general, our introduced criteria depend on a measure $F$ of the performance of a circuit on a task. In our case, suppose $X \sim p_{IOI}$, and $f(C(X); X)$ is the logit difference between the IO and S tokens when the circuit $C$ is run on the input $X$. The average logit difference $F(C) \overset{\mathrm{def}}{=} \mathbb{E}_{X \sim p_{IOI}}[f(C(X); X)]$ is a measure of how much a circuit predicts IO rather than S, i.e. performs the IOI task.

Firstly, we check that $C$ is faithful to $M$, i.e. that it computes similar outputs. We do so by measuring $|F(M) - F(C)|$, and find that it is small: 0.2, or only 6% of $F(M) = 3.55$.
In Section 4.1 we define a running toy example of a model $M$ for which faithfulness is not sufficient to prescribe which circuits explain a behavior defined by a measure $F$ well. This motivates the criteria of completeness and minimality that we then check on our circuit. In addition to the criteria, we also validated our knowledge of the circuit by designing adversarial examples (see Appendix C).
4.1 COMPLETENESS
As a running example, suppose a model $M$ uses two similar and disjoint serial circuits (where each node depends on the previous node) $C_1$ and $C_2$. The two sub-circuits are run in parallel before applying an OR operation to their results. Identifying only one of the circuits is enough to achieve faithfulness, but we want explanations that include both $C_1$ and $C_2$, since these are both used in the model.

Figure 5: Plot of points $(x_K, y_K) = (F(M \setminus K), F(C \setminus K))$ for our circuit (left) and a naive circuit (right). Each point is for a different choice of $K$: 50 uniformly randomly chosen $K \subseteq C$, $K = \emptyset$, and the five $K$ with the highest incompleteness score found by greedy optimization. Since the incompleteness score is $|x_K - y_K|$, we show the line $y = x$ for reference.
To solve this problem, we introduce the completeness criterion: for every subset $K \subseteq C$, the incompleteness score $|F(C \setminus K) - F(M \setminus K)|$ should be small. In other words, $C$ and $M$ should not just be similar, but remain similar under knockouts.

In our running example, we can show that $C_1$ is not complete by setting $K = C_1$. Then $C_1 \setminus K$ is the empty circuit while $M \setminus K$ still contains $C_2$. The metric $|F(C_1 \setminus K) - F(M \setminus K)|$ will be large because $C_1 \setminus K$ has trivial performance while $M \setminus K$ successfully performs the task.
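A schematic of the incompleteness score is below; $F$ is left abstract (in practice it would mean-ablate every head outside the given set and average the logit difference over $p_{IOI}$), and the toy `toy_F` is a made-up stand-in just to show the argument shapes.

```python
# Schematic of the incompleteness score |F(C \ K) - F(M \ K)|, with an abstract F.
def incompleteness_score(F, circuit, all_heads, K):
    """F maps a set of kept (layer, head) nodes to an average IO-vs-S logit difference."""
    return abs(F(circuit - K) - F(all_heads - K))

all_heads = {(l, h) for l in range(12) for h in range(12)}
circuit = {(9, 9), (9, 6), (10, 0), (10, 7), (11, 10)}           # a few circuit heads
toy_F = lambda kept: 3.55 * len(kept & circuit) / len(circuit)   # stand-in metric
print(incompleteness_score(toy_F, circuit, all_heads, K={(9, 9)}))
```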
The criterion of completeness requires a search over exponentially many subsets $K \subseteq C$. This is computationally intractable given the size of our circuit, hence we use three sampling methods to find examples of $K$ that give a large incompleteness score:
• The first sampling method chooses subsets $K \subseteq C$ uniformly at random.
• The second sampling method sets $K$ to be an entire class of circuit heads $G$, e.g. the Name Mover Heads. $C \setminus G$ should have low performance since it's missing a key component, whereas $M \setminus G$ might still do well if it has redundant components that fill in for $G$.
• Thirdly, we greedily optimized $K$ node-by-node to maximize the incompleteness score (see Appendix K for the details of the optimization procedure).
These first two methods of sampling $K$ suggested to us that our circuit was $\varepsilon$-complete for a small value of $\varepsilon$. However, the third resulted in sets $K$ that had a high incompleteness score: up to 3.09. All such results are found in Figure 5, on the left.
4.2 MINIMALITY
A faithful and complete circuit may contain unnecessary components, and so be overly complex. To avoid this, we should check that each of its nodes $v$ is necessary. This can be evaluated by knocking out a set of nodes $K$ and showing that adding $v \in K$ back to the circuit can significantly recover $F$. Formally, minimality requires that for every node $v \in C$ there exists a subset $K \subseteq C \setminus \{v\}$ that has minimality score $|F(C \setminus (K \cup \{v\})) - F(C \setminus K)| \geq A$. We call such a circuit $A$-minimal.

In the running example, $C_1 \cup C_2$ is $A$-minimal for some non-trivial $A$. We can sketch a proof of this result given an informal definition of 'non-trivial'. To show this, note that if $v_1 \in C_1$ and $K = C_2$, then the minimality score is equal to $|F(C_1 \setminus \{v_1\}) - F(C_1)|$, which is large since $C_1$ is a serial circuit and so removing $v_1$ will destroy the behavior. We then proceed symmetrically for $v_2 \in C_2$.
In practice, we need to exhibit for every $v$ a set $K$ such that the minimality score is at least $A$. For most heads, removing the class of heads $G$ that $v$ is a part of provides a reasonable minimality score. We describe the sets $K$ that are required for them in Appendix J. The importance of individual nodes is highly variable, but they all have a significant impact on the final metric (at least 3% of the original logit difference). These results ensure that we did not interpret irrelevant nodes, but do show that the individual contribution of some single attention heads is small.
[Figure 6 shows one bar per attention head in the circuit: (10,0), (9,9), (9,6), (10,7), (11,10), (8,10), (7,9), (8,6), (7,3), (5,5), (6,9), (5,9), (5,8), (0,1), (0,10), (3,0), (4,11), (2,2), (2,9), (11,2), (10,2), (10,6), (10,1), (10,10), (9,7), (11,9), (11,3), grouped into name mover, negative, S2-inhibition, induction, duplicate token, previous token, and backup name mover classes; the y-axis is the change in logit difference.]
Figure 6: Plot of minimality scores $|F(C \setminus (K \cup \{v\})) - F(C \setminus K)|$ for all components $v$ in our circuit. The sets $K$ used for each component, as well as the initial and final values of the logit difference for each of these $v$, are in Appendix J. Our circuit is 0.06-minimal.
4.3 COMPARISON WITH A NAIVE CIRCUIT
In order to get a relative sense of the success of our explanation by our criteria, we compare the results on a naïve circuit that consists of the Name Mover Heads (but no Backup Name Mover Heads), S-Inhibition Heads, two Induction Heads, two Duplicate Token Heads and two Previous Token Heads. This circuit has a faithfulness score of 0.1, a score comparable to our circuit's faithfulness score. However, contrary to our circuit, the naive circuit can be easily proven incomplete: by sampling random sets or by knocking out by classes, we see that $F(M \setminus K)$ is much higher than $F(C \setminus K)$ (Figure 5, left). Nonetheless, when we applied the greedy heuristic to optimize for the incompleteness score, both circuits have similarly large incompleteness scores. Thus, we conclude that our worst-case completeness criterion was too high a bar, which future work could use as a high standard to validate circuit discovery.
5 DISCUSSION
In this work, we isolated, understood and validated a set of attention heads in GPT-2 small composed in a circuit that identifies indirect objects. Along the way, we discovered interesting structures emerging from the model internals that complicated the study. For instance, we identified heads compensating for the loss of function of other heads, and heads contributing negatively to the next-token prediction. Early results suggest that the latter phenomenon occurs for other tasks beyond IOI (see Appendix F).
However, our work also has several limitations. First, despite the detailed analysis presented here, we do not understand several components. Those include the attention patterns of the S-Inhibition Heads, and the effect of MLPs and layer norms. Second, the number of parameters in GPT-2 small is orders of magnitude away from state-of-the-art transformer language models. A future challenge is to scale this approach to these larger models. Thirdly, we only looked at the difference in average metric (logit difference) between the circuit and the model in order to compare how they both did the IOI task (Section 4). Looking at the average difference in metric between the circuit and the model on individual examples would be a more stringent way to compare them, but it had too much variability to help us find a circuit. Fourthly, the definition of the task is limited: we only measure a fraction of the prediction made by the model, and do not study cases where the model is not performing IOI. Finally, more work is needed to validate the structural validation criterion we introduce here.
We hope that our work spurs further efforts in mechanistic explanations of larger language models
computing different natural language tasks, with the eventual goal of understanding full language
model capabilities.
REFERENCES
Boaz Barak, Benjamin L Edelman, Surbhi Goel, Sham Kakade, Eran Malach, and Cyril Zhang. Hidden progress in deep learning: SGD learns parities near the computational limit. arXiv preprint arXiv:2207.08799, 2022.
Tolga Bolukbasi, Adam Pearce, Ann Yuan, Andy Coenen, Emily Reif, Fernanda B. Viégas, and Martin Wattenberg. An interpretability illusion for BERT. CoRR, abs/2104.07143, 2021. URL https://arxiv.org/abs/2104.07143.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 1877–1901. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.
Guy Dar, Mor Geva, Ankit Gupta, and Jonathan Berant. Analyzing transformers in embedding space. arXiv preprint arXiv:2209.02535, 2022.
Nelson Elhage, Neel Nanda, Catherine Olsson, Tom Henighan, Nicholas Joseph, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Deep Ganguli, Zac Hatfield-Dodds, Danny Hernandez, Andy Jones, Jackson Kernion, Liane Lovitt, Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish, and Chris Olah. A mathematical framework for transformer circuits. Transformer Circuits Thread, 2021. https://transformer-circuits.pub/2021/framework/index.html.
Matthew Finlayson, Aaron Mueller, Sebastian Gehrmann, Stuart Shieber, Tal Linzen, and Yonatan Belinkov. Causal analysis of syntactic agreement mechanisms in neural language models, 2021. URL https://arxiv.org/abs/2106.06087.
Atticus Geiger, Hanson Lu, Thomas F Icard, and Christopher Potts. Causal abstractions of neural networks. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, 2021. URL https://openreview.net/forum?id=RmuXDtjDhG.
Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. Transformer feed-forward layers are key-value memories. arXiv preprint arXiv:2012.14913, 2020.
Dan Hendrycks and Mantas Mazeika. X-risk analysis for AI research. arXiv, abs/2206.05862, 2022.
Evan Hernandez, Sarah Schwettmann, David Bau, Teona Bagashvili, Antonio Torralba, and Jacob Andreas. Natural language descriptions of deep visual features. In International Conference on Learning Representations, 2021.
Sarthak Jain and Byron C. Wallace. Attention is not Explanation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 3543–3556, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1357. URL https://aclanthology.org/N19-1357.
Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. Locating and editing factual associations in GPT. arXiv preprint arXiv:2202.05262, 2022.
Jesse Mu and Jacob Andreas. Compositional explanations of neurons. Advances in Neural Information Processing Systems, 33:17153–17163, 2020.
Neel Nanda and Tom Lieberum. A mechanistic interpretability analysis of grokking, 2022. URL https://www.alignmentforum.org/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking.
nostalgebraist. interpreting GPT: the logit lens, 2020. URL https://www.lesswrong.com/posts/AcKRB8wDpdaN6v6ru/interpreting-gpt-the-logit-lens.
Chris Olah. Mechanistic interpretability, variables, and the importance of interpretable bases. https://www.transformer-circuits.pub/2022/mech-interp-essay, 2022.
Chris Olah, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter. Zoom in: An introduction to circuits. Distill, 2020. doi: 10.23915/distill.00024.001. https://distill.pub/2020/circuits/zoom-in.
Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, et al. In-context learning and induction heads. arXiv preprint arXiv:2209.11895, 2022.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019.
Tilman Räuker, Anson Ho, Stephen Casper, and Dylan Hadfield-Menell. Toward transparent AI: A survey on interpreting the inner structures of deep neural networks, 2022. URL https://arxiv.org/abs/2207.13243.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008, 2017.
Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Yaron Singer, and Stuart Shieber. Investigating gender bias in language models using causal mediation analysis. Advances in Neural Information Processing Systems, 33:12388–12401, 2020.
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent abilities of large language models. ArXiv, abs/2206.07682, 2022.
A IOI TEMPLATES
We list all the templates we used in Table 7. Each name was drawn from a list of 100 English first names, while the place and the object were chosen from a hand-made list of 20 common names. All the words chosen were one token long to ensure proper sequence alignment when computing the mean activations.
Templates in $p_{IOI}$
Then, [B] and [A] went to the [PLACE]. [B] gave a [OBJECT] to [A]
Then, [B] and [A] had a lot of fun at the [PLACE]. [B] gave a [OBJECT] to [A]
Then, [B] and [A] were working at the [PLACE]. [B] decided to give a [OBJECT] to [A]
Then, [B] and [A] were thinking about going to the [PLACE]. [B] wanted to give a [OBJECT] to [A]
Then, [B] and [A] had a long argument, and afterwards [B] said to [A]
After [B] and [A] went to the [PLACE], [B] gave a [OBJECT] to [A]
When [B] and [A] got a [OBJECT] at the [PLACE], [B] decided to give it to [A]
When [B] and [A] got a [OBJECT] at the [PLACE], [B] decided to give the [OBJECT] to [A]
While [B] and [A] were working at the [PLACE], [B] gave a [OBJECT] to [A]
While [B] and [A] were commuting to the [PLACE], [B] gave a [OBJECT] to [A]
After the lunch, [B] and [A] went to the [PLACE]. [B] gave a [OBJECT] to [A]
Afterwards, [B] and [A] went to the [PLACE]. [B] gave a [OBJECT] to [A]
Then, [B] and [A] had a long argument. Afterwards [B] said to [A]
The [PLACE] [B] and [A] went to had a [OBJECT]. [B] gave it to [A]
Friends [B] and [A] found a [OBJECT] at the [PLACE]. [B] gave it to [A]
Figure 7: Templates used in the IOI dataset. All templates in the table fit the ’BABA’ pattern, but
we also use templates that fit the ‘ABBA’ pattern as well (not included for simplicity).
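A small sketch of dataset generation from one BABA template is shown below; the name, place and object lists are tiny stand-ins for the paper's 100 names and 20 places/objects.

```python
# Toy sketch: fill a BABA template with random single-token names, a place and an object.
import random

TEMPLATE = "Then, [B] and [A] went to the [PLACE]. [B] gave a [OBJECT] to"
NAMES = ["Mary", "John", "Alice", "Brian", "Amy"]
PLACES = ["store", "station", "school"]
OBJECTS = ["drink", "ring", "book"]

def sample_ioi_prompt():
    a, b = random.sample(NAMES, 2)            # A is the indirect object (IO), B is the subject (S)
    prompt = (TEMPLATE
              .replace("[A]", a).replace("[B]", b)
              .replace("[PLACE]", random.choice(PLACES))
              .replace("[OBJECT]", random.choice(OBJECTS)))
    return prompt, a                          # the correct completion is " " + IO

print(sample_ioi_prompt())
```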
B BACKUP NAME MOVER HEADS
Here we discuss in more detail the discovery of the Backup Name Mover Heads. As shown in Figure 8, knocking out the three main Name Mover Heads doesn't leave the rest of the heads in a similar state as before. They seem to "compensate" for the loss of function from the Name Mover Heads such that the logit difference is only 10% lower. We observe that the Negative Name Mover Heads write less negatively in the direction of $W_U[IO] - W_U[S]$ (10.7 even writes positively in this direction afterwards), while other heads that wrote slightly along $W_U[IO] - W_U[S]$ before the knockout become the main contributors. Both the reason and the mechanism of this compensation effect are still unclear; we think that this could be an interesting phenomenon to investigate in future work. Among those last categories, we identify S-Inhibition Heads and a set of other heads that we called Backup Name Mover Heads. We arbitrarily chose to keep the eight heads that were not part of any other groups and wrote in the direction of $W_U[IO] - W_U[S]$ above the threshold of 0.05.
In Figure 9 we analyze the behavior of these newly identified heads with the same techniques used for Name Mover Heads. They can be grouped into four categories:
• 3 heads (10.1, 10.10, and 10.6) that behave similarly to Name Mover Heads according to their attention patterns and to scatter plots of attention vs. the dot product of their output with WU[IO]−WU[S] (e.g. 10.10).
• 3 heads (10.2, 11.9, 11.3) that pay equal attention to S1 and IO and write both of them (e.g. 10.2 in Figure 9).
• One head, 11.2, that pays more attention to S1 and writes preferentially in the direction of WU[S].
• One head, 9.7, that pays attention to S2 and writes negatively.
We did not thoroughly investigate this diversity of behavior; more work could be done to precisely describe these heads. However, these heads are also the ones with the least individual importance for the task (as shown by their minimality scores in Figure 6). The exact choice of Backup Name Mover Heads does not significantly change the behavior of the circuit.
[Figure 8 plots: per-head projection of head outputs onto the IO−S unembedding direction, before (left) and after (right) knocking out the Name Mover Heads; heads are colored by class (Name Mover, Backup Name Mover, Negative, S2-Inhibition, None).]
Figure 8: Discovery of the Backup Name Mover Heads. After knock-out of the Name Mover Heads (right), some heads write more strongly in the WU[IO] or WU[S] direction than before (left). We also observe that the negative heads seem to be inhibited by this operation.
[Figure 9 plots: for four Backup Name Mover Heads (10.10, 11.2, 9.7, 10.2), value-weighted attention patterns on the sample sequence "When Jacob and Scott got a drink at the school, Jacob decided to give it to Scott", and scatter plots of the projection of the head output along the name embedding vs. attention probability.]
Figure 9: Four examples of Backup Name Mover Heads. Left: attention probability vs projection
of the head output along WU[IO]orWU[S]respectively. Right: Attention pattern on a sample
sequence.
Distribution                                     Logit difference   IO probability   Proportion of S logit greater than IO
pIOI                                             3.55               0.49             0.7%
Additional occurrence of S (natural sentence)    3.64               0.59             0.4%
Additional occurrence of IO (natural sentence)   1.23               0.36             23.4%
Figure 10: Summary of GPT-2 performance metrics on the IOI task on different datasets. In row order: pIOI; the dataset where we added an occurrence of S (so S appears three times in the sentence); and the adversarial dataset with a duplicated IO in natural sentences. IO probability refers to the probability the model places on the IO token (computed from the logits).
C DESIGNING ADVERSARIAL EXAMPLES
As argued in Räuker et al. (2022), one way to evaluate the knowledge gained by interpretability work is to use it for downstream applications such as predicting out-of-distribution behavior. In this section, we do this by using knowledge of the circuit to construct simple adversarial examples for the model.
As presented in Section 3, the model relies on duplicate detection to differentiate between S and
IO. Motivated by this, we constructed passages where both the S and IO tokens are duplicated. An
example is “John and Mary went to the store. Mary had a good day. John gave a bottle of milk
to”; see Appendix D for full details. We find that this significantly reduces the logit difference and
causes the model to predict S over IO 23% of the time (Figure 10).
To ensure that the observed effect is not an artifact of the additional sentences, we included a control
dataset using the same templates, but where the middle sentence contains S instead of IO. In these
sentences, S appears three times in total and IO only appears once. On this distribution, the model
has an even higher logit difference than on pIOI, and predicts S over IO only 0.4% of the time.
Limitations of the attack. Despite being inspired by our understanding of the circuit, these examples are simple enough that they could have been found without our circuit with enough effort. Moreover, we do not have a full understanding of the mechanisms at play in these adversarial examples. For instance, the S-Inhibition Heads attend not only to S2, but also to the second occurrence of IO. As this pattern is present neither in pIOI nor in ABC, it is beyond the analysis presented in Section 3. The study of the behavior of the circuit on these adversarial examples could be a promising area for future work.
D TEMPLATES FOR ADVERSARIAL EXAMPLES
The design of adversarial examples relies on adding a duplicate IO to the sentences. To this end, we used a modification of the templates described in Appendix A. We added an occurrence of [A] in the form of a natural sentence, independent of the context. The list of sentences is shown in Figure 11.
[A] had a good day.
[A] was enjoying the situation.
[A] was tired.
[A] enjoyed being with a friend.
[A] was an enthusiast person.
Figure 11: Templates for the natural sentences used in the generation of adversarial examples. The
sentences were chosen to be independent of the context.
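A minimal sketch of how such adversarial prompts (and their control counterparts with a duplicated S) can be assembled, reusing the kind of filler sentences listed in Figure 11; the helper below is illustrative rather than the authors' actual generation code:

```python
import random

NATURAL_SENTENCES = [
    "{name} had a good day.",
    "{name} was tired.",
    "{name} enjoyed being with a friend.",
]

def make_duplicated_prompt(io, s, place, obj, duplicate="IO"):
    """Insert a context-independent sentence mentioning IO (adversarial case)
    or S (control case) between the two clauses of an IOI template."""
    name = io if duplicate == "IO" else s
    filler = random.choice(NATURAL_SENTENCES).format(name=name)
    return f"Then, {s} and {io} went to the {place}. {filler} {s} gave a {obj} to"

print(make_duplicated_prompt("Mary", "John", "store", "bottle of milk"))
# e.g. "Then, John and Mary went to the store. Mary was tired. John gave a bottle of milk to"
```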
E GPT-2 SMALL FULL ARCHITECTURE
Here we define all components of the GPT-2 Architecture, including those we don’t use in the main
text. GPT-2 small has the following hyperparameters:
• N: number of input tokens.
• V: vocabulary of tokens.
• d: residual stream dimension.
• L: number of layers.
• H: number of heads per layer.
• D: hidden dimension of the MLPs.
It uses layer norms, the non-linear function
$$\mathrm{LN}(x) \stackrel{\text{def}}{=} \frac{x - \bar{x}}{\sqrt{\sum_i (x_i - \bar{x})^2}}, \qquad (1)$$
where the mean and the sum of squared deviations from the mean are taken over the $d$ components of each of the $N$ token vectors.
In GPT-2 the MLPs all have one hidden layer of dimension Dand use the GeLU non-linearity.
We addressed the parametrization of each attention head in the main text, and cover the technical details of the $W_{QK}$ and $W_{OV}$ matrices here: the attention pattern is $A^{i,j} = \mathrm{softmax}(x^T W^{i,j}_{QK} x)$, where the softmax is taken for each token position and is unidirectional (causally masked). We then have $h^{i,j}(x) \stackrel{\text{def}}{=} (A^{i,j} \otimes W^{i,j}_{OV})\,x$.
Algorithm 1 GPT-2.
Require: Input tokens T; returns logits for the next token.
1: w ← one-hot embedding of T
2: x_0 ← W_E w (sum of token and position embeddings)
3: for i = 0 to L do
4:   y_i ← 0 ∈ R^{N×d}
5:   for j = 0 to H do
6:     y_i ← y_i + h^{i,j}(x_i), the contribution of attention head (i, j)
7:   end for
8:   y'_i ← m_i(x_i), the contribution of the MLP at layer i
9:   x_{i+1} ← x_i + y_i + y'_i (update the residual stream)
10: end for
11: return W_U ∘ M ∘ LN ∘ x_L
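For readers who prefer code, the following is a schematic PyTorch-style translation of Algorithm 1, viewing each layer as attention heads and an MLP that add their outputs into the residual stream. It mirrors the simplified algorithm above (per-block layer norms are omitted) and is not an exact GPT-2 implementation; the argument names are illustrative.

```python
import torch

def gpt2_forward(tokens, W_E, W_pos, blocks, ln_final, W_U):
    """tokens: (N,) token ids; blocks[i] is assumed to expose `heads` (a list of
    per-head functions h_{i,j}) and `mlp` (m_i); returns next-token logits."""
    x = W_E[tokens] + W_pos[: tokens.shape[-1]]   # x_0: token + position embeddings, (N, d)
    for block in blocks:
        y = torch.zeros_like(x)
        for head in block.heads:                  # each head reads x_i and writes into the stream
            y = y + head(x)
        x = x + y + block.mlp(x)                  # residual update: x_{i+1} = x_i + y_i + y'_i
    return ln_final(x) @ W_U                      # final layer norm, then unembedding
```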
F ANALYSIS ON SEQUENCES OF RANDOM TOKENS
We run GPT-2 small on sequences of 100 tokens sampled uniformly at random from GPT-2's token vocabulary. Each sequence A was duplicated to form AA, a sequence twice as long whose first and second halves are identical. On this dataset, we computed three scores from the attention patterns of the attention heads:
• The duplicate token score: for each token T_i in the second half of a sequence S, we average the attention probability from T_i to its previous occurrence in the first half of S (i.e. T_{i−100}).
• The previous token score: we average the attention probability on the off-diagonal, i.e. the attention from the token at position i to position i−1.
• The induction score: the average attention probability from T_i to the token that comes after the first occurrence of T_i (i.e. T_{i−99}).
These three scores are depicted in Figure 12 for all attention heads. We can identify 3.0 and 0.1 as duplicate token heads that also appear in our circuit; 5.5 and 6.9 have a high induction score and were also identified as Induction Heads in our investigation; and 4.11 and 2.2 have a high previous token score. Note that the heads identified here are also the ones with the highest influence in the patching experiment shown in Figure 4.
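These three scores are straightforward to compute once a head's attention pattern on a duplicated sequence has been extracted (for example via `output_attentions=True` in the HuggingFace `transformers` GPT-2 model). A minimal sketch, assuming `attn` is the (2n, 2n) attention matrix of one head and n = 100 as in the text:

```python
import numpy as np

def head_scores(attn, n=100):
    """attn: (2n, 2n) attention pattern of one head on a duplicated random sequence AA.
    Returns (duplicate_score, previous_score, induction_score) as defined above."""
    i = np.arange(n, 2 * n)                                   # token positions in the second half
    duplicate = attn[i, i - n].mean()                         # T_i -> its first occurrence T_{i-n}
    rows, cols = np.arange(1, 2 * n), np.arange(0, 2 * n - 1)
    previous = attn[rows, cols].mean()                        # T_i -> T_{i-1} (off-diagonal)
    induction = attn[i, i - n + 1].mean()                     # T_i -> token after the first occurrence
    return duplicate, previous, induction
```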
Induction Heads. Olsson et al. (2022) define an Induction Head according to its behavior on repeated sequences of random tokens. The attention head must demonstrate two properties. i) Prefix-matching property: the head attends to [B] from the last [A] on patterns like [A][B]...[A]. ii) Copy property: the head contributes positively to the logit of [B] on the pattern [A][B]...[A].
Figure 12: Sum of attention probabilities on positions determined by the role, on sequences of random tokens. Left: duplicate token score, the average attention probability from a token to its previous occurrence. Center: previous token score, the average off-diagonal attention probability. Right: induction score, the average attention probability from the second occurrence of [A] to [B] on [A][B]...[A].
[Figure 13 plot: per-head dot product of head output with the next-token embedding on repeated sequences, with heads grouped into Name Mover Heads, Backup Name Mover Heads, Negative Name Mover Heads, Induction Heads from our circuit, other induction heads, and others.]
Figure 13: Contribution to the next-token prediction per head on repeated sequences of tokens. The heads are ordered by decreasing absolute value of contribution. Black contour: heads whose attention patterns demonstrate the prefix-matching property.
In the IOI task, we identify these heads according to their attention patterns, which demonstrate the prefix-matching property. Here, we investigate their copy property, which is not useful in the context of IOI: outputting the token that follows S2 is of no help in identifying IO.
As presented above, 5.5 and 6.9 are among the 5 heads with the highest induction score. This
validates their prefix-matching property.
To check their copy property, we computed the dot product ⟨h_i(X), W_U[B]⟩ between the output of the head h_i on sequence X and the embedding of the token [B], on repeated sequences of random tokens. The results are shown in Figure 13. The two Induction Heads (5.5 and 6.9) appear among the 20 heads contributing the most to the next-token prediction, validating their copy property.
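A minimal sketch of this copy-property check for a single position, assuming the head's output vector has already been cached (the variable names are hypothetical):

```python
import torch

def copy_score(head_out, W_U, b_token_id):
    """⟨h_i(X), W_U[B]⟩: dot product of a head's cached output vector (d_model,)
    with the unembedding column of the correct next token [B]. A positive value
    means the head pushes the logits towards predicting [B]."""
    return head_out @ W_U[:, b_token_id]
```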
We also noticed that the majority of the Negative, Backup, and regular Name Mover Heads appear to write in the next-token direction on repeated sequences of random tokens, with Negative Name Mover Heads contributing negatively. This suggests that these heads are involved beyond the IOI task in producing next-token predictions that rely on contextual information. Moreover, ablating the output of the three Name Mover Heads by patching their outputs results in a 26% increase in average loss on the last 99 tokens (from 0.15 to 0.19), showing their importance on tasks outside IOI.
G DISENTANGLING FEATURES IN THE OUTPUT OF S-INHIBITION HEADS
In Section 3.2, we discovered that S-Inhibition Heads are responsible for the Name Mover Heads’
specific attention on the IO token. In this appendix, we explore which properties of the input affect
the S-inhibition heads’ outputs.
We present evidence that they output token signals (information about the value of the token S) and positional signals (related to the position S1), and that the latter are the more important.
To disentangle the two effects, we designed a series of counterfactual datasets where only some signals are present and some are inverted with respect to the original dataset. We then conducted patching experiments where the outputs of S-Inhibition Heads are computed from these datasets.
This enables us to quantify in isolation the impact of each signal on the final logit difference.
We constructed six datasets by combining three transformations of the original pIOIdistribution.
• Random name flip: we replace the names in a given sentence with random names, but keep the same positions for all names. Moreover, each occurrence of a name in the original sentence is replaced by the same random name. When we patch the outputs of S-Inhibition Heads from this sentence, only positional signals are present; the token signals are unrelated to the names of the original sequence.
• IO↔S1 flip: we swap the positions of IO and S1. The output of S-Inhibition Heads will contain correct token signals (the subject of the second clause is the same) but inverted positional signals (because the positions of IO and S1 are swapped).
• IO←S2 replacement: we make IO the subject of the sentence and S the indirect object. In this dataset, both token signals and positional signals are inverted.
We can also compose these transformations. For instance, we can create a dataset with no token signals and inverted positional signals by applying the IO↔S1 flip on the dataset with random names. In total, we can create all six combinations of original, inverted, or uncorrelated token signal with the original and inverted positional signal.
From each of those six datasets, we patched the output of S-Inhibition heads and measured the logit
difference. The results are presented in Figure 14.
These results can be summarized as the sum of two effects. Define the variable S_tok to be 1 if the token signal is the original, 0 when uncorrelated, and −1 when inverted; similarly, define S_pos to be 1 if the positional signal is the original and −1 if inverted. Figure 14 then suggests that the logit difference can be well approximated by 2.31 S_pos + 0.99 S_tok, with a mean error of 7% relative to the baseline logit difference.
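The paper does not spell out the fitting procedure, but coefficients of this form can be recovered from the six measurements in Figure 14 with an ordinary least-squares fit; the sketch below is one plausible reconstruction, not the authors' code:

```python
import numpy as np

# (S_pos, S_tok) for the six datasets and the corresponding logit differences from Figure 14.
X = np.array([[1, 1], [1, 0], [1, -1], [-1, 1], [-1, 0], [-1, -1]], dtype=float)
y = np.array([3.55, 2.45, 1.77, -0.99, -1.96, -3.16])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit, no intercept
mean_rel_err = np.abs(X @ coef - y).mean() / 3.55
print(coef, mean_rel_err)                      # roughly [2.31, 0.99] and ~0.07-0.08
```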
For instance, when both the positional and token signals are inverted, the logit difference is the opposite of the baseline. This means that the S token is predicted more strongly than the IO token, and as strongly as IO was before patching. In this situation, due to the contradictory information contained in the output of the S-Inhibition Heads, the Name Movers attend to and copy the S1 token instead of the IO token (see Figure 15, right). In the intermediate cases where only one of the signals is modified, we observe a partial effect compared to the fully inverted case (e.g. Figure 15, left). The effect size depends on the altered signal: positional signals are more important than token signals.
Can we be more specific as to what the token and positional signals are? Unfortunately, we do not have a complete answer, but we see this as one of the most interesting further directions of our work. We expect that the majority of the positional information is about the relative positional embedding between S1 and S2 (such pointer-arithmetic behavior has already been observed in Olsson et al. (2022)).
                               Original positional signal   Inverted positional signal
Original S token signal        3.55 (baseline)              -0.99
Random S token signal          2.45                         -1.96
S↔IO inverted token signal     1.77                         -3.16
Figure 14: Logit difference after patching S-Inhibition Heads from signal-specific datasets. The effect on the logit difference can be decomposed as a sum of the effects of the position and token signals.
[Figure 15 plots: average attention probability of Name Mover Heads on the IO, S, and S2 tokens, on pIOI and after patching S-Inhibition Heads from each signal-specific dataset.]
Figure 15: Name Mover Heads' attention probability before and after patching S-Inhibition Heads from signal-specific datasets. Left: patching from the dataset generated by the random name flip (same positional signal, random token signal). Right: patching from the dataset generated by IO←S2 replacement (inverted positional signal, inverted token signal). Black bars represent the standard deviation.
When patching in S-Inhibition outputs from a distribution where the prefixes of the sentences are longer (but the distance between S1 and S2 is constant), the logit difference does not change (3.56 before patching vs. 3.57 after). This suggests that the positional signal does not depend on the absolute position of the tokens, as long as the relative position of S1 and S2 stays the same.
H LAYER NORM AND THE RESIDUAL STREAM
The attention heads and MLPs in GPT-2 small write into the residual stream. Suppose $x_{12}$ is the final state of the residual stream after the 12 layers. This is then converted into logits via $W_U \circ M \circ \mathrm{LN}(x_{12})$, where LN is defined in Appendix E, $M$ is the linear transformation of the layer norm operation, and $W_U$ is the unembedding matrix.
In order to attribute the extent to which an attention head $h$ writes in a direction $W_U[T]$, where $T$ is a token (always IO or S in our case), we can't simply compute $\langle M \circ \mathrm{LN} \circ h^{i,j}(X),\, W_U[T] \rangle$, as the scaling factor that is actually used is $\sqrt{\sum_i (x_{12,i} - \bar{x}_{12})^2}$. Therefore LN in the main text uses this scaling factor:
$$\mathrm{LN}(h) \stackrel{\text{def}}{=} M \circ \frac{h - \bar{h}}{\sqrt{\sum_i (x_{12,i} - \bar{x}_{12})^2}} \qquad (2)$$
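In code, this "frozen layer norm" attribution might look like the sketch below, where the head output is centered and divided by the scale of the full final residual stream rather than being re-normalized on its own. All tensor names are hypothetical placeholders; M is assumed to be the affine part of the final layer norm, applied as a callable, and the eps term is an added numerical-stability assumption.

```python
import torch

def frozen_ln_projection(head_out, x_final, M, W_U, token_id, eps=1e-5):
    """Projection of a head's output onto W_U[T] using Eq. (2).
    head_out, x_final: (d_model,) cached vectors at the END position;
    M: callable applying the linear part of the final layer norm;
    W_U: (d_model, vocab) unembedding matrix."""
    centered = head_out - head_out.mean()
    scale = torch.sqrt(((x_final - x_final.mean()) ** 2).sum() + eps)  # scale from the FULL stream
    return M(centered / scale) @ W_U[:, token_id]
```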
I ROLE OF MLPS IN THE TASK
In the main text, we focused our investigation on attention heads. Since they are the only modules able to move information across token positions – a crucial component of the IOI task – they were our main subject of interest. However, MLPs can still play a significant role in structuring the residual stream at a given position. We explored this possibility by performing knock-outs of the MLP layers (Figure 16). We observe that MLP0 has a significant influence on the logit difference after knock-out (−100% relative variation), but the other layers don't seem to play a big role. We hypothesize that MLP0 performs low-level token processing that later layers rely on.
Moreover, we also investigated the writing of MLPs along the WU[IO]−WU[S] direction. As shown in Figure 16 (bottom), they write negligibly in this direction compared to attention heads (Figure 3).
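The MLP knock-out described above can be implemented with standard PyTorch forward hooks on the HuggingFace GPT-2 model, replacing a layer's MLP output with a precomputed mean over the reference distribution. A minimal sketch (the mean tensor and its computation are assumed, not shown; it presumes prompts of a fixed template length):

```python
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")

def mean_ablate_mlp(layer, mean_mlp_out):
    """Knock out the MLP at `layer` by replacing its output with a precomputed
    mean activation (mean_mlp_out: hypothetical (seq_len, d_model) tensor)."""
    def hook(module, inputs, output):
        return mean_mlp_out.expand_as(output)   # same replacement for every batch element
    return model.transformer.h[layer].mlp.register_forward_hook(hook)

# Usage sketch: handle = mean_ablate_mlp(0, mean_mlp_out); rerun the IOI prompts,
# recompute the logit difference, then handle.remove() to restore the model.
```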
v        Class                   K ∪ {v}                                  F(C \ (K ∪ {v}))   F(C \ K)
(9, 9) Name Mover [(9, 9)] 2.78 3.14
(10, 0) Name Mover [(9, 9), (10, 0)] 2.43 2.78
(9, 6) Name Mover [(9, 9), (10, 0), (9, 6)] 2.77 2.43
(10, 7) Negative Name Mover All Negative Name Mover Heads 5.11 3.84
(11, 10) Negative Name Mover All Negative Name Mover Heads 5.11 4.06
(7, 3) S-Inhibition All S-Inhibition Heads 0.33 1.15
(7, 9) S-Inhibition All S-Inhibition Heads 0.33 1.12
(8, 6) S-Inhibition All S-Inhibition Heads 0.33 1.10
(8, 10) S-Inhibition All S-Inhibition Heads 0.33 0.55
(5, 5) Induction Induction Heads and Negative Heads 1.06 3.95
(5, 8) Induction All Induction Heads 1.06 2.58
(5, 9) Induction All Induction Heads 4.40 5.11
(6, 9) Induction Induction Heads and Negative Heads 4.76 5.11
(0, 1) Duplicate Token All Duplicate Token Heads 1.14 2.52
(0, 10) Duplicate Token All Duplicate Token Heads 1.14 2.29
(3, 0) Duplicate Token All Duplicate Token Heads 1.14 1.65
(2, 2) Previous Token All Previous Token Heads 2.03 2.80
(2, 9) Previous Token All Previous Token Heads 2.03 2.42
(4, 11) Previous Token All Previous Token Heads 2.03 2.27
(10, 10) Backup Name Mover All NMs and previous Backup NMs 2.40 2.63
(10, 2) Backup Name Mover All NMs and previous Backup NMs 0.89 1.09
(11, 2) Backup Name Mover All NMs and previous Backup NMs 0.72 0.89
(10, 6) Backup Name Mover All NMs and previous Backup NMs 2.63 2.77
(10, 1) Backup Name Mover All NMs and previous Backup NMs 1.34 1.47
(9, 7) Backup Name Mover All NMs and previous Backup NMs 0.85 1.02
(11, 9) Backup Name Mover All NMs and previous Backup NMs 1.02 1.13
(11, 3) Backup Name Mover [(9, 9), (10, 0), (9, 6), (10, 10), (11, 3)] 2.53 2.59
Figure 17: K sets for minimality for each v.
[Figure 16 plots: top, relative variation in logit difference after knocking out each MLP layer (0–11) at all tokens; bottom, normalized dot product of each MLP layer's output with the IO−S unembedding direction.]
Figure 16: Top: Relative variation in logit difference from knocking out MLP layers. Only MLP0 causes a significant decrease in logit difference after knock-out. Bottom: How much each MLP layer writes along the WU[IO]−WU[S] direction.
J MINIMALITY SETS
The sets that were found for the minimality tests are listed in Figure 17.
K found by greedy optimization
(9, 9), (9, 6), (5, 8), (5, 5), (2, 2), (2, 9)
(9, 9), (11, 10), (10, 7), (8, 6), (5, 8), (4, 11)
(10, 7), (5, 5), (2, 2), (4, 11)
(9, 9), (11, 10), (10, 7), (11, 2), (3, 0), (5, 8), (2, 2)
Figure 18: 4 sets K found by the greedy optimization procedure on our circuit.
K GREEDY ALGORITHM
Algorithm 2 describes the procedure used to sample sets for checking the completeness criteria using greedy optimization. In practice, because the naïve and the full circuit are not of the same size, we chose k = 5 and k = 10 respectively to ensure a similar amount of stochasticity in the process. We ran the procedure 10 times and kept the 5 sets with the maximal incompleteness score (including the intermediate K).
Algorithm 2 The greedy sampling procedure for sets to validate the completeness criteria.
1: K ← ∅
2: for i = 1 to N do
3:   Sample a random subset V ⊆ C of k nodes uniformly.
4:   v_MAX ← argmax_{v∈V} |F(C \ (K ∪ {v})) − F(C \ K)|
5:   K ← K ∪ {v_MAX}
6: end for
7: return K
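A direct Python translation of Algorithm 2, where F is assumed to be a callable that evaluates the circuit metric (e.g. average logit difference) on a given set of nodes; sampling candidates from outside K is a small practical assumption not stated in the pseudocode:

```python
import random

def greedy_incompleteness_set(C, F, N, k):
    """Greedily grow a set K of nodes whose removal from the circuit C changes
    the metric F the most at each step (Algorithm 2)."""
    K = set()
    for _ in range(N):
        candidates = [v for v in C if v not in K]       # assumption: sample outside K
        V = random.sample(candidates, k)
        base = F(set(C) - K)
        v_max = max(V, key=lambda v: abs(F(set(C) - (K | {v})) - base))
        K.add(v_max)
    return K
```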
As visible in Figure 18, the sets found by the greedy search contain a combination of nodes from different classes. Nonetheless, the overlap between different K suggests that we are missing components of M that can take the place of Induction Heads or S-Inhibition Heads when some Name Mover Heads are knocked out.
L TECHNIQUES OVERVIEW
This work involved a variety of techniques that were required to explain model behavior.
• Knockouts:
We used knockouts in two different ways: knocking out singular components of models, and knocking out everything in the model except particular circuits. The former was somewhat useful, and the latter we found powerful.
– Knockout of single components: as an attribution method, knocking out singular components was not always as powerful as techniques such as projections, since the compensation (or backup) behavior of the Backup Name Mover Heads in this task allowed components to be knocked out with their true effect size masked.
– Knockouts of all components except a circuit: on the other hand, knocking out all components except a circuit enabled us to isolate behaviors in this task where behavior was sparse, and to check the components of our circuit while ignoring the vast majority of the network's components, making the work manageable.
What was very important for the success of the knockout and patching experiments was the choice of reference distribution for the knockout. The analysis in Appendix G shows how the specific choice of dataset is useful for understanding model components. For a more general knockout, the OpenWebText dataset, GPT's training data, can be used. However, we found that this led to noisier results (though our circuit components were still shown to be important when we used this ablation).
• Attention pattern analysis:
Using attention patterns to explain behavior is always worrying, due to the possibility that information has accumulated on a token primarily from previous tokens, or that the position receiving large attention isn't actually writing an important value into the residual stream. In our work, however, analyzing attention patterns was generally a necessary first step before further experiments could be run, and in this small model neither of these worrying cases generally arose.
• Patching:
Patching was an important method we used to verify causal explanations that were generally formed from correlational evidence. In this way our use case is similar to Finlayson et al. (2021). We were surprised, however, that patching generally gave a clear signal on the changes in behavior. This may be because we generally patched from inputs like the ABC distribution (which was successful for knockouts too). Therefore, keeping the context of the sentence templates may be generally useful. This could be either because the other words in the templates allow the model to realise that it should be doing IOI, or because introducing inputs from other distributions introduces noise that the model picks up on and uses, when this is not intended.
|
8caf04a7-4dcd-454c-a01f-c2bf17d27317
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Physics is Ultimately Subjective
N.B. A basic and perhaps obvious point that nonetheless I think people get confused about. Despite the somewhat provocative title, my goal is not to say anything new to the average Less Wrong reader, only to emphasize a point and try to explain it so that it can be clearly seen.
Summary: Physics, by which I mean models of how reality works at the most fundamental level, is a subjective endeavor. Physics seems to be objective, but that's because there's high intersubjective consensus about which models best explain and predict reality. Rounding this off to objective causes confusion, and the point generalizes for all seemingly objective things.
Art, and in particular modern art, is highly subjective. Some people are impressed by the artistry of paint splatters, blank canvases, and signed toilets, while others roll their eyes. People like different things and only sometimes agree, so we generally accept that art is subjective.
Assessing a work of art looks sort of like this:
A person sees some art and they subjectively judge it good or bad.
Physics, by contrast, seems totally objective. The world is how it is, and our models of it are good or bad insofar as they accurately and precisely describe the world. Whether or not you like general relativity, for example, has no bearing on whether it's a good theory. All that matters is how well general relativity explains and predicts what we observe.
The picture for physics might look like this:
Physics theories describe reality and are objectively good or bad.
But both of these pictures are wrong! The first leaves out the detail that art has to exist in reality—it's not art first, but atoms first, and it only becomes art when it's observed by someone who thinks of it as art. The second leaves out that theories of physics don't exist on their own, they exist in our heads. To suppose otherwise is to suppose the existence of a paradoxical view from nowhere. So really both pictures are the same picture:
Judgments are medi
|
8cffc4c4-2875-427f-b94e-df8c315223f8
|
trentmkelly/LessWrong-43k
|
LessWrong
|
FLI Podcast: The Precipice: Existential Risk and the Future of Humanity with Toby Ord
Toby Ord’s “The Precipice: Existential Risk and the Future of Humanity" has emerged as a new cornerstone text in the field of existential risk. The book presents the foundations and recent developments of this budding field from an accessible vantage point, providing an overview suitable for newcomers. For those already familiar with existential risk, Toby brings new historical and academic context to the problem, along with central arguments for why existential risk matters, novel quantitative analysis and risk estimations, deep dives into the risks themselves, and tangible steps for mitigation. "The Precipice" thus serves as both a tremendous introduction to the topic and a rich source of further learning for existential risk veterans. Toby joins us on this episode of the Future of Life Institute Podcast to discuss this definitive work on what may be the most important topic of our time.
Topics discussed in this episode include:
-An overview of Toby's new book
-What it means to be standing at the precipice and how we got here
-Useful arguments for why existential risk matters
-The risks themselves and their likelihoods
-What we can do to safeguard humanity's potential
You can find the page for this podcast here: https://futureoflife.org/2020/03/31/he-precipice-existential-risk-and-the-future-of-humanity-with-toby-ord/
Transcript:
Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. This episode is with Toby Ord and covers his new book “The Precipice: Existential Risk and the Future of Humanity.” This is a new cornerstone piece in the field of existential risk and I highly recommend this book for all persons of our day and age. I feel this work is absolutely critical reading for living an informed, reflective, and engaged life in our time. And I think even for those well acquainted with this topic area will find much that is both useful and new in this book. Toby offers a plethora of historical and academic context to the problem, ton
|
c6a01977-c007-4f5d-9114-34ea5eea4e3d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Does evolution select for mortality?
At a recent Reddit AMA, Eric Lander, a professor of biology who played an important part in the Human Genome Project, answered this question:
> Do you think immortatility is technically possible for human beings?
His response:
> I don't think immortality is technically possible -- evolution has installed many many mechanisms to ensure that organisms die and make room for the next generation. I bet it is going to be very hard to completely overcome all these mechanisms.
This seems to me, at first blush, to exhibit the Evolution of Species Fairy fallacy. Evolution doesn't work to benefit species, populations, or the "next generation". If a mutation arises that increases longevity, and has no other downsides, then animals with that mutation should become more common in the gene pool, because they die less often. I remember reading that the effect would not be very strong, because most animals don't die of old age. But why would there be the opposite effect?
I am loath to attribute a very basic error to a distinguished professor of biology. Is there another explanation? Is the claim that evolution selects for mortality true?
Note: Eric went on to add:
> I'm also not convinced immortality is such a good idea. A lot of human progress depends on having a new generation with new ideas. Immortality may equal stagnation.
This seems to be blatant rationalization of a preconceived idea that death is good. (I doubt he truly believes that extra progress is worth everybody dying.) So perhaps his first statement is also a form of rationalization. But it seems improbable to me that he would make such a statement about biology if he didn't think it well-founded. More likely there's something I'm misunderstanding.
ETA: one of the first Google results is this page at nature.com, The Evolution of Aging by Daniel Fabian, which goes into some depth on the subject. The bottom line is that it agrees with my expectation that evolution does not select for mortality. Choice quotes:
|
f60c008f-3df4-4832-9ad7-309a5e6dfbdc
|
StampyAI/alignment-research-dataset/aisafety.info
|
AI Safety Info
|
What is Infra-Bayesianism?
[Infra-Bayesianism](https://www.alignmentforum.org/s/CmrW8fCmSLK7E25sa) tries to solve the problem of [agent foundations](/?state=7782&question=What%20is%20%22agent%20foundations%22%3F). On a high level, we want to have a model of an agent, understand what it means for it to be aligned with us, and produce some desiderata for an artificial general intelligence (AGI) training setup such that it points at aligned AGIs. Without solving that, we’re in a situation analogous to the [rocket alignment problem](https://intelligence.org/2018/10/03/rocket-alignment/): imagine we want to launch a rocket to the Moon, we have lots of explosives, but we don’t have equations for gravity and only have some initial understanding of acceleration. Also, we don’t know where the Moon is.
Infra-Bayesianism tries to construct a realistic model of agents and a mathematical structure that would point at agents aligned with humans, such that these agents could be found by means of gradient descent.
With these goals, the research starts by solving some problems with traditional [reinforcement learning](https://www.alignmentforum.org/tag/reinforcement-learning) (RL) theory: for example, traditional RL agents, being a part of the universe, can't consider the actual universe in the set of their hypotheses in full detail, since they're smaller than the universe; a traditional Bayesian agent would have a hypothesis as a probability distribution over all possible worlds; but it's impossible for an agent made out of blocks in a part of a Minecraft world to assign probabilities to every possible state of the whole Minecraft world.
Infra-Bayesianism is a theory of imprecise probability that solves this problem of non-realizability by considering hypotheses in the form of [convex](https://en.wikipedia.org/wiki/Convex_set) sets of probability distributions; in practice, this means, for example, a hypothesis can be “every odd-positioned bit in the string of bits is 1”. (This is a set of probability distributions over all possible bit strings that only assign positive probabilities to strings that have 1s in odd positions; a mean of any two such probability distributions also doesn’t assign any probability to strings that have a 0 in an odd position, so it’s also from the set, so the set is convex.)
If a problem can be solved, but we can’t specify how we’d solve it given unbounded compute, we’re just confused about it. Going from thinking that chess was impossible for machines to understanding [minimax](https://www.google.com/url?q=https://en.wikipedia.org/wiki/Minimax&sa=D&source=editors&ust=1661633213196096&usg=AOvVaw3m8tD5QAEl-XXhvaH4d1v3) was a really good step forward for designing chess AIs, *even though* *minimax* *is* *completely* *intractable*.
Thus, we should seek to figure out how alignment might look in theory, and then try to bridge the theory-practice gap by making our proposal ever more efficient. The first step along this path is to figure out a universal RL setting that we can place our formal agents in, and then prove regret bounds in.
A key problem in doing this is embeddedness. AIs can't have a perfect self model — this would be like imagining your *entire* brain, inside your brain. There are finite memory constraints. IB allows agents to have abstract models of themselves, and thus works in an embedded setting.
[Infra-Bayesian Physicalism](https://www.alignmentforum.org/posts/gHgs2e2J5azvGFatb/infra-bayesian-physicalism-a-formal-theory-of-naturalized) (IBP) is an extension of this to reinforcement learning (RL). It allows us to
- Figure out what agents are running (by evaluating the counterfactual where the computation of the agent would output something different, and seeing if the physical universe is different).
- Give a program, classify it as an agent or a non agent, and then find its utility function.
Researcher Vanessa Kosoy uses this formalism to describe [PreDCA](https://www.alignmentforum.org/posts/dPmmuaz9szk26BkmD/vanessa-kosoy-s-shortform#vKw6DB9crncovPxED), an alignment proposal based on IBP. This proposal assumes that an agent is an IBP agent, meaning that it is an RL agent with fuzzy probability distributions (along with some other things). The general outline of this proposal is as follows:
1. Find all of the agents that preceded the AI
2. Discard all of these agents that are powerful / non-human like
3. Find the utility functions in the remaining agents
4. Use a combination of all of these utilities as the agent's utility function
Kosoy models an AI as a model-based RL system with a world model, a reward function, and a policy derived from its world model and reward function. [She claims that this avoids the sharp left turn](https://www.alignmentforum.org/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization). The generalization problems come from the world model, but this is dealt with by having an epistemology that doesn't contain [bridge rules](https://www.lesswrong.com/posts/ethRJh2E7mSSjzCay/building-phenomenological-bridges), and so the true world is the simplest explanation for the observed data.
It remains open to show that this proposal also solves inner alignment, but there is some chance that it does.
This approach deviates from MIRI's plan, which is to focus on a narrow task to perform the pivotal act, and then add corrigibility. Kosoy’s approach instead tries to directly learn the user's preferences, and optimize those.
|
b8427561-893a-404d-81be-51fadb6201bc
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
Lessons learned and review of the AI Safety Nudge Competition
**TL;DR: We ran a competition to encourage people to stop procrastinating on their AI Safety projects. We had 76 applicants out of which over 40% (31 participants) completed their goal by the end of October and were added to a draw to win a monetary prize. The vast majority of the participants that completed their goals found this competition to be useful. We learned that marketing is paramount for successfully running such a competition, that competitions of this kind would potentially be better run at EAG events and that they could be a good way of getting more people to subscribe to EA newsletters.**
**Introduction**
The [AI Safety Nudge Competition](https://forum.effectivealtruism.org/posts/c5SeLNpnHNNif6Doz/announcing-the-ai-safety-nudge-competition-to-help-beat) aims to encourage people to do things related to AI Safety today instead of procrastinating it into the future by allowing them to enter into a draw if they complete the goal that they set for themselves.
Participants defined a specific goal for themselves, examples include:
* Finish reading Superintelligence
* Finish writing up a relevant blog post
* Organise a local dinner for people interested in AI Safety
If they completed the goal they set out for themselves, they were entered into the draw:
* ten prizes of $100
* two prizes of $100 specifically for Australia and New Zealand
You can see the list of winners here.
**Downside risks**
We started by asking the applicants if they think their project could potentially have downside risks such as:
* Outreach to famous people, politicians, the media, children, high-net worth individuals, top AI researchers
* Projects that could be controversial, come with significant down-side risks or could produce negative PR
* AI Safety projects with high capabilities externalities
Only 1.3% of the applicants were uncertain about the downside risks of their project and the vast majority self-reported to be certain their project doesn’t fall in this category.

**Counterfactual impact**
We continued by asking applicants to rate from a scale from 0 to 10 how likely they thought they were able to achieve their goal by the end of October if they did and did not enter this competition to gauge how strong of a nudge they thought the competition would offer. We made it clear that this was only for informational purposes and that it did not affect their application.

Average: 7.73

Average: 4.21
The average response to how likely they think they would be to achieve their goal by the end of October if they entered the competition is almost 2 times bigger (1.83x) than the average response if they wouldn’t enter the competition.
We asked the same questions in the form they had to complete after they finished their project to be added to the prize draw to compare the results.

Average: 7.03

Average: 4.54
The average response to how likely they think they would have been to achieve their goal by the end of October if they entered the competition is 1.54 times bigger than the average response if they wouldn’t have entered the competition.
These results indicate that the participants that finished their project found this competition to be useful for achieving their goal.
Participants who didn’t finish their project in time were encouraged to fill in a form to provide more details.

Most of the people that filled in that form mentioned procrastination (33.3%) as the main reason they didn’t manage to finish their project in time while the second biggest reason (22.2%) is that they were busier than they expected.
The same questions related to the expected counterfactual impact of entering the competition were posed to the participants that didn’t finish their project.

Average: 5.11

Average: 1.44
The average response to how likely they think they would have been to achieve their goal by the end of October if they entered the competition is over 3 and a half times bigger (3.54x) than the average response if they wouldn’t have entered the competition.
We also asked them how we could improve:
* More frequent reminders, maybe pairing people up for goal-buddies & weekly 15m check-ins -- would have made me more accountable
* Maybe more frequent reminders? Even though I ultimately slowly gave up on my goal, every time I received an email from you, I had a slight boost in motivation.
* I think the single reminder email was good!
**Newsletter**
At the end of the registration form we asked the applicants if they would like to subscribe to our newsletter and 60.5% responded positively.

This indicates that running competitions of this kind could also be a very useful way of getting subscribers to EA newsletters. [[1]](#fnrh3ijor8ybc)
**Marketing**
We underestimated the importance of marketing for a competition of this kind as we announced the competition before we had the marketing materials ready and this resulted in a slow start. As a result of this, we quickly created a poster and a pitch for the competition and shared it in various AI Safety groups on Slack, Facebook, Discord and Twitter, after which more people started applying. Running a competition like this could have a wider impact if it is announced at EAG events or after big EA book launches.
We sent an email in the middle of October and another one a week before the end of October with science-backed productivity tips and a reminder for the participants to complete their project.
**Main lessons**
* Prepare marketing materials before launch
* Announce the competition earlier (for ex newsletters)
* Look for big EA events which could bring in a lot participants
-What We Owe the Future/other big EA books launches
-EAG conferences
* This competition could be scalable and reproducible (create a template)
-Google folder with files that people can copy[[2]](#fnnigxzsw3myr)
-Documents with advice on how to run it
-Possibly an Asana template
* Make an EA forum post on the competition at the end
* If you are a small student group maybe running a competition over the summer could be a cost-effective path to impact
* This type of competition could keep people engaged over the summer break if you don’t have much organizing capacity
* Some variants could possibly be more scalable and reproducible (as they do not rely on the honour system):
-Blog post nudge competition
-Audible/book reading nudge competition
-AGISF nudge competition (ask for the email of the facilitator for confirmation)
**Conclusion**
Overall, the AI Safety Nudge Competition showed promise in nudging people to complete their AI Safety projects but competitions like this could have a wider impact with more focus on marketing. We created a folder with all of the forms, documents and email templates we used so that other people could easily run a competition of this kind. Feel free to contact us if you want access.
1. **[^](#fnrefrh3ijor8ybc)**Some people who indicated that they wish to join our newsletter were already subscribed but forgot that they were subscribed.
2. **[^](#fnrefnigxzsw3myr)**We wish to note that we do have such a folder.
|
cfc66d23-28ba-497d-a1ad-5c7e9d920972
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
S-Curves for Trend Forecasting
**Epistemic Status**: Innovation research and business research is notoriously low quality, and so all the ideas here should be viewed through that lens. What's impressive about the S-curve and evolution trends literature is how remarkably self-consistent it is across a wide variety of research methods. Whether it's Simon Wardley analyzing news articles about different technologies, Clayton Christensen doing case studies of a specific industry, or Carlota Perez taking a historical approach of tracking different technologies, the same S-curve pattern and evolution trends seem to show up. This too should be taken into account when evaluating these ideas.
Basics
======
This is an S-curve.
The S-curve is a fundamental pattern that exists in many systems that have positive feedback loops and constraints. The curve speeds up due to the positive feedback loop, then slows down due to the constraints.
When the constraint is broken, the positive feedback loop ramps back up, until it hits another constraint.
*Recommended Resource:* [*Invisible Asymptotes*](https://www.eugenewei.com/blog/2018/5/21/invisible-asymptotes)*, which gives a visceral feel for this process of positive feedback and constraints*
**Common Mistake: Confusing S-Curves With Exponential Growth**
Sometimes, people get confused and call S-curves exponential growth. This isn't necessarily wrong but it can confuse their thinking. They forget that constraints exist and think that there will be exponential growth forever. When slowdowns happen, they think that it's the end of the growth - instead of considering that it may simply be another constraint and the start of another S-Curve. Knowledge of overlapping S-Curves can help you model these situations in a more sophisticated way.
Diffusion S-Curves
==================
The S-curve pattern is quite common in the spread of ideas, practices, and technologies, although it rarely looks quite as pretty. The example below shows "diffusion s-curves" - how a technology spreads through a population (in this case, US households).
The positive feedback loop in this case is word of mouth, and the constraints represent fundamental barriers to certain market segments or growth such as simplicity, usability, scalability, price, etc.
This creates smaller s-curves around adoption among specific market segments, and larger s-curves that represent the overall market penetration of the idea, practice, or technology.
*Recommended Resource:* [*Wikipedia on Diffusion of Innovation*](https://en.wikipedia.org/wiki/Diffusion_of_innovations)
Evolution S-Curves
==================
In addition to Diffusion S-curves in technology, ideas, and practices, there are Evolution S-Curves. These represent the increase in the traits of these ideas that make them usable in more situations and desirable for more people. When you break through a constraint in one of these properties through innovation, this can often coincide with "unlocking" a new diffusion curve by opening up a new market that wouldn't previously have used your technology or idea.
In this case the positive feedback loop is the increased understanding and expertise that comes from diffusion of a new innovation in your idea or technology, and the constraint represents fundamental assumptions in the idea, practice, or technology that must be changed through another innovation to make the idea, practice, or technology more desirable.
In the example below the desirable property is hardware speed. Fundamental leaps are made to break through a speed constraint, and then iterated on through the positive feedback loop of information and expertise increasing from adoption. This hits diminishing returns as the new innovation is optimized, and then a new fundamental innovation is needed to overcome the next constraint.
*Recommended Resource:* [*Open University on Evolution S-Curves*](https://www.open.edu/openlearn/nature-environment/organisations-environmental-management-and-innovation/content-section-1.7#:~:text=The%20S%2Dcurve%20shows%20the,item%20or%20organisation%20using%20it.)
**Common Mistake: Confusing Diffusion S-Curves with Evolution S-Curves**
Sometimes, I see people make the mistake of assuming that evolution and diffusion s-curves follow the same cycle. Most often, the mistake made here is assuming that when a particular innovation has saturated a certain market, that also means it has "reached its final form" and has no more evolving to do.
There is a related truth - often, an innovation becoming more diffuse will drive innovation as new use cases become apparent. And vice versa, often new innovations will open a new market up by creating use cases that were previously impossible.
However, the two types of curves are driven by two different feedback loops and two different constraints. There's no reason to expect that they will follow each other, and no reason to expect that one curve leveling off will cause the other curve to level off.
S-Curves Patterns
=================
S-curves become quite useful when paired with an understanding of evolutionary patterns. They can allow you to see in a broad sense what's coming next for an idea, practice or technology. They can prevent surprises and give you a tool to stay ahead of changes.
There are patterns that exist for both diffusion and evolution S-curves.
Diffusion Patterns
------------------
Diffusion patterns describe common themes that happen as trends diffuse through a population. They apply on the micro-level to individual population-segments, and on a macro-level to the overall population.
**Diffusion of Innovation**
The diffusion of innovation describes 5 separate stages of a diffusion curve: Innovators, Early Adopters, Early Majority, Late Majority, and Laggards. By understanding the traits of each of these groups, you can get a broad idea of what to expect, and how to slow or speed up adoption.
*Recommended Resource:* [*Diffusion of Innovations book by Everett Rogers*](https://www.amazon.com/Diffusion-Innovations-5th-Everett-Rogers/dp/0743222091)
**The Chasm**
The Chasm describes a common constraint that occurs in a market segment between "early adopters" - who are willing to put up with a lot, and "early majority", who expect a lot. There is often a number of evolutionary constraints that must be broken through to bridge this single diffusion constraint and many new ideas, practices, and technologies get stuck in the chasm for that reason.
*Recommended Resource:* [*Crossing the Chasm book by Geoffrey Moore*](https://www.amazon.com/Crossing-Chasm-3rd-Disruptive-Mainstream/dp/0062292986/ref=sr_1_1?dchild=1&keywords=crossing+the+chasm&qid=1610411704&s=books&sr=1-1)
**Common Mistake: Assuming a Technology is Irrelevant Because it's Only Useful for a Small Group**
A common mistake that I see is assuming a technology won't have a broader relevance, and using as evidence that it's only used by a small group of relatively abnormal people.
Now, what is true is that not all technologies eventually get adopted by everybody, some stay relatively niche. But it's not very good Bayesian evidence to say that because a technology is used by a small group of weird people, it will not have a broader impact. These diffusion patterns tell us that in fact that MOST technologies that eventually get widespread adoption go through this phase.
Furthermore, they tell us that many of those technologies often get stuck for a while at this early stage because of the Chasm. So even if a technology has stayed at this stage for a while (e.g. cryptocurrency), that's still very little evidence that the technology won't be life-changing in the future. (In contrast, a technology stalling for a long time at some point past the chasm is better evidence that it may have reached saturation.)
Evolution Patterns
------------------
Evolution patterns describe common ways that innovations evolve over time to become increasingly desirable. They apply on the micro-level to individual innovations within a trend, and on a macro-level to the evolution of trend as a whole.
**Wardley Evolution**
Innovations tend to go through four stages - the initial prototype, custom built versions, productized versions that compete, then commoditized versions that are all basically the same. By understanding where you are, you can understand the type of competition likely to happen, the types of processes likely to yield improvements, and large changes that will be needed to stick with the market.
*Recommended Resource:* [*Learn Wardley Mapping- Free Resource from Ben Mosior*](https://learnwardleymapping.com/)
**Common Mistake: Not reasoning about likely changes in how the market will be structured.**
A common mistake I see when people reason about the future of e.g. Machine Learning, is that they reason as if the current economic style (how people make money from machine learning) will continue the way it has been.
What Wardley Evolution tells us is rather that the way a market charges for and makes money from a particular innovation very frequently changes, and that change tends to be fairly predictable.
For instance, I've seen analysis of Machine learning that assumes it will continue to be productized (which leads to very different dynamics in terms of competitive landscape and strategy between different AI vendors), rather than recognizing that it will eventually be commoditized and become a utility.
**Simplicity - Complexity - Simplicity**
Innovations tend to start out relatively simple as a new approach to a problem. They become increasingly complex to cover more use cases and be more robust, and then become simple again as refinements are made and they're distilled to their essence.
*Recommended Resource:* [*TRIZ for Dummies book by Lilly Haines-Gadd*](https://www.amazon.com/dp/B01CGEK8XG/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1)
**Common Mistake: Assuming a Particular Innovation is a Dead End Because It's Gotten Too Complex**
One mistake I see pretty frequently is people describing a particular innovation and saying "well, we've added more and more complexity to this and it's gotten increasingly minimal returns, so I expect there not to be too much more innovation in this area."
This can be true, but only if there are other indicators that this is already at the end of the innovation curve. Oftentimes, what's actually happened is that it's near the midpoint of it's innovation curve, and the next innovations will be around compressing/simplifying all the things that have been added. This simplification process then allows the innovation to be used a component to build further innovations off of, as it's simple enough to be commoditized.
**Disruptive Innovation**

Sometimes, innovations overshoot the mainstream population's needs on a particular dimension in order to be powerful for a particularly lucrative part of the population. In this case, these innovations are often overtaken by subsequent innovations that lower the performance on that dimension in order to raise it on other dimensions (for example: lower the flexibility of a software product but raise its simplicity); these innovations can then "disrupt" the original innovation.
From the perspective of the current innovation, the disruptive innovation appears to start below it on the s-curve, but it's able to gain adoption because the particular performance feature of the incumbent is already higher than the market needs, and the new product competes on a different performance feature that the incumbent isn't even targeting.
*Recommended Resource:* [*The Innovator's Dillema - Book by Clayton Christensen*](https://www.amazon.com/Innovators-Dilemma-Revolutionary-Change-Business/dp/0062060244)
**Common Mistake: Assuming a Particular Player will Win Because They're Big and Have Lots of Resources**
One understandable assumption to make is that big players with more resources will always win. This isn't necessarily a bad assumption to make - disruptive innovations are much rarer than sustaining innovations.
However, having the disruptive innovation model can help you not make the mistake of just assuming that there's nothing that can topple the current champ - it gives you a clear model of exactly how this happens, and may even point out industries or areas where you're more likely to see this disruption take place.
**Gartner Hype Cycle**
The Gartner Hype Cycle describes a particular way that the media over-inflates people's expectations of new innovations in comparison to how evolved they actually are for a particular market segment's needs.
*Recommended Resource:* [*Mastering the Hype Cycle - Book by Jackie Fenn*](https://www.amazon.com/Mastering-Hype-Cycle-Innovation-Gartner/dp/1422121100/ref=sr_1_1?dchild=1&keywords=mastering+the+hype+cycle&qid=1610412213&s=books&sr=1-1) *(Disclaimer: Haven't read this one, only aware of the Gartner Hype Cycle in passing)*
**Common Mistake: Discounting a Particular Technology Because it Was Overhyped in the Past**
I've frequently seen arguments of the form - "Oh, you think this technology will have a massive impact? That's what they were saying a couple years ago, and they massively overpromised."
Like other patterns, this is not saying that there aren't technologies that are massively overhyped and don't pan out. However, knowledge of the Gartner Hype Cycle can show you that almost all popular technologies were once overhyped, so the argument that "this technology was overhyped in the past" isn't very good evidence of how transformative it will be. Rather, you'll want to map it against an evolution S-curve to see how overhyped you'd expect it to be relative to its current level of evolution.
**Windermere Buying Hierarchy**
The Windermere Buying Hierarchy describes four different improvement focuses that an innovation optimizes over time. First, it's trying to solve for functionality, then reliability, then convenience, and finally price. This loosely maps to the stages of Wardley Evolution.
*Recommended Resource: Haven't found a good one, learned about it through Clayton Christensen's work.*
**Common Mistake: Using Reliability, Convenience or Price as a Reason an Innovation Won't be Successful**
You know the drill by now... it's not that reliability, convenience, or price are never reasons that a technology fails. But you'll want to map these against the evolution S-curves. It's common to see arguments about a technology not being viable because it's too expensive, when the S-curve is still way back at the early stage and we wouldn't even have expected the market to start thinking about price optimization yet.
Only if the market has already reached that point in the S-curve and optimized that trait as much as it could should you use this as a viable reason to expect the technology not to spread further.
Conclusion
==========
S-curves and s-curve patterns are a useful tool for quickly analyzing systems, particularly when looking at diffusion of trends and evolution of innovations. They can heuristically identify solutions and probabilities that would otherwise be quite time consuming to figure out using something like a full system or functional analysis.
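To make the underlying shape concrete, here's a minimal sketch (my own illustration, not from the original analysis) of the logistic function that adoption and evolution S-curves are typically modeled with; the parameters are arbitrary:

```python
# Minimal sketch of an S-curve: slow start, rapid middle, saturating end.
import math

def s_curve(t, ceiling=1.0, midpoint=5.0, steepness=1.0):
    """Logistic adoption curve at time t."""
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

for year in range(11):
    print(f"year {year:2d}: adoption ~ {s_curve(year):.2f}")
```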
Hopefully you find this tool useful in your quest to understand all the things.
|
f1c7fcd8-521b-4754-9fd3-e02c609f3486
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Doxa, Episteme, and Gnosis Revisited
Exactly two years to the day I started writing this post I published Map and Territory's most popular post of all time, "Doxa, Episteme, and Gnosis" (also here on LW). In that post I describe a distinction ancient Greek made between three kinds of knowledge we might translate as hearsay, justified belief, and direct experience, respectively, although if I'm being totally honest I'm nowhere close to being a classics scholar so I probably drew a distinction between the three askew to the one ancient Attic Greeks would have made. Historical accuracy aside, the distinction has proven useful over the past couple years to myself and others, so I thought it was worth revisiting in light of all I have learned in the intervening time.
Nuanced Distinctions
To start, I still draw the categories of doxa, episteme, and gnosis roughly the same as I did before. To quote myself:
> Doxa is what in English we might call hearsay. It’s the stuff you know because someone told you about it. If you know the Earth is round because you read it in a book, that’s doxa.
> Episteme is what we most often mean by “knowledge” in English. It’s the stuff you know because you thought about it and reasoned it out. If you know the Earth is round because you measured shadows at different locations and did the math to prove that the only logical conclusion is that the Earth is round, that’s episteme.
> Gnosis has no good equivalent in English, but the closest we come is when people talk about personal experience because gnosis is the stuff you know because you experienced it. If you know the Earth is round because you traveled all the way around it or observed it from space, that’s gnosis.
There's more nuance to it than that, of course. Doxa, for example, also refers to thoughts, beliefs, ideas, propositions, statements, and words in addition to its connotations of hearsay, common belief, and popular opinion. Episteme, to Plato, was the combination of doxa and logos, contrary to my example above wh
|
1bb174b3-80b1-4881-a427-794da9254778
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Austin meetup notes Nov. 16, 2019: SSC discussion
The following is a writeup (pursuant to Mingyuan's proposal) of the discussion at the Austin LW/SSC Meetup on November 16, 2019, at which we discussed six different SlateStarCodex articles. We meet every Saturday at 1:30pm - if you're in the area, come join us!
You are welcome to use the comments below to continue discussing any of the topics raised here. I also welcome meta-level feedback: How do you like this article format? What sorts of meetups lead to interesting writeups?
Disclaimer: I took pains to make it clear before, during, and after the meetup that I was taking notes for posting on LessWrong later. I do not endorse posting meetup writeups without the knowledge and consent of those present!
The Atomic Bomb Considered As Hungarian High School Science Fair Project
There was a Medium post on John von Neumann, which was discussed on Hacker News, which linked to the aforementioned SSC article on why there were lots of smart people in Budapest 1880-1920.
Who was John von Neumann? - One of the founders of computer science, founder of game theory, nuclear strategist. For all his brilliance he's fairly unknown generally. Everyone who knew him said he was an even quicker thinker than Einstein; but why didn't he achieve as much as Einstein? Perhaps because he died of cancer at 53.
Scott Alexander says: {Ashkenazi Jews are smart. Adaptations can have both up- and down-sides (e.g. sickle cell anemia / malaria resistance); likewise some genes cause genetic disorders and also intelligence. These are common in Ashkenazim.}
Jews were forced into finance because Christians weren't allowed to charge interest on loans, but it turned out interest was really useful.
Scott Alexander says: {And why this time period? Because restrictions on Jews only started being lifted just before this period, and they needed a generation or so to pass before they could be successful. And afterward, Nazis happened. Why Hungary and not Germany? Hungary has a "primate city" (Budapest), i.
|
fe499dc0-5696-4331-a955-a5a8af84b16a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Rules of productivity (mostly links)
Rules of productivity-- summarizes research showing that overtime is destructive (except possibly in short bursts with time allowed for recovery), small undistracted teams do best, etc. Perhaps the most interesting detail for rationality is the idea that tired people can't judge how much their work has deteriorated.
Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency. The title pretty much covers it. A review which covers many points, but doesn't include the book's emphasis on rational deadlines.
|
bd672a79-1c3b-4be2-9a15-8734344e0d44
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Why We Use Money? - A Walrasian View
Knut Wicksell (1851-1926)
> **“Money is a machine whose function is to do quickly and conveniently what would be done, though less quickly and conveniently, without it” — J.S Mill**
---
♥ **An Introduction to an Ignored Problem:**
In undergraduate economics courses, particularly in Money and Banking, it is quite common for students to be taught that money is adequately defined as a means of general exchange between economic agents and that it is an institution created for the purpose of eliminating the inconveniences of barter.
However, the student rarely asks himself the reason for this, much less observes that the logic given to him is ***wrong***. What does it ultimately mean for currency to be a general medium of exchange? And why is barter a less desirable option than indirect exchange using money?
When justifying their argument about the superiority of monetary transactions over barter, professors usually talk about a problem called ***“double coincidence of wants”***. This problem supposedly inherent to barter develops as follows: In an economy without money, people would have to transact goods of equal value. I produce apples and you produce oranges, so if I want your oranges I simply exchange with you an equivalent of apples for an equivalent of oranges. The problem arises when, due to variations in subjective valuations, you interpret that my number of apples is not worth your number of oranges. A divergence occurs between my desire for your oranges and your desire for my apples. In this case, I would not be able to transact with you and I would not get the goods I want, creating an inefficient situation from a Paretian point of view[[1]](#fn22bvynpx2hq).
With money this problem would be solved, as it is a good accepted by all market agents in exchange for their production. Thus, it is a general means of exchange valued by all market agents. With it, I can exchange my apple production for its money equivalent and then exchange that money for your orange production. The desired trades are satisfied and we then have a Pareto-efficient situation.
However, the economic logic is much more complex when looked at in depth.
The “double coincidence of wants” argument was originally introduced by the English economist William S. Jevons (1835 - 1882) in his 1875 treatise *“Money and the Mechanism of Exchange”*. Jevons noted that in a barter economy not only could a divergence between agents' preferences happen, but also that a highly inefficient complexity would arise in relation to “exchange rates” between different goods. Each good and service would have to be priced in terms of all other goods, with a ratio expressing how much of a given good would be worth in terms of each of the others. We can represent this rule as follows:
\(P(n) = \frac{N(N-1)}{2}\)
Where P(n) is the quantity of prices in a barter economy and N is the number of goods and services in that economy. Thus, in an economy that has 100 goods, for example, there would be a total of 4950 prices or “exchange rates” between one good and the others. According to Jevons, the problem of adjusting each commodity, often indivisible, to its exchange rates could be avoided by choosing a common medium of exchange that unifies all exchange rates, such as a commodity that takes on the role of money[[2]](#fnfg9ghur05cu).
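To make the combinatorics concrete, here is a minimal sketch (my own illustration; the function names are not from the text) comparing the number of barter "exchange rates" with the number of money prices:

```python
# Number of bilateral exchange rates in a barter economy versus the number of
# prices once a single commodity serves as the common medium of exchange.
def barter_prices(n_goods: int) -> int:
    """Every unordered pair of goods needs its own exchange rate."""
    return n_goods * (n_goods - 1) // 2

def money_prices(n_goods: int) -> int:
    """With money, each of the other goods has a single money price."""
    return n_goods - 1

for n in (10, 100, 1000):
    print(f"{n} goods: {barter_prices(n)} barter rates vs {money_prices(n)} money prices")
```

For 100 goods this reproduces the 4,950 exchange rates mentioned above.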
Despite the sophisticated logic used by Jevons, it has a limitation. Note, dear reader, that Jevons and your undergraduate professor talk about a scenario where there are only two people transacting their productions. But what happens when we introduce a *third* person and a third commodity into our hypothetical scenario? What Swedish economist Knut Wicksell (1851 - 1926) observed in doing so is that the double coincidence of desires argument does not hold.
Imagine an economy where there are three people: Pedro, Lucas and Gabriel. Each of these people has a capital good or productive factor that allows them to produce different goods. Pedro produces A, Lucas produces B and Gabriel produces C. Even though each individual likes what they produce, they have stronger preferences for the goods of other market participants. Pedro wants Lucas's goods, Lucas wants Gabriel's goods, and Gabriel wants Pedro's goods. In this scenario, efficiency cannot be achieved through bilateral exchange gains, but only through multilateral gains; that is, the optimal allocation of resources will only be established when the individuals in question establish a *circular flow of exchange*. The shape of this exchange is known as the Wicksell Triangle:

If the parties have aligned preferences and trust each other, then, after meeting for some initial period of time, they exchange their assets and the optimal allocation is reached. But what would happen if individuals didn't trust each other? This situation could be resolved by creating legal obligations (rights) over each other's production. Assuming that the enforcement of property rights is perfect, this would mean that individuals could transact without fear of opportunism. But what if they had unaligned preferences? What would happen if Gabriel did not want Lucas's products in exchange for his produce and Lucas did not accept Pedro's goods? In this case, the circular flow would be broken and the economy would return to a pareto-inefficient situation[[3]](#fn93odhb3z4u). But is this really necessary?
What Wicksell observed is that such a situation can be resolved if individuals *arbitrage* between their productions. Pedro may not wish to accept the property rights over Gabriel's production C for his own consumption, but he can perfectly well hold C in order to then exchange it with Lucas for his production B, which he does value. Thus, by carrying out arbitrage between goods, individuals are able to maintain an efficient exchange economy even with barter and without the existence of a currency, given that property titles over production are mere legal representations of fractions of individuals' real production and not a separate commodity used as a general means of exchange. Goods A, B and C (or their title deeds) are all at the same time means of exchange[[4]](#fnexfqp5q19nk).
This is a scalable conclusion. The more people, the less likely a problem of double coincidence of desires will occur. Thus, an economy without currency can be theoretically efficient.
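A tiny sketch of that circular flow (my own toy illustration; the names follow the example above): no bilateral swap is mutually agreeable, yet passing goods around the triangle leaves everyone holding what they want.

```python
# Toy illustration of the Wicksell Triangle: no pair has a double coincidence
# of wants, but a circular chain of exchanges satisfies everyone.
holdings = {"Pedro": "A", "Lucas": "B", "Gabriel": "C"}
wants    = {"Pedro": "B", "Lucas": "C", "Gabriel": "A"}

# A bilateral swap only happens if each side wants what the other holds.
for a, b in [("Pedro", "Lucas"), ("Lucas", "Gabriel"), ("Pedro", "Gabriel")]:
    ok = wants[a] == holdings[b] and wants[b] == holdings[a]
    print(f"{a} <-> {b}: {'trade' if ok else 'no trade'}")

# Multilateral clearing: each good is passed along to whoever wants it.
after_clearing = {person: wants[person] for person in holdings}
print("after circular exchange:", after_clearing)
print("everyone satisfied:", all(after_clearing[p] == wants[p] for p in wants))
```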
♠ **The Walrasian Medium of Exchange Problem:**
Wicksell's conclusion may seem strange at first glance, but it is just a development of monetary thinking within the general equilibrium perspective. The notion that any commodity could be used as a medium of exchange and that a general medium of exchange would not be necessary for efficient allocation was originally expounded by the great French mathematician Leon Walras (1834 - 1910).
Leon Walras (1834 - 1910)

Walras' reasoning is quite simple. The moment a society or group of individuals chooses to use a certain commodity X as a means of exchange, an economic choice problem arises. The total quantity of X available for consumption decreases because part of it is simultaneously used as a medium of exchange; this is equivalent to a scenario where the supply of X for consumption is reduced. Since there is less quantity supplied in the market, the relative price of X in relation to other goods increases, which is equivalent to saying that the prices of all goods are *deflated* in relation to X. The total of X is thus divided between a quantity for consumption, *Qx*, and a quantity used as a medium of exchange, *Mx*, such that there is a relationship or coefficient of choice between the total of X and these quantity variables as follows:
\(Q_x + M_x = nQ_t\)
Where *n* is the relationship between the total of X and the chosen quantity of X to be a medium of exchange. Rearranging the equation we have:
\(\frac{Q_x}{n} + \frac{M_x}{n} = Q_t\)
However, it is worth remembering that the amount of medium of exchange necessary to maintain the flow of exchange depends on the price and quantities of other goods that agents wish to consume, in such a way that:
\(M_x = Q_x + Q_yP_y + Q_zP_z\,\dots\)
Considering that Mx has to be taken in its relationship with the total quantity, we have that:
\(\frac{M_x}{n} = \frac{Q_yP_y}{n} + \frac{Q_zP_z}{n}\,\dots\)
From this relationship we draw that there is such an equivalence between the quantity of medium of exchange and the quantity of goods that would be purchased with the currency and the quantity of X for consumption that an automatic regulation of the quantity of medium of exchange would occur by *n*. If the quantity of currency is greater than the balance between supply and demand, then Mx would be transformed back into a commodity for consumption while the quantity for consumption had more value than the quantity for the medium of exchange (inflation process) until the price equilibrium between the two was once again established.
Following Walras' logic, money would not be strictly necessary. People could simply adopt means of exchange as needed and, following the reasoning later developed in the Wicksell Triangle, reach an efficient balance. In Walras's thinking there is no argument for why one commodity is adopted as a means of general exchange and another is not. It is simply meaningless to choose a currency. For this reason, the thinking of Walras and Wicksell favors the use of the cheapest possible means of exchange. Using something like gold as a medium of exchange would be inefficient, given that its value for consumption is higher than for a medium of exchange, for example. It is much more rational to use cheaper means of exchange, such as *pieces of painted paper*.
♦ **The Problem of Transaction Costs and the Rationality of Money:**
Although the logic elaborated by Walras and Wicksell makes sense, it leads to absurd conclusions and does not address certain problems that lead to a rational choice to adopt a currency.
The American economist Robert Clower (1926 - 2011) observed that in a world governed by the logic presented by Walras, individuals would maximize utility according to the following restriction[[5]](#fnyveij3qcw98):
\(\sum_{i=1}^{n} P_i\,(d_i^J - S_i^J) = M^J - M^{J'}\)
Where \(d_i^J\) represents a desired quantity of goods, \(S_i^J\) represents the initial quantity of goods, \(M^J\) is the initial quantity of medium of exchange, \(M^{J'}\) is the desired quantity, and \(P_i\) is a price index in terms of the medium of exchange, \(P_1, \dots, P_n\). What this model ends up postulating is that any change in the quantity of goods or in the quantity of medium of exchange will have exactly the same effect, given that both are mutually adjustable. That is, even if \(M^{J'} = 0\) for all but one individual, this does not imply that individuals in this economy will have no influence on demand, since \(S_i^J\) enters the restriction equation in the same way as \(M^{J'}\), so that the real goods of the economy would be indistinguishable from the quantities used as a medium of exchange, and both would be sources of excess demand in markets. Thus, the conclusion of the Walrasian model is that an increase in the supply of goods has an *inflationary* effect, which is absurd in the face of economic data and experience.
What Clower shows is that there must be a commodity chosen as a general medium of exchange that serves as a source of excess demand in the markets. Therefore, there must be money. But why should there be a currency? Clower's argument does not provide us with an explanation of why currencies exist, in opposition to the Walrasian argument; it only shows that the Walrasian view has absurd conclusions from a macroeconomic point of view.
The person who would provide a rational choice explanation for the existence of money would be Clower's colleague at UCLA, the brilliant economist Armen Alchian (1914 - 2013). Alchian would demonstrate that money exists primarily as an institution that reduces transaction costs, particularly the information cost involved in looking for people to transact with in terms of the medium of exchange.
Consider the simple direct exchange model of the Wicksell Triangle where there are three goods A, B and C. In a situation where goods are exchanged directly, the resource limitation in such an economy can be analytically expressed by:
\((0,\ \dots,\ -1,\ \dots,\ 1,\ \dots)\)
Where 0 represents an initial moment where everyone has an initial endowment of goods, -1 represents the moment where the individual exchanges his commodity for a medium-of-exchange-type commodity, and 1 is the moment where he exchanges this medium of exchange for another commodity that he values at a price at least equal to that of the initial commodity. The Walrasian question in such a situation would be: ***Why would such an exchange equilibrium be less preferable than an exchange equilibrium where a commodity acts as a general medium of exchange?***
Suppose that \(C_{ab}\) and \(C_{ac}\) are the costs of exchanging A for B and A for C respectively. Such costs are naturally composed of two parts, given the buy-and-sell character of each exchange in the circular flow, so that:
\(C_{ab} = C_a^v + C_b^c\)
\(C_{ac} = C_a^v + C_c^c\)
Where \(C^v\) is the selling cost and \(C^c\) is the purchasing cost. This decomposition leads us to the counterintuitive conclusion that logically:
\(C_{ac} = C_a^v + C_c^c < C_a^v + C_b^c + C_b^v + C_c^c = C_{ab} + C_{bc}\)
Therefore, Walras and Wicksell are right when they say that it makes no sense to argue that a currency should arise from considerations of the costs of buying and selling in exchange relationships between individuals. However, this changes when we analyze the issue from the perspective of transaction costs. To do this, we will consider that P is the probability of an individual wanting to exchange a commodity (be it A, B or C) and that, naturally, there is no correlation between what an individual intends to buy and what he intends to sell. So, let's say, \(P_a\) and \(P_b\) could express the probabilities involved in an individual wanting to exchange A for B. In a direct exchange economy, a merchant who wants to exchange A for B will have to make a series of encounters such that the expected number of attempts is:

\(\frac{1}{P_aP_b}\)
In other words, the inverse of the fraction of people who really want to make the opposite exchange. A person who instead wants to go from A to C in an economy of intermediate exchanges, using B as the general means of exchange, will have their number of attempts defined by:

\(\frac{1}{P_aP_b} + \frac{1}{P_bP_c}\)
In such a way that we logically conclude that:
\(\frac{1}{P_aP_c} > \frac{1}{P_aP_b} + \frac{1}{P_bP_c}\)

whenever \(P_b > P_a + P_c\).
In other words, an economy of exchange using currency (general medium of exchange) will be preferable to one of direct exchange as long as the intermediate commodity is better at reducing information costs than other commodities.
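A quick numeric sketch of that comparison (my own illustration; the acceptance probabilities are made up): B pays off as a medium of exchange exactly when it is widely enough accepted.

```python
# Toy check of the search-cost comparison: expected encounters needed to trade
# A directly for C versus going through B as a medium of exchange.
def direct_attempts(p_a, p_c):
    return 1 / (p_a * p_c)

def indirect_attempts(p_a, p_b, p_c):
    return 1 / (p_a * p_b) + 1 / (p_b * p_c)

# Made-up acceptance probabilities: B is far more widely traded than A or C.
p_a, p_b, p_c = 0.05, 0.60, 0.05
print("direct A->C:  ", direct_attempts(p_a, p_c))          # 400 encounters
print("via B (money):", indirect_attempts(p_a, p_b, p_c))   # ~66.7 encounters
print("money worthwhile:", p_b > p_a + p_c)                 # the condition above
```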
In abstract terms: A commodity X will be chosen as the general medium of exchange since **X = max(Pb)**.
Armen Alchian
**BIBLIOGRAPHY:**
— WICKSELL, Knut. Lessons in Political Economy. Abril Cultural, São Paulo, 1985;
— WALRAS, Leon. Compendium of the Elements of Pure Political Economy. Abril Cultural, São Paulo, 1985;
— CLOWER, Robert. A Reconsideration of the Microfoundations of Monetary Theory. Economic Inquiry, vol. 6, no. 1, p. 1-8, 1967;
— ALCHIAN, Armen A. Why Money?. Journal of Money, Credit and Banking, vol. 9, no. 1, p. 133-140, 1977;
— OSTROY, Joseph M.; STARR, Ross M. The Transactions Role of Money. Handbook of Monetary Economics, vol. 1, p. 3-62, 1990.
1. **[^](#fnref22bvynpx2hq)**The smart student will quickly connect the previous explanation with the [Edgeworth Box](https://en.wikipedia.org/wiki/Edgeworth_box):

2. **[^](#fnreffg9ghur05cu)**The same logic applies to explain why, despite having flexible exchange rate regimes, most countries in the world use a reference currency (the US Dollar) when carrying out international transactions.
3. **[^](#fnref93odhb3z4u)**My colleague Jean Brückener made the observation that this scenario would be maintained or even worsened if we introduced time variation between exchanges. What if agents exchanged their assets in installments? There would be a problem of intertemporal divergence of preferences and, since real interest is given as a function of this intertemporal exchange of real goods, there would also be a distortion in the price of time in this economy. Although the observations make sense, time variation is explicitly disregarded in Wicksell's analyses and I ended up not going into this dimension of the problem.
4. **[^](#fnrefexfqp5q19nk)**The smart student will observe that this is the exchange mechanism implicit in the Arrow-Debreu General Equilibrium Model
5. **[^](#fnrefyveij3qcw98)**Assuming that: \(\max U(d_1^J, \dots, d_n^J, M^J/P)\)
|
01db6b12-0fba-4710-b761-353635af0329
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Information Markets 2: Optimally Shaped Reward Bets
Sequel to Information Markets, which contains a long text outline of what I consider to be the correct alternative to Prediction Markets, which I don't like for a long list of reasons.
This post is intended to fill in the gap in the original regarding what an ideally shaped bet to prove a belief to someone wanting to buy true information efficiently actually looks like.
Aligned Incentives
Suppose the seller has utility function \(U_s\), and the community \(U_c\). Without the seller's information, the market will believe distribution \(X\), and so make decision \(D\). With the seller's information added, the market will believe distribution \(X'\), and so make decision \(D'\).
For the seller to have aligned incentives, we need:
\(E[U_s \mid X' + D'] > E[U_s \mid X' + D]\) (if the info is true, you should share it)
\(E[U_s \mid X + D] > E[U_s \mid X + D']\) (if the info is false, you shouldn't share it).
The difference between these two defines a bias for or against telling the truth, which may need to be overcome to convince them to share honestly. For now, we assume an unbiased source has no difference in utility depending on the market's decision, so that complexity can be skipped over. They can still have a difference in utility in terms of the effort and cost of making the claim, or the reward the market offers for telling it information, and we can give them shaped incentives with an Optimally Shaped Bet.
Reshaping Interests
To motivate a seller to communicate information, we want to offer them \(R(X';X)\) in profit for telling us \(X'\), if it's actually true. If it's false, we want a loss sufficient to pay someone for correcting it back:
\(E[\mathrm{Bet}(X;X) \mid X] = 0\) (adding no information gives no payout)
\(E[\mathrm{Bet}(X';X) \mid X] = -E[\mathrm{Bet}(X;X') \mid X]\) (a wrong bet must lose enough to correct it back)
\(E[\mathrm{Bet}(X';X) \mid X'] = R(X';X) > 0\) (a correct bet gives you expected profit)
For our function to be optimal, we need that the seller who believes X′ is best off buying a bet that communicates their sincere beliefs, meaning that:
\(X' = \operatorname{argmax}_{X_C} E[\mathrm{Bet}(X_C;X) \mid X']\)
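One familiar bet shape that happens to satisfy these conditions is a log-scoring bet paying \(\log X_C(\omega) - \log X(\omega)\) on outcome \(\omega\); this is my own sanity-check sketch, not necessarily the construction intended here. A quick numeric check on a binary event:

```python
# Sketch (not the post's construction): check that a log-scoring bet,
# Bet(claim; prior) = log claim(outcome) - log prior(outcome),
# satisfies the three conditions above for a binary event.
import math

def expected_bet(claim, prior, truth):
    """Expected payout of Bet(claim; prior) when outcomes follow `truth` (binary)."""
    return sum(t * (math.log(c) - math.log(p))
               for t, c, p in zip(truth, claim, prior))

X  = (0.30, 0.70)   # market's current belief
Xp = (0.60, 0.40)   # seller's belief X'

print(expected_bet(X, X, X))                          # 0.0: adding no information, no payout
wrong = expected_bet(Xp, X, truth=X)                  # expected loss of a wrong bet
fix = -expected_bet(X, Xp, truth=X)                   # what it costs to correct it back
print(round(wrong, 9) == round(fix, 9))               # True
print(expected_bet(Xp, X, truth=Xp) > 0)              # True: a correct bet profits in expectation

# Honest reporting maximizes the expected payout over all claimed distributions.
claims = [(q / 100, 1 - q / 100) for q in range(1, 100)]
best = max(claims, key=lambda c: expected_bet(c, X, truth=Xp))
print(tuple(round(v, 2) for v in best))               # (0.6, 0.4): the seller's true belief
```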
We also
|
99bf6524-af29-498c-8b63-9f442ba711ea
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Panel discussion on AI consciousness with Rob Long and Jeff Sebo
Intro
Recent 80k guest and philosopher specializing in AI consciousness Rob Long (@rgb) recently participated in a panel discussion on his paper "Consciousness in Artificial Intelligence: Insights from the Science of Consciousness" (pdf) with co-authors Patrick Butlin, Yoshua Bengio, and Grace Lindsay and moderator Jeff Sebo (@jeffsebo).
You can watch it on Youtube (below), watch/listen as a podcast on Spotify [1], or read the transcript[2] below.
Paper abstract
> Whether current or near-term AI systems could be conscious is a topic of scientific interest and increasing public concern. This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness. We survey several prominent scientific theories of consciousness, including recurrent processing theory, global workspace theory, higher-order theories, predictive processing, and attention schema theory. From these theories we derive "indicator properties" of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties. We use these indicator properties to assess several recent AI systems, and we discuss how future systems might implement them. Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators.
Youtube description
This event took place on Tuesday September 5, 2023 and was hosted by the NYU Mind, Ethics, and Policy Program.
About the event
This panel discussion featured four authors from the recently released and widely discussed AI consciousness report. This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of the best-supported neuroscientific theories of consciousness. The paper su
|
10de1785-9fdc-4f59-84a6-b20c82f20bf2
|
StampyAI/alignment-research-dataset/special_docs
|
Other
|
Reducing long-term risks from malevolent actors
Reducing long-term risks from malevolent actors
===============================================
7 July 2020
by [David Althaus](https://longtermrisk.org/author/david-althaus/ "Posts by David Althaus") and [Tobias Baumann](https://longtermrisk.org/author/tobias-baumann/ "Posts by Tobias Baumann")
### Summary
* Dictators who exhibited highly narcissistic, psychopathic, or sadistic traits were involved in some of the greatest catastrophes in human history.
* Malevolent individuals in positions of power could negatively affect humanity’s long-term trajectory by, for example, exacerbating international conflict or other broad risk factors.
* Malevolent humans with access to advanced technology—such as whole brain emulation or other forms of transformative AI—could cause serious existential risks and suffering risks.
* We therefore consider interventions to reduce the expected influence of malevolent humans on the long-term future.
* The development of manipulation-proof measures of malevolence seems valuable, since they could be used to screen for malevolent humans in high-impact settings, such as heads of government or CEOs.
* We also explore possible future technologies that may offer unprecedented leverage to mitigate against malevolent traits.
* Selecting against psychopathic and sadistic tendencies in genetically enhanced, highly intelligent humans might be particularly important. However, risks of unintended negative consequences must be handled with extreme caution.
* We argue that further work on reducing malevolence would be valuable from many moral perspectives and constitutes a promising focus area for longtermist EAs.

**Full article**

* [PDF](https://longtermrisk.org/files/Reducing_long_term_risks_from_malevolent_actors.pdf)
* [EA Forum post](https://forum.effectivealtruism.org/posts/LpkXtFXdsRd4rG8Kb/reducing-long-term-risks-from-malevolent-actors)
|
0d6890f5-8ecc-4c7d-946f-e62de7a744c7
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Given the Restrict Act, Don’t Ban TikTok
While others have already written similar posts, given my previous position it seems necessary for me to do so as well – the Restrict Act is a no good very bad bill, and having seen the bill I realize that I was wrong to support banning TikTok.
For a while I have been in favor of banning TikTok, if it does not divest its Chinese ownership and modify the software to stop collecting outside user data. TikTok is Chinese spyware. Letting it be installed on most phones, given the data it is collecting, is not something we can or should abide.
Then someone actually proposed a bill, S 686 or the Restrict Act. I was reminded why it is almost never a good idea to ban things. Rather than a narrow bill to allow the banning of TikTok, we got a bill that vastly expands government powers, a Patriot Act for the Internet.
This is why we cannot have nice things. These people clearly cannot be trusted to regulate such matters.
So I’m admitting I was wrong. TikTok is still Chinese spyware, it is still not a good use of your time, you personally should not use it.
If this is how things are, however, then that’s where this ends. Don’t ban TikTok.
TIKTOK IS CHINESE SPYWARE
TikTok is Chinese Spyware. It can’t read your texts and emails directly, but it can do a lot of other things short of that. If we have the ability to ban TikTok without vastly destroying our civil liberties in general, we should ban TikTok.
Noah Smith agrees.
> Spying is the most commonly cited reason for banning TikTok, because it’s the easiest to prove. Tiktok has admitted tracking journalists’ physical movements and sending the data to its Chinese parent company. But physical location is probably only the tip of the iceberg of the data TikTok can collect, which includes faceprints, voiceprints, browsing history, text messages, and pretty much anything you do on your phone. And as Ben Thompson wrote back in 2020, that information basically becomes the property of the Chinese Communist Party.
>
> [From Ben]:
|
edc66493-9fec-494b-92d5-51623d2b7a32
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How much white collar work could be automated using existing ML models?
Assume that progress in AI halts right now and only the currently published breakthroughs are available. GPT-3, PaLM, Flamingo for text generation and Imagen, Dall-E 2 for image generation.
Let's say these models all become available via paid APIs (which also allow fine tuning on custom data), and we let a few years pass for startups to create nice plugins and interfaces for non-technical people to use these models in their work. This type of tooling becomes available abundantly, with competition pushing prices down to some small multiple of the compute cost to run the models.
A copywriter has a paid subscription to a GPT-3/Palm plugin. A customer support agent has access to a fine-tuned version of a question-answering PaLM. A designer has a subscription to Imagen/Dall-E 2 built into their software for generating ideas to build on or use for complex in-painting.
How much should we expect global white collar productivity to increase just from letting these existing models spread their impact across the economy? I'm looking for a high-level estimate — could we potentially automate 1% of all office-hours (or conversely, increase productivity by ~1%), or 10% or 30%?
My intuition
To me it looks like ML has gone through a step change recently where the economic impact has gone from small, but meaningful to potentially very large. Image and text models a few years ago were impressive, but the relevant use-cases were mostly in improving backend-type workflows like search, recommendations or targeted ads. Now, ML model capabilities seem powerful enough to start being visible in GDP growth rates, albeit with a fairly long time-delay to allow for building tooling.
I suspect the progress that a few teams of ~1000 people have made over the last ~2 years may have by themselves, without further fundamental tech improvements, increased global GDP by multiple percentage points (although this will take a few years to fully materialise).
|
5d7fb0d1-b396-476e-8f59-d0a4c5277bee
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Populectomy.ai
A long-form iteration of my "AI will lead to massive violent population reduction" argument: https://populectomy.ai
The host name, populectomy, is my attempt at naming the described outcome, a name that I hope to be workable (sufficiently evocative and pithy, without being glib). Otherwise I'm out 150 USD for the domain registration, ai domains come at a premium.
I've mimicked the paper-as-website model with <bad-outcome>.ai domain name used by @Jan_Kulveit , @Raymond D @Nora_Ammann , @Deger Turan , David Krueger, and @David Duvenaud for Gradual Disempowerment. Mimicry being the highest form of flattery and what-not. Nice serif font styling like theirs is on my wish list.
Here's my previous post on the topic.
A few words may shortcut reading the whole thing, especially if you've read the previous post:
1. In "Shell Games and Flinches" @Jan_Kulveit provides a "shortest useful summary" to the Gradual Disemplowerment paper's core argument:
"To the extent human civilization is human-aligned, most of the reason for the alignment is that humans are extremely useful to various social systems like the economy, and states, or as substrate of cultural evolution. When human cognition ceases to be useful, we should expect these systems to become less aligned, leading to human disempowerment."
I basically agree with that statement. However, I think it is effectively trumped by this one, the shortest useful summary of Populectomy: Human civilization is a system of large-scale human cooperation made possible by the fact that killing many humans requires many other willing human collaborators who don't want to be killed themselves, making cooperation better than elimination. When human allies cease to be necessary for the elimination of human rivals, we should expect (mass) human civilization to cease.
2. The conceit of humanity as a shared project is very useful for maintaining human cooperation. However, I think it encourages a blind spot when big que
|
0efdad66-a1e0-4398-bea8-b509195623fb
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Which is the article that describes drawing the map of the city?
I've searched a few related terms in Less Wrong, and I can't seem to find the article I'm looking for. It's an Eliezer post that describes a process of drawing a map of a city, and how you can't do it with the blinds closed, you have to actually go out and look at the city.
I've found the wiki definition page, and I've found articles that look like they refer to the original in the way Eliezer often refers to his earlier metaphors, but I could have sworn there was a shortish article that specifically introduced it.
(Is it possible that my map is simply wrong and the original parable was only a small portion of a larger post?)
|
d9f27eec-5456-4e4a-a65a-d14262b1c60e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
New York Meetup
I’ll be in New York City this Tuesday evening 7-11p, at 60 West 23rd Street, Apt. 904. Robin Hanson of Overcoming Bias and Eliezer Yudkowsky of LessWrong will also be there. Please join us!
|
9a3bc814-4cdb-4b0d-888e-97e7eabf4a1b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Inner Alignment via Superpowers
Produced As Part Of The SERI ML Alignment Theory Scholars Program 2022 Under John Wentworth
The Problem
When we train RL agents, they have many opportunities to see what makes actions useful (they have to locate obstacles, navigate around walls, navigate through narrow openings etc.) but they can only learn what they should actually care about from how the goal appears in training. When deployed, their capabilities often generalize just fine, but their goals don't generalize as intended. This is called goal misgeneralization.
Usually we conceptualize robustness as 1-dimensional, but to talk about goal misgeneralization, we need to use vlad_m's 2-dimensional model:
1D Robustness above; 2D Robustness below, with the Line of Doom in grey. Source.
“There’s an easy solution to this,” you might say. “Just present a whole boatload of environments where the goals vary along every axis, then they have to learn the right goal!”
“Our sweet summer child,” we respond, “if only it were so simple.” Remember, we need to scale this beyond simple gridworlds and Atari environments, where we can just change coin position and gem color; we’re going all the way to AGI (whether we like it or not). Can we really manually generate training data that teaches the AGI what human values are? We need a method that’ll be robust to huge distribution shifts, including things we aren't even able to think of. We need a method that’ll allow this AGI to find what humans value. We need superpowers!
Proposed Solution
Our solution is ‘giving the AI superpowers.’
Oh, that's not clear enough?
Alright then: during training, we occasionally let the RL agent access an expanded action-space. This lets it act without the restrictions of its current abilities. We also encourage it to explore states where it’s uncertain about whether it’ll get reward or not. The aim is that these ‘superpowers’ will let the AI itself narrow down what goals it ought to learn, so that we won’t need to be as certain we’ve covered ever
|
48f48840-7fd3-4d25-a540-02498b267f87
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Distillation Experiment: Chunk-Knitting
Summary
Here, I show the output of a protocol to break down information that exceeds typical working memory limits into chunks. The goal is to enhance understanding at first reading, both for the person breaking down the information and for the person reading the result. The original text, an explanation of how RNA interference works, can be found under "Text Of The Original Description." Readers who want to skip straight to the result should go to the section "Result of Chunk-Knitting Procedure." The goal is to produce a systematic way of building a composite understanding from complex atomic information. I call this procedure "Chunk-Knitting."
Note: I also expand some acronyms and bold key terms the first time they're introduced in the Chunk-Knitting output, but not in the original text.
Introduction and epistemic status
How can we deal with working memory limitations when presenting complex topics? Often, there are too many individual parts to remember easily. Yet the fascination of understanding how they fit together depends on first mastering these atomic parts. It is hard to maintain focus. It seems that learning may be bottlenecked by the student's ability to get through this period of internalizing atomic parts and their individual relationships. The payoff comes at the end, when the student understands how they all fit together into a powerful integrated whole. This should motivate us to find new ways to present information that minimize the memory and attention limitations that students face at the beginning of this process.
We know that spaced repetition, breaking information into chunks, and building connections between atomic concepts are all useful ways to improve memory and integrative understanding. Chunk-Knitting is an attempt to leverage these three tools to systematically create pedagogical text that is easier to understand the first time the reader encounters the ideas it is conveying. This post is a first attempt to apply Chunk-Knitting. The
|
584ea1f4-3c43-402e-9867-f1fbea5b8f25
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Reflections on a no content January 2020 experiment
In January 2020 I tried living a zero-content life. This was partly justified by the book “The Info Diet”, but mostly based on a philosophy that proposes:
> in a lifetime with limited hours, there is a choice to either consume or create. And I’d rather create than consume.
To be honest, the idea for me was born in response to a meme complaining that if you think art should be free, try going without art for a month. This sounded like a fun and interesting idea.
My main sources of content were:
* Facebook feed
* Books (paper and TTS audiobook)
* Some music
* Youtube videos
I decided not to get fussy about content that I was an active, reciprocal participant in creating. For example, a dance form and a conversation are two different types of content that I engage in. The difference would be between playing sport and watching sport (I’m allowed to play sport but not watch sport). I wanted to make an exception for live music, but I would not usually see live music anyway.
So how did I go?
On the 1st of January I rearranged my phone screen to make my content less accessible than my creation pathways. I don’t think I recorded anything, but in the first week I wrote 5000 words with all that time I had. In the silence I noticed my mind go quieter. In the time that I spent not reading, I did thinking. I drove places in silence. I started having phone calls with my friends. I just let myself go with whatever I wanted to go towards.
Without music coming in I started to get earworms appearing in my mind. Without content ideas coming from books I had to start generating my own, or applying my existing methods. Even making my own.
I didn’t realise how much an entrenched habit of reading (134 books in 2019) could limit my growth. I was doing a good thing by adding more information to the parts of me that needed it, and now that I’ve slowed down, I’m more balanced.
I probably only needed to go book-free for a day to get the benefit, but I committed and I wasn’t sure if ther
|
c0285c0e-bec2-421e-8252-1133550f6450
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Be careful with thought experiments
Thagard (2012) contains a nicely compact passage on thought experiments:
> Grisdale’s (2010) discussion of modern conceptions of water refutes a highly influential thought experiment that the meaning of water is largely a matter of reference to the world rather than mental representation. Putnam (1975) invited people to consider a planet, Twin Earth, that is a near duplicate of our own. The only difference is that on Twin Earth water is a more complicated substance XYZ rather than H2O. Water on Twin Earth is imagined to be indistinguishable from H2O, so people have the same mental representation of it. Nevertheless, according to Putnam, the meaning of the concept water on Twin Earth is different because it refers to XYZ rather than H2O. Putnam’s famous conclusion is that “meaning just ain’t in the head.”
>
> The apparent conceivability of Twin Earth as identical to Earth except for the different constitution of water depends on ignorance of chemistry. As Grisdale (2010) documents, even a slight change in the chemical constitution of water produces dramatic changes in its effects. If normal hydrogen is replaced by different isotopes, deuterium or tritium, the water molecule markedly changes its chemical properties. Life would be impossible if H2O were replaced by heavy water, D2O or T2O; and compounds made of elements different from hydrogen and oxygen would be even more different in their properties. Hence Putnam’s thought experiment is scientifically incoherent: If water were not H2O, Twin Earth would not be at all like Earth. [See also Universal Fire. --Luke]
>
> This incoherence should serve as a warning to philosophers who try to base theories on thought experiments, a practice I have criticized in relation to concepts of mind (Thagard, 2010a, ch. 2). Some philosophers have thought that the nonmaterial nature of consciousness is shown by their ability to imagine beings (zombies) who are physically just like people but who lack consciousness. It is entirely li
|
dda5a779-70ff-4d65-8f22-0d43de8f661d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What should you change in response to an "emergency"? And AI risk
This post has been recorded as part of the LessWrong Curated Podcast, and can be listened to on Spotify, Apple Podcasts, and Libsyn.
----------------------------------------
Related to: Slack gives you the ability to notice/reflect on subtle things
Epistemic status: A possibly annoying mixture of straightforward reasoning and hard-to-justify personal opinions.
It is often stated (with some justification, IMO) that AI risk is an “emergency.” Various people have explained to me that they put various parts of their normal life’s functioning on hold on account of AI being an “emergency.” In the interest of people doing this sanely and not confusedly, I’d like to take a step back and seek principles around what kinds of changes a person might want to make in an “emergency” of different sorts.
Principle 1: It matters what time-scale the emergency is on
There are plenty of ways we can temporarily increase productivity on some narrow task or other, at the cost of our longer-term resources. For example:
* Skipping meals
* Skipping sleep
* Ceasing to clean the house or to exercise
* Accumulating credit card debt
* Calling in favors from friends
* Skipping leisure time
If I would strongly prefer to address some situation x before time t, I may sometimes want to "borrow from the future" like this. But the time-scales matter. If I’m trying to address x as much as possible in the next five hours, skipping sleep may make sense. If I’m trying to address x as much as possible over the next year, I’ll probably do better to get my usual amount of sleep tonight. Something similar (with different, resource-specific timescales) will hold for other resources.
So, in short time-scale emergencies, it’ll often make sense to suspend a great deal of normal functioning for a short period of time. In longer time-scale emergencies, your life should mostly look closer to normal.
Principle 2: It matters how much we know how to address the emergency
Much of what we do in da
|
f3938f55-b711-4400-ab7d-991af04115a1
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Links for Feb 2021
Note: some of the formatting was lost, see substack for full experience
Favorites:
https://amp.cnn.com/cnn/2021/02/19/health/what-is-simp-teen-slang-wellness/index.html I’ve finally gotten a third article in my collection of classic Gen Z journalism. For reference, here are the first two articles in my collection: https://www.theverge.com/2018/9/23/17882996/teens-electric-scooter-age-requirement-bird-lime and https://www.theatlantic.com/technology/archive/2019/06/why-teens-try-airdrop-you-memes-concerts/591064/. Every sentence in all of these articles is quotable so just read them all.
This is the first time I’ve been personally scared of AI now that it’s coming after my own grift: trollish thought experiments (click through to see all 10)
Tomer Ullman @TomerUllman
I had an AI (GPT3) generate 10 "thought experiments" (based on classic ones as input), and asked @WhiteBoardG to sketch them.
https://beta.nsf.gov/science-matters/scientists-break-through-wall-sleep-untapped-world-dreams “researchers achieve two-way communication with lucidly dreaming people” If this checks out, it could be one of the biggest science stories of the year.
Dr Adam Rutherford @AdamRutherford
Well this is simply the most astonishing discovery that I can recall. A bacteria that photosynthesises from INFRARED LIGHT FROM A DEEP SEA HYDROTHERMAL VENT. pnas.org/content/102/26…
A note on the physics section: I don’t read all the links claiming a revolutionary breakthrough. I’m posting them because I think it’s interesting that there seem to be a few dozen revolutionary breakthroughs in physics each month; plus, eventually, when we figure out which one of them was actually a breakthrough, I’ll be able to point back at it and look prescient. I’ve put the links I actually find interesting at the beginning of the section.
Physics:
https://arxiv.org/abs/2102.01522 Robin Hanson is out with a paper o
|
e3ffd64c-812f-448d-aef9-8c3a512787d8
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[Intuitive self-models] 7. Hearing Voices, and Other Hallucinations
7.1 Post summary / Table of contents
Part of the Intuitive Self-Models series.
The main thrust of this post is an opinionated discussion of the book The Origin of Consciousness in the Breakdown of the Bicameral Mind by Julian Jaynes (1976) in §7.4, and then some discussion of how hallucinations and delusions arise from schizophrenia and mania (§7.5), and from psychotic depression and BPD (§7.6).
But first—what exactly do hallucinations have to do with intuitive self-models? A whole lot, it turns out!
We’ve seen throughout this series that different states of consciousness can have overlapping mechanisms. Deep trance states (Post 4) mechanistically have a lot in common with Dissociative Identity Disorder (Post 5). Lucid trance states (Post 4) mechanistically have a lot in common with tulpas (§4.4.1.2), and tulpas in turn shade into the kinds of hallucinations that I’ll be discussing in this post. Indeed, if you’ve been paying close attention to the series so far, you might even be feeling some crumbling of the seemingly bedrock-deep wall separating everyday free will (where decisions seem to be caused by the self, or more specifically, the “homunculus” of Post 3) from trance (where decisions seem to be caused by a spirit or hypnotist). After all, an imagined spirit does not veridically (§1.3.2) correspond to anything objective in the real world—but then, neither does the homunculus (§3.6)!
I’ll extend that theme in Section 7.2 by arguing against a bright-line distinction between everyday inner speech and imagination (which seem to be caused by the homunculus) versus hallucinations (which seem to appear spontaneously, or perhaps to be caused by God, etc.). This is an important distinction “in the map”, but I’ll argue that it doesn’t always correspond to an important distinction “in the territory” (§1.3.2).
Section 7.3 then offers a framework, rooted in probabilistic inference (§1.2), for thinking about how culture and an individual’s psychology can each influence
|
107d68c9-da31-460c-9177-caced35cbedf
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
Stackelberg Games and Cooperative Commitment: My Thoughts and Reflections on a 2-Month Research Project
*Over the past two months I have been lucky enough to carry out a part-time research project as part of* [*this*](https://forum.effectivealtruism.org/posts/dKgWZ8GMNkXfRwjqH/seeking-social-science-students-collaborators-interested-in) *program with Vael Gates. This post serves both as a short summary of what I have found, my views on these findings, as well as a reflection on the project process as a whole. Many thanks to Vael for their support and guidance with this project.*
As the title suggests, in this project I focused on the application of dynamic game theory, in particular Stackelberg games, to the problem of cooperative commitment in multi-agent scenarios. This was motivated and inspired by the section on Commitment in Dafoe *et al.*'s *'*[*Open Problems in Cooperative AI*](https://www.cooperativeai.com/open-problems)*'*.
Summary of the literature
=========================
Open Problems in Cooperative AI
-------------------------------
This is a broad article that covers a lot of ground in its attempt to synthesise relevant research topics from computer science, economics, cognitive science, etc. into a coherent description of the topic of cooperative AI, that is, *'AI research trying to help individuals, humans and machines, find ways to improve their joint welfare.'* As part of this aim, Dafoe *et al.* classify four *'cooperative opportunities'* as a way of analysing cooperative scenarios in terms of their core characteristics: whether the individuals have common or conflicting interests, whether the individuals are humans, machines, or organisations, whether the scenario is representative of the *'individual perspective'* or *'planner perspective'*, and the *'scope'* of the scenario. Furthermore, four crucial *'cooperative capabilities'* are also identified for their relevance for promoting cooperation. These are: understanding, communication, commitment, and institutions.
I chose to explore the third of these capabilities, *'commitment'*, which Dafoe *et al.* define as the ability to overcome *'commitment problems'*, that is, *'the inability to make credible threats or promises'*. They point to cases in the social sciences (such as war and conflict) where it has been argued that the inability to cooperate to mutual benefit stems from the inability to make credible commitments. Society has come up with a number of solutions (ranging from hard to soft) to such commitment problems including contracts (hard) and norms and reputation systems (soft).
Dafoe *et al.* also make explicit the connection to game theory through an example relating to the Prisoner's Dilemma. If one player could make a credible, observable commitment to play C *if and only if* the opponent does, then the dilemma is solved; it is strictly better for the opponent to play C over D.
| | C | D |
| --- | --- | --- |
| C | 1, 1 | -0.5, 1.5 |
| D | 1.5, -0.5 | 0, 0 |
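To spell out the arithmetic with the payoffs above (my own illustrative check, not something from the paper): under the commitment, the opponent choosing C leads to mutual cooperation and a payoff of 1, while choosing D leads to mutual defection and a payoff of 0, so cooperating is strictly better. A few lines of Python make the same point:

```python
# Illustrative check of the conditional-commitment claim; the payoffs come
# from the table above, everything else is my own sketch.
column_payoff = {("C", "C"): 1.0, ("C", "D"): 1.5,    # (row move, column move) -> column player's payoff
                 ("D", "C"): -0.5, ("D", "D"): 0.0}

def column_value(column_move):
    row_move = "C" if column_move == "C" else "D"     # row's commitment: play C iff column plays C
    return column_payoff[(row_move, column_move)]

print(column_value("C"), column_value("D"))           # 1.0 vs 0.0 -> cooperating strictly dominates
```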
In the main body of this section on commitment, Dafoe *et al.* discuss and classify solutions - theoretical ways of solving commitment problems - as well as commitment *devices* - methods through which solutions can be implemented in practice.
Commitment solutions can be classified according to two binary classifications; whether the solution is unilateral or multilateral, and whether the solution is conditional on other factors or not. This gives four categories:
* Unilateral unconditional solutions are those whereby an actor is able to individually and irreversibly commit to a certain course of action. The simplest way of doing this is just by taking a hard-to-reverse action.
* Unilateral conditional solutions are those where a single actor commits to a course of action *conditional* on internal or external factors such as the actions of an adversary or the state of the wider environment. Conditional solutions can generally be more powerful than unconditional solutions, but this comes at a price of being harder to implement.
* Multilateral unconditional solutions are those whereby a group of actors agree to strictly commit to certain actions. The success of commitment is dependent on the participation of all parties.
* Finally, multilateral conditional solutions (as one would expect by now) are those whereby a group of actors make a commitment that depends on some external state.
Finally, there is the discussion of devices for implementing solutions. These range on a continuous scale of 'hardness' depending on how strictly commitment is enforced. At one extreme, the 'hardest' devices make reneging on a commitment physically or logically impossible to carry out, whereas much weaker devices could come in the form of social norms, whereby not following through on a commitment may not have explicit costs or consequences but could otherwise harm the agent through, e.g., a loss of reputation that makes collaboration more difficult in the future. In between these two extremes, one could find devices such as legal contracts that aim to impose costs on breaking agreements.
Stackelberg Games
-----------------
A [Stackelberg game](https://en.wikipedia.org/wiki/Stackelberg_competition) (or Stackelberg competition) is a form of dynamic game introduced by Heinrich Freiherr von Stackelberg in 1934. In the classical formulation, a two-player game is played between a *Leader* and a *Follower* (though extensions have been made with multiple followers), whereby the Leader plays first, selecting her strategy with the knowledge that the Follower will be able to observe her choice. The Follower then selects her strategy with knowledge of the Leader's. As an example of how such a dynamic game setup can lead to differing play compared to a standard simultaneous-move game, consider the game given below (from [Conitzer & Sandholm (2006)](https://dl.acm.org/doi/10.1145/1134707.1134717)).
| | C | D |
| --- | --- | --- |
| A | 2, 1 | 4, 0 |
| B | 1, 0 | 3, 2 |
Note that, for the row player, playing A strictly dominates B, so in a simultaneous-move game she would always play A; the column player would respond with C, giving the row player a payoff of 2. However, if we now take the row player to be the Leader in a Stackelberg game, committing to a mixed strategy does better. If she commits to playing A and B each with probability 0.5, the Follower maximises her reward by always playing D, giving the Leader an expected reward of 3.5; with this particular payoff matrix the Leader can in fact do slightly better still, committing to A with probability 2/3 for an expected reward of about 3.67 (assuming the Follower breaks ties in her favour). It has been shown that this ability to select a strategy before her opponent results in the Leader of a Stackelberg game receiving *at least* as much payoff as if the same game were played simultaneously. In this sense, the Leader of a Stackelberg game can be considered to have a tangible first-mover advantage.
More recently, attention has been paid to the practical problem of computing optimal strategies in Stackelberg games. In 2006, [Conitzer and Sandholm](https://dl.acm.org/doi/10.1145/1134707.1134717) presented a number of results relating to such problems. In particular, they presented efficient algorithms for computing optimal strategies in a number of simple game formulations: normal-form (non-Bayesian), pure-strategy games with any number of players; Bayesian, pure-strategy games with only two players, where the Leader has only one type; and normal-form Bayesian games with only two players. For other game setups (varying whether the game is normal-form or Bayesian, the number of players, and whether or not mixed strategies are considered), it was shown that computing an optimal strategy is in fact NP-hard.
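To make the mixed-strategy commitment concrete, here is a minimal sketch of the 'one LP per follower response' idea for the two-player normal-form case, applied to the example game above. The function name, the use of scipy, and the assumption that the Follower breaks ties in the Leader's favour are my own choices rather than code from the paper:

```python
# A minimal sketch of computing an optimal mixed strategy to commit to in a
# two-player normal-form game, by solving one small LP per follower response.
import numpy as np
from scipy.optimize import linprog

def optimal_commitment(leader, follower):
    """Return (value, mixed strategy) for a Leader whose commitment is observed
    by a best-responding Follower (ties broken in the Leader's favour)."""
    m, n = leader.shape
    best_value, best_x = -np.inf, None
    for j in range(n):                                   # candidate follower response j
        c = -leader[:, j]                                # maximise Leader's payoff against column j
        A_ub = (follower - follower[:, [j]]).T           # j must be a best response:
        b_ub = np.zeros(n)                               # payoff(j') - payoff(j) <= 0 for all j'
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      A_eq=np.ones((1, m)), b_eq=[1.0])  # probabilities sum to 1 (and are >= 0 by default)
        if res.success and -res.fun > best_value:
            best_value, best_x = -res.fun, res.x
    return best_value, best_x

# Leader = row player (A, B); Follower = column player (C, D), as in the table above.
L = np.array([[2.0, 4.0], [1.0, 3.0]])
F = np.array([[1.0, 0.0], [0.0, 2.0]])
print(optimal_commitment(L, F))   # ~3.67 with mix ~[2/3, 1/3]; a 50/50 commitment would give 3.5
```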
Recent work applying Stackelberg games to AI and robotics
---------------------------------------------------------
While there is a large corpus of work applying Stackelberg games to security settings, I aimed to focus more narrowly on applications to the fields of AI and robotics. Below I summarise some recent papers that address this intersection.
In *'*[*Stackelberg Punishment and Bully-Proofing Autonomous Vehicles*](https://arxiv.org/abs/1908.08641)*'*, Cooper *et al.* introduce the notion of *Stackelberg Punishment*. They note that punishment in repeated games, whereby one player may intentionally select a strategy that reduces the payoff for their opponent at the cost of a reduction in their own payoff, can lead to mutually improved outcomes if it incentivises the opponent to pick an action that is beneficial to both players. *Stackelberg punishment* is then defined as a punishment strategy that sufficiently punishes the opponent according to some predefined target level, whilst minimising the cost to the punisher. They then adapt established algorithms for computing Stackelberg equilibria to instead compute a Stackelberg punishment, and demonstrate its effectiveness on a model driving scenario. In this toy problem, two vehicles are approaching a narrow bridge from opposite directions, one autonomous, one controlled by a human subject. Since the bridge is only wide enough for one vehicle, one must let the other pass first, following the usual right-of-way rule that the car closer to the bridge has priority. However, the vehicle that is farther from the bridge is able to ‘bully’ the other by continuing onto the bridge, despite not having priority, forcing the opponent to yield. In cases where the human subject ‘bullies’ the AI, the game was set up so that in the following round the AI would punish the human according to the optimal Stackelberg punishment. This experiment was carried out with the intention of exploring ways to prevent autonomous vehicles from being overly cautious when interacting with human drivers.
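As a rough illustration of that definition (my own toy formalisation with made-up payoffs, not the authors' algorithm), a Stackelberg punishment can be computed by searching over the punisher's mixed strategies for the cheapest one that still holds the opponent's best-response payoff at or below a chosen target level:

```python
# Toy sketch of a "Stackelberg punishment": hold the opponent's best-response
# payoff at or below a target, while costing the punisher as little as possible.
# The 2x2 payoff matrices and the target value are made up for illustration.
import numpy as np

punisher = np.array([[3.0, 0.0],
                     [1.0, 1.0]])     # punisher's payoffs (rows = punisher, cols = opponent)
opponent = np.array([[3.0, 4.0],
                     [0.0, 1.0]])     # opponent's payoffs
target = 2.0                          # opponent must not get more than this

best = None
for p in np.linspace(0.0, 1.0, 1001):             # probability the punisher plays row 0
    mix = np.array([p, 1.0 - p])
    opp_payoffs = mix @ opponent
    if opp_payoffs.max() <= target:               # punishment level met against a best-responding opponent
        own_value = (mix @ punisher)[opp_payoffs.argmax()]
        if best is None or own_value > best[0]:
            best = (own_value, p)
print(best)   # least costly mixed strategy that still punishes enough
```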
Secondly, in *'*[*Cooperative Control of Mobile Robots with Stackelberg Learning*](https://ieeexplore.ieee.org/document/9341376)*'*, Koh *et al.* propose an algorithm, which they name *Stackelberg Learning in Cooperative Control* (SLiCC), that promotes the learning of cooperative behaviour between a pair of robots tasked with carrying an item that is too large for either of them to transport individually. To achieve this cooperative behaviour, they model the task as a *Partially Observable Stochastic Game* (POSG), which is decomposed into a set of Stackelberg games at each timestep. Deep reinforcement learning is then used to learn the payoff matrices for these games.
In *'*[*Effective Solutions for Real-World Stackelberg Games: When Agents Must Deal with Human Uncertainties*](https://dl.acm.org/doi/10.5555/1558013.1558063)*'*, Pita *et al.* address the implicit assumptions in Stackelberg games that the Follower is a rational agent with complete information, and thus reliably decides a strategy that maximises her payoff. They note that in practice, human players are unlikely to act in this way either due to bounded rationality or limitations in observing the Leader's strategy and suggest extensions to a given Stackelberg equilibrium algorithm ([*DOBSS*](https://dl.acm.org/doi/10.5555/1402298.1402348), introduced by Paruchuri *et al*.) that take into account these Follower characteristics. They show not only that their extended algorithms perform better than the baseline algorithm in cases where the follower does play suboptimally, but also that they achieve this in comparable runtimes.
My thoughts
===========
Differing notions of 'commitment'
---------------------------------
One thing that stood out to me when exploring the above papers (and the wider Stackelberg games literature) is that the term 'commitment' is used in a slightly different way than by Dafoe *et al.* in *'Cooperative AI'*. In Stackelberg games, 'commitment' refers to the fact that the Leader selects a strategy before the Follower, thus 'committing' to this strategy, *unconditional* on the Follower's choice. While this does correspond to a 'unilateral, unconditional' commitment in Dafoe's terminology, I found the formulation of a Stackelberg game less enlightening than expected in terms of 'solving' the commitment problem of ensuring that an agent does indeed carry out the strategy that it commits to. As an example, reconsider the example Stackelberg game presented above. We saw that the Leader does best by committing to a mixed strategy over A and B that induces the Follower to play D. However, there is still a commitment problem here. Once the Follower selects her strategy of always playing D in order to maximise her payoff, what is stopping the Leader from reneging on her commitment? At that point in the game it is even more beneficial for the Leader to *always* play A, ensuring a payoff of 4. As far as I have seen, the consideration of how to ensure that the Leader does indeed play her chosen strategy in such cases is not addressed in the Stackelberg game literature.
Focus on 'human-follows-agent'
------------------------------
Secondly, this, admittedly rather niche, area of research seems to go against the grain by considering situations in which the agent is given the role of a leader. In all examples I saw, the agent plays the role of Stackelberg Leader whereas a human or other agent is the Follower, giving the agent a strategic advantage. However, when viewed at a sufficiently high level, most of the other human-AI interaction literature focuses on how to give an agent the skills to follow a responsible human - a very basic example is a human-specified reward function and the challenge of ensuring that it produces the desired behaviour in an RL agent. Inverse reinforcement learning is perhaps an even clearer example, where the goal is to have an agent learn from observation of desired behaviour, most commonly from a human demonstrator. For me, having an agent follow and respond to a human overseer or instructor feels like a much safer bet from an intuitive alignment perspective.
Is game theory that applicable?
-------------------------------
Finally, despite the advances presented in the papers discussed above, I find myself questioning the magnitude of game theory's utility in addressing issues such as AI cooperation and alignment. Game theory takes player payoffs as axiomatic, using these given facts to explore courses of action and how desirable they are for each player. However, if we focus on questions of human-agent cooperation we eventually have to face the problem of encoding complex human utilities into the scalar values that game theory works with. It is not inconceivable that this is a harder problem than those dealt with within game theory, potentially drastically reducing game theory's potency in solving problems of human-agent cooperation.
Furthermore, I have developed the intuition that game theory is ill-equipped to deal with more complex cooperative scenarios, and may quickly become unwieldy due to its rather rigid structure. This is perhaps backed up by Conitzer and Sandholm's computational hardness results discussed above. In the case of Stackelberg games in particular, computation of an optimal strategy is NP-hard for all but a few of the simplest game formulations, with restrictions on the number of players, whether the game is normal-form or Bayesian, and whether mixed strategies are considered. I would like to stress that this is still very much an intuition, and I unfortunately have not had enough time to fully explore these questions.
Reflection on the project
=========================
In addition to giving me the opportunity and incentive to dive into and learn about topics outside my current focus, completing this project has taught me plenty about my own work practices as well as some of the challenges of carrying out individual research.
Finding a specific enough question is hard
------------------------------------------
This project went through quite a few iterations, having begun life as focusing on the comparison of near- and long-term problems and approaches in AI safety. Initially I followed a trail of thought that led me to explore topics in computational social choice theory and its applications to AI. While an interesting subject in its own right, this felt as though it was not going in a very productive direction, so I changed my focus to the Cooperative AI paper. Over time, this was narrowed down to the commitment problem, where the aim of the project would be to conduct a literature review of commitment in AI, using the section in the Cooperative AI paper as a launchpad. After realising that this was still a giant body of literature, I finally zoomed in further on Stackelberg games.
This initial exploration, changing of direction, and delayed narrowing down to a realistically sized focus area took up quite a bit of time, and meant that I didn't have as much remaining time to read the relevant literature. I imagine that this 'problem formulation' period of research is something I still underestimate, but I nonetheless feel that I should be able to improve my ability to find tractable questions. Hopefully next time I do a similar project, I will be able to find a specific research question more efficiently, leaving more time for actually exploring and answering it.
It's ok to present negative results
-----------------------------------
A final thought on this project is that it has allowed me to become more comfortable with accepting that a research project being successful is not the same thing as confirming your hypothesis or having a breakthrough revelation. In this case, having explored existing and potential connections between Stackelberg games and the commitment problem, I found myself putting less weight on my initial intuition that there would be a good amount of scope for useful connections between these topics. In spite of this, I would say that this still counts as a success. Oftentimes, starting with an intuition and weakening it through further research can be as useful and instructive as strengthening an intuition through research, and this is a notion that I feel I need to embrace further.
|
3592dbff-41b9-44f3-b476-1e5759b47953
|
StampyAI/alignment-research-dataset/special_docs
|
Other
|
AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk
Abstract
--------
This essay argues that a new subfield of AI governance should be explored that examines the policymaking process and its implications for AI governance. A growing number of researchers have begun working on the question of how to mitigate the catastrophic risks of transformative artificial intelligence, including what policies states should adopt. However, this essay identifies a preceding, meta-level problem of how the space of possible policies is affected by the politics and administrative mechanisms of how those policies are created and implemented. This creates a new set of key considerations for the field of AI governance and should influence the actions of future policymakers. This essay examines some of the theories of the policymaking process, how they compare to current work in AI governance, and their implications for the field at large, and ends by identifying areas for future research.
Keywords: policymaking process; AI risk; typologies of AI policy; AI governance
1. Introduction
----------------
Artificial intelligence, especially artificial general intelligence (AGI), has the ability to dramatically impact the future of humanity [[1](#B1-BDCC-03-00026)]. Notable researchers, such as Bostrom (2014), have expressed concern that advanced forms of artificial intelligence, if not aligned to humans values and wellbeing, could be potentially disastrous and pose an existential threat to our civilization [[2](#B2-BDCC-03-00026)]. The two main branches of research on risk from advanced AI are AI safety, which seeks to ensure that advanced AI is engineered in such a way that it will not pose a threat; and AI governance, which focuses on political and social dynamics (AI macrostrategy) and forecasting timelines for AI development [[3](#B3-BDCC-03-00026)]. Issues that AI governance looks at include arms race dynamics, social and economic inequality, public perceptions, issues in surveillance, and more.There has been a modest amount of work on developing policy solutions to AI risk, with a recent literature review by Baum (2017) [[4](#B4-BDCC-03-00026)] and Everitt (2016) [[5](#B5-BDCC-03-00026)] covering most of it. Some authors have focused on the development of AGI, with proposed solutions ranging from Joy (2000) [[6](#B6-BDCC-03-00026)] who calls for a complete moratorium on AGI research, to Hibbard (2002) [[7](#B7-BDCC-03-00026)] and Hughes (2007) [[8](#B8-BDCC-03-00026)], who advocate for regulatory regimes to prevent the emergence of harmful AGI, to McGinnis (2010), who advocates for the US to steeply accelerate friendly AGI research [[9](#B9-BDCC-03-00026)]. Everitt et al. (2017) [[5](#B5-BDCC-03-00026)] suggests that there should be an increase in AI safety funding. Scherer (2016) [[10](#B10-BDCC-03-00026)], however, at least in the context of narrow AI, argues that tort law and the existing legal structures, along with the concentration of AI R&D in large visible corporations like Google, will provide some incentives for the safe development of AI. Guihot et al. (2017) [[11](#B11-BDCC-03-00026)] also notes that attempts to future-proof laws tend to fail, and pre-emptive bans and regulation tend to hurt the long-term health of the field, instead arguing for a soft-law approach. Other authors have focused on the community of researchers, with Baum (2017) [[12](#B12-BDCC-03-00026)] promoting a social psychology approach to promote community self-regulation and activism, and Yampolskiy and Fox (2013) [[13](#B13-BDCC-03-00026)] advocating for review boards at universities and other research organizations.Some authors have advocated for an international approach to resolving AI risk. Erdelyi and Goldsmith (2018) [[14](#B14-BDCC-03-00026)] advocated for an international soft-law regime that would serve as a “international forum for discussion and engage in international standard setting activities”. Erdelyi and Goldsmith’s proposal, however, is not targeted towards AGI risk, although they could scale up to AGI. Wilson (2013) [[15](#B15-BDCC-03-00026)] and Bostrom (2014) [[2](#B2-BDCC-03-00026)], on the other hand, call for some form of international agreement or control on AGI R&D, with the former advocating specifically for a treaty.These approaches are necessary given some of the risks, including states pursuing AGI for unprecedented military and economic strength with destabilizing effects (Shulman 2009) [[16](#B16-BDCC-03-00026)], and the concentration of wealth and political influence in large corporations (Goertzel 2017) [[17](#B17-BDCC-03-00026)]. 
Questions regarding whether or not AGI R&D should be open sourced or not have been explored by Goertzel (2017) [[17](#B17-BDCC-03-00026)] and Bostrom (2017) [[18](#B18-BDCC-03-00026)]. Shulman (2009) [[16](#B16-BDCC-03-00026)] and Dewey (2015) [[19](#B19-BDCC-03-00026)] follow a different approach and advocate for a global surveillance regime to monitor for rogue AGI projects, with Goertzel (2012) [[20](#B20-BDCC-03-00026)] suggesting that a limited form of AGI could do this.As far as current and future research goes, the Future of Humanity Institute has developed an extensive research agenda [[3](#B3-BDCC-03-00026)] for AI governance, with three main research areas: Technical landscape, which seeks to understand what artificial intelligence can do and its limits; AI politics, which looks at the political dynamics between firms, governments, publics, etc.; and ideal governance, which looks at possible ways and arrangements for stakeholders to cooperate. This research agenda highlights key issues such as security challenges, international political dynamics and distribution of wealth, and arms race dynamics. Other researchers have published reports dealing with issues such as dual use, similarity, and possible interactions with the cybersecurity community [[21](#B21-BDCC-03-00026)] the role and limits of principles for AI ethics [[22](#B22-BDCC-03-00026)], justice and equity [[23](#B23-BDCC-03-00026)], and AGI R&D community norms [[5](#B5-BDCC-03-00026)].Thus far, much of the literature on AI risk has discussed policy issues, but few studies have talked about how policies are made or how the dynamics of the policymaking process affect their work. Calo (2017) [[23](#B23-BDCC-03-00026)] touches upon the problem, noting that there is a lack of institutional expertise, policy tools, and flawed mental models of what AI is, which plague governments’ abilities to regulate AI. Scherer (2016) [[10](#B10-BDCC-03-00026)] cites certain aspects of the technology itself, such as its ability to be created without special equipment, as a hindrance to the ability to regulate it. Everitt et al. (2017) [[5](#B5-BDCC-03-00026)] also briefly discusses policy and political dynamics in the context of AGI researchers, suggesting that AGI researchers should work with other organizations to mitigate the negative dynamics of framing AGI development as an arms race [[24](#B24-BDCC-03-00026)]. Finally, the Future of Humanity Institute’s research agenda for AI governance [[3](#B3-BDCC-03-00026)] touches on policymaking in a few ways, noting that public opinion can have major impacts on technology policy and governance schemes can be subject to mission drift and asking how to facilitate the transition from the present state of affairs to our ideal vision for the future.This paper continues along the lines of facilitating the transition from the present state to “our ideal vision” by exploring the missing discussion on the role of policymaking in AI governance. Research thus far has largely focused on what problems are out there and what should be done to fix them. However, this paper does not only argue that proposal implementation that takes into account the features of the ‘policymaking cycle’ may be vital to success in reducing AI risk but that this model actually has massive implications for the research field as a whole. 
Proposals will be much more effective if they are informed by an understanding of the political and administrative considerations of consensus-building and implementation and could make the difference between making an impact or none at all.The goal of this paper is to attempt to create a clearer launching point for discussions on the key considerations of the policymaking process for AI governance and the political considerations underpinning policy solutions for AI risk. The policymaking process includes: Problem identification/agenda setting, policy formulation, policy adoption, implementation, and evaluation. Each step of the policymaking process will have different aspects that are critical for the creation of public policies that are able to effectively reduce AI risk. Each section covers a brief overview of the literature, assesses its implications for the greater AI governance field, and identifies different points where further research is needed. The papers we selected are the primary sources of these different theories of the policymaking process.The first section maps out and defines terms in the field of AI governance, to give readers a better understanding of how our paper contributes to the way AI governance is approached. We also created a typology for AI risk policies, to provide an understanding as to how AI governance has implications in a diverse range of policy communities and how that interplays with strategic considerations. The next section goes through each step of the policymaking cycle, with a basic overview of some of the literature and discussing its implications for AI governance. It should be noted that the literature covered in each field is not extensive, and further research may be necessary. The last sections cover some of the key implications and limitations. 2. Terms and Definitions
-------------------------
On a broad level, the question of mitigating AI risk, or risks that stem from the development and use of artificial intelligence (such as global catastrophic risks from misaligned AI or military instability from adopting new types of weapons), is broken down into AI technical safety and AI governance. AI technical safety focuses on solving computer science problems around issues like misalignment and the control problem for AGI [[2](#B2-BDCC-03-00026)]. AI governance, on the other hand, studies how humanity can best navigate the transition to advanced AI systems [[3](#B3-BDCC-03-00026)]. This would include the political, military, economic, governance, and ethical considerations and aspects of the problem that advanced AI has on society.AI governance can be further broken down into other components, namely the technical landscape (how technical developments depends on inputs and constraints and affects rates or domains of capability improvement), ideal governance (what would we do ideally if we could cooperate), and AI politics (how AI will affect domestic politics, political economy, international relations, etc.) [[3](#B3-BDCC-03-00026)]. From these research areas, the problems and solutions necessary to discuss AI policy can be defined. This paper, however, refers to this as AI risk policy to differentiate policies intended to reduce catastrophic risk to society versus policies that apply to AI in any other circumstances.Policies, however, must be implemented into the legal statutes of government in order to work. Flynn (2017) [[25](#B25-BDCC-03-00026)], in the blog post that defines ‘AI strategy’ [[3](#B3-BDCC-03-00026)], also defines ‘AI policy implementation’, which is carrying out the activities necessary to safely navigate the transition to advanced AI systems. This definition implies it is action-oriented work done in government, policy, lobbying, funding, etc. As mentioned in the endnotes of Flynn (2017), however, there is an implicit gap between AI strategy (governance) research and policy implementation, with no AI policy research that identifies mechanisms for actualizing change.However, there is another gap that this paper intends to address, which is that the processes that create and implement policies (the policymaking process) often either distort the original policy, fall short of, or even work counter to the intended outcome, or render certain policy options unactionable. Similarly, The AI governance: A Research Agenda report has neither this consideration nor a definition of policy implementation. This paper intends to put forth a definition of AI policymaking strategy to fill this gap, which is defined as:AI Policymaking Strategy: A research field that analyzes the policymaking process and draws implications for policy design, advocacy, organizational strategy, and AI governance as a whole.This goes further than the concern listed in the endnotes and also develops an upstream approach to AI governance, where work in implementation in turn feeds back and can provide new insights to AI governance research.AI policymaking strategy would fit under the definition of AI governance and would be its own subfield in the same way technical landscape is and would help to clarify questions and considerations in the other subfields. 
AI politics and ideal governance seem to ask questions about what risks humanity faces and what it ought to do about them, approaching the world as if from above and making corrections, whereas policymaking strategy asks questions about how and what can be done, given both present and future circumstances, and the methods to do so at hand. They approach the world as agents who individually influence the trajectory of the world. These two groups, when they work together, should ideally converge on a policy program that both works and is pragmatic—constituting of policies that both aim at the correct goals and can actually get there.An example of this would be the proposed solution by Goertzel (2012) [[20](#B20-BDCC-03-00026)] of creating a surveillance artificial narrow intelligence that monitors the world to prevent the development of superintelligence. Let us say that Policy X is written to do this. However, Policy X, like all other policies, is not simply just a solution to the problem but a set of intended actions and procedures taken by the government that must first be passed by government [[26](#B26-BDCC-03-00026)]. This begs three questions: Can this policy realistically be implemented by government? How do policymakers ensure that Policy X results in the intended outputs and outcomes? And how can policymakers create policy and advocacy strategies to increase the chances of both of these happening? For example, while Policy X is intended to install a surveillance apparatus to prevent superintelligence, would Policy X still have that output and outcome after going through the legislature and executive branch? Is there a chance over time that it would result in mission creep? Policymakers can also develop strategies to ensure that Policy X has its intended outcomes, such as oversight mechanisms within the policy itself. Policymakers can go a step further and ask how the policymaking process itself creates implications for the AI governance field. For example, are there restrictions within the policymaking process that impact timelines for reducing risk, such as how fast governments can act or create new laws? Could some form of upstream innovation be acheived where the policymaking process inspires or generates new ideas for AI governance [[27](#B27-BDCC-03-00026)]? 3. Typologies of AI Policy
---------------------------
Before this paper can delve into the policymaking process, AI policy needs to be further refined to understand what kind of policies are being made. The point of this section is to show that AI risk policies are not monolithic, but rather there are multiple approaches to help achieve the same goal, and each set of these policies is going to have with it a different set of political difficulties. It also begs the question in terms of AI governance as a whole as to which sets of policies should be implemented and when, and which policies should be considered relevant to AI risk. In the same way that Bostrom (2014) [[2](#B2-BDCC-03-00026)] argues that there may be a preferred order of technological development, there is a similar analog with AI risk policies where there is a strategic order to policies that should be attempted to be implemented, whether it is because their political-capital cost is lower, the cost of failure is lower, or because it helps with future efforts to implement policies (such as the creation of an advisory body).A typology of AI policies already has some previous explorative work to build on. Brundage (2016) [[28](#B28-BDCC-03-00026)] proposed the idea of De Facto AI policies. These are policies that already exist and are relevant to AI. These are further broken down into direct, indirect, and relevant policies. Direct policies are policies that specifically target AI, such as regulations on self-driving cars. Indirect policies are policies that do not specifically target AI but generally impact the development and diffusion of technologies (including AI), such as intellectual property laws and tort law. Relevant policies do not immediately impact AI but are still worth considering because of their impact, such as education policy or the use of electronic medical records.Brundage (2016) [[27](#B27-BDCC-03-00026)] in this paper, however, does not talk about AI risk policy but rather existing policies around AI as a whole. However, the classification used in this paper is useful overall and can be extended into AI risk policy. Instead of whether or not it directly or indirectly affects AI, AI risk policy can be classified into whether or not it directly or indirectly aims at reducing AI risk. Direct AI risk policies would explicitly govern the use, development, deployment, etc. of AI to reduce risk. Examples of direct AI risk policy could include funding for AI safety research, rules for the development of AGI, international agreements on AI, etc. Indirect AI risk policies would either affect AI but not explicitly govern it or address consequences of the use of advanced AI systems. This could include both policies that affect AI and those that are AI-agnostic. For example, a policy that puts in place stronger protections for privacy in general would reduce the amount of training data available, and thus the speed of AI development, and could be considered an indirect approach. An AI-agnostic policy, for example, would be basic minimum income to address technological unemployment, which could be considered a risk if it leads to societal destabilization. AI risk relevant policies would affect neither AI nor the consequences of it but would rather make it easier for sound AI risk policies to be developed and implemented, such as changing the rules and procedures of government itself to alleviate the pacing problem.There is another layer of classification that should be applied to AI risk policy based on Lowi’s Typology [[29](#B29-BDCC-03-00026)]. 
Lowi categorizes policies into regulatory, distributive, redistributive, and constituency categories. Regulatory policies regulate one’s behavior, restricting or incentivizing certain actions, such as the mandating of seat belts in cars. Distributive policies are policies that take money from the general treasury and use them for a specific project that directly benefits one group, such as a dam or research grants. Redistributive policies are those which fundamentally alter the distribution of wealth and resources in the whole of society, such as tax and welfare policies. Constituency policies are those that alter the composition and the rules and regulations of government, such as creating a new executive agency.Each one of these typologies has with it a certain set of political conditions, as they impact people, businesses, and members of government differently. For example, both basic minimum income and the creation of AI safety standards are policies that are intended to reduce existential risk. However, both of these policies will have a different set of political pressures. Basic minimum income is a redistributive policy, which would move substantial amounts of wealth between classes of society. This would mean that it would likely become a nationwide controversial issue with two opposing camps based largely on who benefits and who loses. By contrast, AI safety standards are a regulatory policy, and while there would be two groups opposed to each other on the issue (unless it comes in the form of voluntary self-regulation by the industry), the political factors around it would look different. Regulatory policies are not usually salient or popular to the general public, and thus, the political battle would be largely limited to regulators, experts, and the business class. This typology will help us to understand how the different policies will be treated in the policymaking process. In other words, policy creates politics. Further work on developing this might be useful for understanding the likelihood of policies being adopted and could shift strategies for which policies to pursue. 4. The Policymaking Cycle
--------------------------
#### 4.1. Problem Identification, Agenda Setting, and Policy Formulation
The first few steps of the policymaking process (problem identification, agenda setting, and policy formulation) are usually tied together [[30](#B30-BDCC-03-00026)], including in the so-called ‘multiple streams framework’. The multiple streams framework attempts to explain how policies reach the agenda when policy entrepreneurs are able to couple the policy, politics, and problems streams to open up a policy window, the opportune time when all the conditions are right to get a policy on the agenda [[31](#B31-BDCC-03-00026)].
#### 4.1.1. Problem Stream
There are many problems in society. However, the public does not seek government intervention for many of these problems. There are some basic requirements for an issue in society to become a policy problem, which is that it is something that the public finds to be intolerable, government can do something about, and is generally seen as a legitimate area for government to work on [[30](#B30-BDCC-03-00026)]. Policy problems can also arise when there are two or more identifiable groups who enter into conflict in a policy arena for resources or positions of power [[32](#B32-BDCC-03-00026)].The first condition for an issue to be considered a policy problem is that it is something that the public or a group finds to be intolerable. Indicators such as statistics can help to identify a problem. These can be used objectively, for understanding conditions in society, or politically, when they are used to justify a political position: for example, using gun violence statistics as an argument for gun control. What is considered an issue over time changes because of the evolution of society. Changes in values, distribution of resources, technology, etc. will change what issues are considered in society [[30](#B30-BDCC-03-00026)]. In AI governance, identifiers such as the rate of technological progress or the proliferation of autonomous weapons could be used as examples. Creating a list of politically salient identifiers or metrics could be potentially useful for creating long-term strategies and goals.How the issue is framed is very important for whether or not it will be considered a policy problem [[30](#B30-BDCC-03-00026)]. Is mandating seatbelts in cars beneficial for public safety? Or is it paternalistic? Are these problems legitimate for government to handle? The framing of a problem can have an overwhelming impact on whether or not it is considered a problem appropriate for government to even formulate policy on. It can also impact the content of the policy. Whether you define access to transportation for handicapped people as a transportation problem or a civil rights issue determines whether the acceptable solution involves buying special needs vans, or costly upgrades to buses and subways to ensure equal access. Framing can also raise the priority of a policy problem by, for example, calling it a crisis and raising a sense of urgency.The question of framing is also incredibly important for AI governance. For example, would autonomous weapons make war more humane by removing humans? Or will it distance ourselves from the violence and make us more willing to use them? The AI governance community needs to think about how these issues ought to be framed, and the consequences of doing so.In order for an issue to be a part of the system agenda, or what the public or specific communities are discussing, there must be a focusing event. Focusing events are specific events that draw attention to a problem in society and the reasons behind it. The Sandy Hook school shooting, for example, is a focusing event that drew attention to America’s gun laws. Moreover, events that occur outside of sector-specific focusing events [[31](#B31-BDCC-03-00026)], or past policies on these issues, can have a large impact, especially on the types of solutions used. 
For AI governance, “Sputnik moments” such as AlphaGo beating Lee Sedol are examples of events that drew considerable media attention and generated much discussion about the future of AI, especially in China [[33](#B33-BDCC-03-00026)].
Understanding how to exploit these events for the AI governance agenda will be key to generating support and getting policies on the agenda. It is also important to stay on top of these events to understand the direction society is heading in—and to pre-empt or avert less productive or dangerous framings that might feed into arms races [[31](#B31-BDCC-03-00026)]. For example, Yampolskiy (2018) details a list of past failures by AI-enabled products [[34](#B34-BDCC-03-00026)]. How could work like this be used to influence the problem-setting? Could other AI risk researchers expand on it and build that work into a more thorough project to be used to draw attention to AI risk? Or, could attempts such as this backfire and cause pre-emptive stigmatization or ineffective policies?
#### 4.1.2. Politics Stream
#### 4.1.2. Politics Stream

The politics stream combines the national mood or public opinion, campaign groups, and administrative/legislative change. Decision-makers in government keep tabs on the shifting opinions of the masses and of interest groups and act in ways that present themselves favorably, changing items on the agenda to stay relevant and popular and to obscure unpopular policy stances. Changes in administration, especially when there is a major shift in the ideological composition of the institution, have a strong impact on what is or is not included on the agenda [[31](#B31-BDCC-03-00026)].

In AI governance, and for people involved in advocating for and implementing policies, keeping a close eye on domestic and international politics will be essential. Knowing when and what kind of policy to advocate for, and to whom, is crucial not only for saving time and energy but also for legitimacy. Trying to sell a nationalistic administration on greater UN involvement will probably not further anyone's policy proposals and may even damage their (and their coalition's) political capital and cause. However, other forms of cooperation, such as bilateral cooperation to reduce the risk of accidents [[35](#B35-BDCC-03-00026)], may be more promising.

AI governance researchers will need to consider how the political landscape should shape their recommendations or policy proposals. The landscape determines not only whether their recommendations would ever be considered but also, if a proposal were implemented, how it would affect the national mood. Would the next administration simply walk it back? How would other interest groups react, and how would that affect the long-term ability to reduce risk? If administration changes result in an ideological flip-flop, what does that mean for AI risk policies associated with the previous administration? Could an AI risk policy group maintain influence across changing administrations? All of these questions have implications for our ability to reduce AI risk, which means the policymaking strategy will have to be not only robust but also flexible enough to survive changing political conditions.

#### 4.1.3. Policy Stream
The policy stream, which is in essence the policy formulation aspect of the policy cycle, is the “soup” of ideas generated by policymakers [[35](#B35-BDCC-03-00026)] when deciding what to do about a problem. Different policy networks create policies differently, with different levels of innovativeness and speed [[35](#B35-BDCC-03-00026)]. Understanding these differences and examining their implications for the AI governance field might be useful for understanding its long-term impact and the specific strategic routes it should take. In other words, how should the AI governance research field itself be organized so that it promotes useful and relevant solutions?

Despite the staggering number of policy proposals produced, only a handful will ever be accepted. These policies compete with one another and are selected on a set of criteria, including technical feasibility, value compatibility [[35](#B35-BDCC-03-00026)], budgetary and political costs, and public acceptance. Policies that work will also be technically sound, with no major loopholes and a clear rationale for how their provisions would actually achieve the policy objectives [[30](#B30-BDCC-03-00026)]. This creates some key considerations for the field. Many ideas are either functionally useless because of their political limitations, unlikely to be adopted in the face of easier or less politically costly options, lacking viable policy mechanisms to achieve their goal, or otherwise intractable prospects for government. Even if all of the above conditions are resolved, loopholes and unintended consequences may neuter the policy or make conditions worse. This vastly reduces the space of possible solutions. Further, even though the capacity for policy implementation, or prevailing values, might change over time, it is still a matter of how much and when. This raises the question: What problems can be solved when, how, and by whom? And what does that mean for the big-picture strategic approach?

Where should our policies originate? While there are many policy ideas in circulation, only a few are ever seriously considered for adoption. Sources of these policies include (in the United States federal government, for example) the President along with the Executive Office of the President, Congressional leaders, government agencies (mostly small incremental changes and adjustments), temporary organizations or ‘adhocracies’ that investigate specific topics, and interest groups whose topical expertise and political power can sometimes make them de facto policymakers. Each of these sources has a differing level of legitimacy, influence, and capacity to make policy changes. A question to consider is not only where in the policy network AI risk policymakers should focus their policymaking efforts, but also where they can best advocate for the creation of additional bodies like adhocracies to create additional policies, and what implications that has for the field at large.

With regard to the policy formulation phase of policymaking, a continuum of political environments has been described, with policies with publics at one extreme and policies without publics at the other [[36](#B36-BDCC-03-00026)]. When policies are formulated, it is important to consider the political environments relevant to the issue. The term “publics” refers to groups who have more than a passing interest in an issue or are actively involved in it.
AI risks appear to be issues around which there are limited incentives for publics to form, because the problems are remote, costly, or even abstract and uncertain. What does this mean for the AI safety community? How can interest groups be created most effectively? How can these issues best be expressed so that they do not seem so remote, abstract, or uncertain?
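As a rough illustration of how the selection criteria discussed above (technical feasibility, value compatibility, budgetary and political cost, public acceptance) shrink the space of viable proposals, the sketch below scores hypothetical policy alternatives against weighted criteria. The alternatives, weights, and scores are invented for illustration; any real exercise would need to justify them and would remain only one input into a much messier political process.

```python
# Criteria drawn from the policy-stream discussion above; weights are assumptions.
weights = {
    "technical_feasibility": 0.3,
    "value_compatibility": 0.25,
    "affordability": 0.2,        # inverse of budgetary/political cost
    "public_acceptance": 0.25,
}

# Hypothetical alternatives scored 0-1 on each criterion (illustrative numbers only).
alternatives = {
    "mandatory incident reporting for AI failures": {
        "technical_feasibility": 0.8, "value_compatibility": 0.7,
        "affordability": 0.7, "public_acceptance": 0.6,
    },
    "moratorium on large training runs": {
        "technical_feasibility": 0.3, "value_compatibility": 0.4,
        "affordability": 0.2, "public_acceptance": 0.3,
    },
}

def weighted_score(scores: dict) -> float:
    """Simple weighted sum across the named criteria."""
    return sum(weights[c] * scores[c] for c in weights)

ranked = sorted(alternatives.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{weighted_score(scores):.2f}  {name}")
```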
#### 4.1.4. Policy Windows and Policy Entrepreneurs

This framework assumes that policy decision-makers, the legislators and bureaucrats in government, exist in a state of ambiguity: they do not have a clear set of preferences, and each set of circumstances can be seen in more than one way. This cannot be resolved with more information, as it is not an issue of ignorance. The example that Zahariadis (2007) gives is that “more information can tell us how AIDS is spread, but it still will not tell us whether AIDS is a health, educational, political, or moral issue” [[31](#B31-BDCC-03-00026)].

Overall, the multiple streams framework describes government organizations as “organized anarchies” in which institutional problems run rampant: there are often unclear or underdefined goals, overlapping jurisdictions, and a host of other problems that force decision-makers to ration their time between problems, leaving them without enough time to form a clear set of preferences, make good use of information, or comprehend a problem well enough to make sound policy decisions. In essence, decision-makers are not rational decision-makers by any stretch. Instead, outcomes depend on the ability of policy entrepreneurs to couple the three streams and manipulate decision-makers into advancing their intended policy goals [[31](#B31-BDCC-03-00026)].

Policy entrepreneurs, the policymakers, advocates, interest groups, and others who push for specific legislative changes in their areas, have only a short window of time in which to have their proposals added to the formal agenda. Such a window opens when the right political environment, a timely problem, and a potentially acceptable solution all come together with a policy entrepreneur who can manipulate the situation to their advantage. Because decision-makers exist in a state of ambiguity, policy entrepreneurs are able to shape their interpretation of information to provide meaning, identity, and clarity.

Policy entrepreneurs use different tools and tactics to manipulate the way decision-makers process information and to exploit their behavioral biases. Framing tactics, for example, can present a policy option as a loss relative to the status quo while obscuring the degree of loss it creates; this exploits loss-averse decision-makers and may push them towards more extreme options, such as going to war to make up for small losses [[31](#B31-BDCC-03-00026)].

The manipulation of emotions through symbols, and appeals to the identity or social status of a decision-maker, can also pressure them into certain choices; policies around flag-burning are a good example of this. Because decision-makers are under a great deal of stress and are time-constrained, the strategic ordering of decisions, or ‘salami tactics’, creates agreement in steps by reducing the total perceived risk of a policy [[31](#B31-BDCC-03-00026)]. The manipulation of symbols in the way that artificial intelligence is framed has already occurred. At first, anti-autonomous-weapons advocates described ‘armed quadcopters’ as a serious problem, with little media attention [[37](#B37-BDCC-03-00026)]. These were rebranded as ‘slaughterbots’, and a short film was released to substantial media attention. However, what sort of long-run impact will this have on the field? While giving policymakers straight facts and solutions seems appealing, AI risk policymakers have to recognize that this is impractical in reality and may have to accept that tactics like framing are often necessary for policy success.
This raises the question of which tactics they should use, and how. Questions like these must be considered carefully. If some problems can only be resolved through state action (such as an arms race), then their resolution depends on the policymaking process, and the corresponding solutions can only be passed when policy windows open. How many of these opportunities do AI risk policymakers get? How many chances do they get to implement AI risk policies? These windows only open every once in a while, and they are often fragile. For example, Bill Clinton’s 1992 campaign made healthcare reform a priority, but his administration’s failure to pass the bill closed the window [[31](#B31-BDCC-03-00026)]. In other words, what impact does this have on AI governance and policy implementation timelines, and what does that mean for the field as a whole?

However, in order for a policy entrepreneur to influence decision-makers, they must have access to them, which depends heavily on the legitimacy of their issue as well as on the legitimacy of the group itself and its interest. One way that policy entrepreneurs increase their own influence is to create new decision points that they can exploit and to reduce the access of other groups [[32](#B32-BDCC-03-00026)]. AI risk policymakers and advocates will have to find some way to gain access to decision-makers. For example, working on near-term or non-existential AI risk issues might help someone build the social capital and network necessary to work on existential risk issues. This would not only make it easier for people in the field to implement their solutions but also make them gatekeepers to decision-makers, which could help prevent policies that would increase existential risks (whether from AI or other sources) from getting through. This may be an area that needs further research. Factors such as a group’s access to decision-makers, the advocating group’s legitimacy, the biases of the institution [[38](#B38-BDCC-03-00026)], and a group’s ability to mobilize resources will determine what gets added to the agenda, and the AI risk community will need to work on building all of these. AI policymakers will need to develop a strategy for getting the right people into the right places and for coordinating between different groups.

Getting on the formal agenda is a competitive process because there are fundamental limits to a decision-maker’s time, and because the policy may be perceived to harm the interests of other groups. Opposing groups can use a variety of tactics, such as denying that the problem exists, arguing that it is not a problem for government, or arguing that the solution would have bad societal consequences, to deny it agenda status. Other factors that could deny an issue agenda status include changing societal norms, political changes, or political leaders avoiding having to be confronted by an issue that hurts their interests. Thus, AI policymakers will need to know how to overcome and adapt to these changing situations and to other organizations trying to prevent their policies from being adopted.

AI governance and policy experts will need to pay attention to the arguments being used for and against superintelligence, and to whether this will become a political issue.
Baum (2018) notes that superintelligence is particularly vulnerable to what is known as politicized skepticism: skepticism that is not rooted in intellectual disagreement about the problem or in good-faith attempts to understand the arguments, but rather aims to shut down concerns out of self-interest (or a conflict of interests). Some major AI companies, and even some academics, have criticized the idea of superintelligence out of what seems to be their own self-interest rather than genuine concern [[39](#B39-BDCC-03-00026)]. This could have a devastating impact on AI policy advocates, in much the same way that the tobacco industry significantly impeded scientific efforts to study the public health links between tobacco and cancer.

#### 4.2. Policy Adoption
The next stage of the policy cycle is policy adoption, when decision-makers choose an option that adopts, modifies, or abandons a policy. This does not necessarily take the form of choosing from a buffet of completed pieces of policy, but rather of taking further action on a policy alternative that is preferable and more likely to win approval. At this point, after much bargaining and discussion, the policy choice may be only a formality, or there may be continuous discussion and disagreement until a formal vote or decision is made. This is an important stage for AI policymakers to analyze, for the obvious reason that they will want their policy proposals to be chosen and so will need to understand and design strategies to achieve that. Further, as will be discussed later, when changes do occur, they can often bring with them wider changes in public policy [[40](#B40-BDCC-03-00026)], an implication that will need to be taken into account.

The advocacy coalition framework is a theory of policy adoption, but it also incorporates every other aspect of the policy cycle. The theory describes the interactions of two or more ‘advocacy coalitions’: groups of people from a multitude of positions who coordinate to advocate for some belief, or to implement some policy change (potentially across many fields), over an extended period of time [[41](#B41-BDCC-03-00026)]. These need not be single, explicitly delineated organizations like the National Rifle Association but can include loosely affiliated groups of organizations and/or individuals, all working towards the same goal. Building and maintaining coalitions will be one of the major tasks for AI policymakers, so examining this framework is highly valuable.

What binds a coalition together? All advocacy coalitions share some form of beliefs, and the advocacy coalition framework organizes these beliefs hierarchically. The deepest and broadest are deep core beliefs: normative positions on human nature, the hierarchy of value preferences (e.g., should we value liberty over equality?), the role of government, and so on. Policy core beliefs are the next level of the hierarchy and involve the extension of deep core beliefs into policy areas. Both are very difficult to change, as they involve fundamental values. This creates an issue in which, because differing fundamental and personal values lead to a lack of interaction, different coalitions often see the same information differently, leading to distrust. Each may come to see the other side as “evil”, reducing the possibilities of cooperation and compromise [[41](#B41-BDCC-03-00026)].

Deeply held convictions about what a policy subsystem ought to look like are called policy core policy preferences and are the source of conflict between advocacy coalitions. They concern the salient problems that have been long-running issues in that area. Policy core policy preferences shape the political landscape, dictating who allies with whom, who the enemies are, and what strategies coalitions take.

The final level of the belief hierarchy is secondary beliefs, which cover procedures, rules, and matters of that nature.
These are very narrow in scope and the easiest to change, requiring less evidence and little bargaining.

Understanding the values and beliefs of different existing coalitions, groups, and individuals is key to building and maintaining new coalitions for AI policymakers. This brings up a few considerations. Since it is difficult for conflicting coalitions to work together, will AI policymakers have to choose certain coalitions to work with? What are the costs, benefits, and potential blowback of this? Since some policies related to AI risk are not in a mature policy field (and thus do not have established coalitions), what can be done to shape the field beforehand to their advantage and/or promote cooperation among coalitions that are likely to form? Further, since secondary beliefs are relatively easy to change, what can be changed to help reduce existential risk?

On a macro level, the advocacy coalition framework acts as a cycle. Relatively stable parameters, as mentioned before, exist in the status quo, since policy arenas usually come to some equilibrium in which one coalition dominates the policy subsystem. Then, policy changes made by an advocacy coalition or an outside event, such as a mass shooting, create a fundamental change in the world, whether a change in public opinion or in the rules and procedures governing a subsystem, which alters the initially stable parameters. This leads to a shift in power that allows another coalition to gain influence over the types of policies being adopted. However, especially in the case of controversial legislation, policies that must pass multiple veto points will create access for multiple coalitions. This means that even a coalition that dominates a subsystem will not have the unilateral ability to dictate policies in some situations. Other situations, especially where there are few decision-makers or an exceptionally influential decision-maker, can result in highly monopolized systems. How to be resilient to these changes in conditions, how to steer conditions in directions beneficial to AI policymakers, and how to construct policy subsystems in ways conducive to AI policymakers’ goals are useful questions to consider.

This theory describes policy adoption on a very broad level, but how do the decision-makers themselves decide which policies to move forward with? Different incentives and restrictions come into play at different levels of policymaking. For example, highly salient and popular issues are more likely to be influenced by popular opinion, whereas obscure technical issues will likely be determined by policy experts in that field. Different factors that affect both individual and group decision-makers also come into play, such as their personal, professional, organizational, and ideological values. For legislators, their political party and their constituency also play an overwhelming role in their decision-making. Understanding and mapping out these factors will be necessary for the successful implementation of AI risk policy.

On top of these factors, decision-makers rarely have the time, the expertise, or even enough interest to take a fully rational approach to deciding most policies. In many cases, legislators will seek out the advice of other legislators and experts and follow their lead. Because this is a widespread practice, a few key institutions and leaders often have disproportionate power.
For those working in AI risk policy, it is necessary to understand these dynamics in order to craft the message for why policy change should occur and to know whom to target specifically so as to win widespread adoption from other decision-makers in the policy arena.
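For illustration only, the sketch below encodes the belief hierarchy described above and checks how much two hypothetical coalitions overlap at each level. All coalition names and belief statements are invented; the point is simply that little deep-core overlap suggests cooperation may only be realistic on secondary aspects, as the framework predicts.

```python
# Belief hierarchy from the advocacy coalition framework discussion above.
# Coalition names and belief statements are hypothetical illustrations.
coalition_a = {
    "deep_core": {"precaution over speed", "strong role for government"},
    "policy_core": {"binding safety standards for frontier AI"},
    "secondary": {"annual reporting", "third-party audits"},
}
coalition_b = {
    "deep_core": {"innovation over precaution", "limited role for government"},
    "policy_core": {"voluntary industry codes of conduct"},
    "secondary": {"annual reporting", "public incident database"},
}

def overlap(a: dict, b: dict) -> dict:
    """Shared beliefs at each level of the hierarchy."""
    return {level: a[level] & b[level] for level in a}

for level, shared in overlap(coalition_a, coalition_b).items():
    print(f"{level}: {sorted(shared) if shared else 'no shared beliefs'}")
```

#### 4.3. Policy Implementation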
Policy implementation is a key step in the policymaking process. It is defined as “whatever is done to carry a law into effect, to apply it to the target population … and to achieve its goals” [[30](#B30-BDCC-03-00026)]. In other words, it is the activity through which adopted policies are carried into effect [[30](#B30-BDCC-03-00026)]. However, it is not a sharply distinct step that can be clearly separated from the others. Every implementation action can influence policy problems, resources, and objectives as the process evolves [[42](#B42-BDCC-03-00026)]; policy implementation can thus feed back into problem identification, policy adoption, and so on.

Two broad factors that have been offered for the success of a policy are local capacity and will [[42](#B42-BDCC-03-00026)]. In other words, is there enough training, money, and human resourcing, along with the right attitudes, motivation, and beliefs, to make something happen? It is suggested that the former can be influenced much more easily than the latter, since more money can be obtained and consultants can be hired. For AI risk, both questions are relevant: how to increase capacity and how to influence the influencers. On the former, it has been estimated that about $9–$20 million is currently spent on AI risk [[43](#B43-BDCC-03-00026),[44](#B44-BDCC-03-00026)]. On the latter, studying the opinions of the public as well as of experts might be a useful approach. One survey [[45](#B45-BDCC-03-00026)] indicates that only 8% of top-cited authors in AI consider that human-level AI would be extremely bad (an existential risk) for humanity. A more recent survey [[46](#B46-BDCC-03-00026)] indicates that machine learning researchers think on average (median) that there is a 10% probability that human-level machine intelligence will result in a negative outcome and a 5% probability that it will have an extremely bad outcome (existential risk). The general public seems broadly cautious, with one survey showing 82% of Americans believing that AI/robots should be managed carefully [[47](#B47-BDCC-03-00026)].

This part of the policymaking process is very difficult, as the literature is generally quite pessimistic about the ability of policies to bring social changes into effect [[48](#B48-BDCC-03-00026)]. However, the authors of the cited paper have identified conditions of effective implementation based on successful examples. These conditions are that (a) the policy is based on a sound theory of getting the target group to behave in the desired way, (b) policy directives and structures for the target group are unambiguous, (c) the leaders implementing the policies are skillful with regard to management and politics and are committed to the goals, (d) the policy is supported by organized constituency groups and key legislators, as well as courts, throughout the implementation process, and (e) the relative priority of the policy is not significantly undermined over time by other policies or socioeconomic changes. Additionally [[49](#B49-BDCC-03-00026)], a carefully drafted statute that incentivizes behavior changes, provides adequate funds, expresses clearly ranked goals, structures the implementation process, and has few veto points is also vital to the success of a policy.

With regard to AI governance, the ambiguity and complexity of the problem create a major hurdle for developing effective policies. These problems are nonlinear, very hard to predict, and may have the traits of wicked problems, in the sense that solving one problem can create new problems.
Breaking down AI risk policy into multiple domains, as discussed in the previous section, helps to create somewhat less ambiguous objectives, such as changing the education system to be more conducive to technological growth. Even then, however, because many of the issues are either complex or have not happened yet, it is difficult to create concrete objectives and policies. AI risk is not like noise pollution, where there is an easily identifiable, manageable, and tractable problem. Further research could help to identify concrete and tractable issues whose resolution might lead to a reduction of risk. In addition, when trying to develop and implement policy, AI policymakers will need to keep in mind factors such as the extent of support in the executive branch and among outside organizations, how exactly the policy is written, and how these change throughout the policymaking cycle.

Another key consideration for successful policy implementation identified in the literature is engaging with the community to increase its readiness to accept, and devote resources to, policy-related problems. It has been acknowledged that there are no good evidence-based ways of achieving community buy-in, and this is an area that might be useful to study in order to increase the chances of successfully reducing AI risk. There are different stages of community readiness, ranging from no awareness, denial, and vague awareness through preplanning, preparation, initiation, and stabilization [[49](#B49-BDCC-03-00026)]. It is important to understand what counts as the community and what phase different subcommunities of the AI safety field are in. Earlier, this paper mentioned a survey of AI experts suggesting that their readiness regarding AI risks was low; other relevant experts, the public, and other subcommunities might have different levels of readiness.

It has been suggested that “the more clearly the core components of an intervention program or practice are known and defined, the more readily the program or practice can be implemented successfully” [[49](#B49-BDCC-03-00026)]. In other words, policies and the steps for implementing them have to be very clearly expressed. What implications does this have for AI risk? Researchers and policymakers should evaluate how clearly core components have been expressed in this field and improve them as necessary.
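To make the implementation conditions cited above easier to apply, the sketch below expresses them as a simple checklist that a proposal could be assessed against. The condition wording paraphrases the sources cited above, and the example assessment of a hypothetical compute-reporting requirement is entirely invented for illustration.

```python
# The five conditions paraphrase the effective-implementation literature cited above;
# the example assessment below is a hypothetical illustration, not a real evaluation.
conditions = [
    "sound causal theory linking the policy to changed behaviour of the target group",
    "unambiguous directives and implementation structures",
    "skilled, committed implementing leadership",
    "sustained support from constituency groups, legislators, and courts",
    "priority not undermined over time by other policies or socioeconomic change",
]

# Hypothetical self-assessment for a draft compute-reporting requirement.
assessment = [True, False, True, False, True]

met = sum(assessment)
print(f"{met}/{len(conditions)} conditions plausibly met")
for condition, ok in zip(conditions, assessment):
    print(("  [x] " if ok else "  [ ] ") + condition)
```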
#### 4.4. Policy Evaluation

The final step in the policymaking cycle is policy evaluation. This includes activities related to determining the impact of the policy, whether it is achieving its goals, whether the rules and procedures it lays out are being followed, and what externalities or unintended consequences it produces [[30](#B30-BDCC-03-00026)]. As explained before, policy evaluation does not have to occur only at this step; for example, the impact of a policy is already being estimated in the early stages. Anderson highlighted different types of policy evaluation in his book but especially considered systematic evaluations of programs. This involves “the specification of goals or objectives; the collection of information and data on program inputs, outputs, and consequences; and their rigorous analysis, preferably through the use of quantitative or statistical techniques” [[30](#B30-BDCC-03-00026)].

Policy evaluation examines a policy to understand its impacts in multiple ways [[30](#B30-BDCC-03-00026)]. First, is the policy affecting the population it is intended to target? In AI risk policy, this could be anything from large tech companies, to AI researchers, to people affected by technological unemployment. Second, are populations being affected that were not intended? These externalities could be positive or negative. Third, what are the benefits and costs associated with this policy? AI policymakers will want to ensure that their policies actually reduce risk and that the costs are not so astronomical that they become politically infeasible. Finally, what long-term costs and benefits does a policy have? This is especially important for AI risk policy, as decisions now could have a major impact on the long-term risk that AI poses. In AI governance and policymaking, research is needed on what sorts of indicators or metrics should be used to track the reduction of risk and to identify which goals should be achieved.

If the previous steps in the policymaking process have generated goals that are unclear or diverse, it is very difficult to evaluate the impact of the policy [[30](#B30-BDCC-03-00026)]. Different decision-makers can more easily reach differing conclusions about the results of a program in that case, or may not follow it at all [[30](#B30-BDCC-03-00026)]. How the goals of an AI risk program are defined is, therefore, very important.

Another key consideration for policy evaluation is how to make sure that the results are objectively measured. Agency and program officials may be wary of possible political consequences of the evaluation process [[30](#B30-BDCC-03-00026)]. If it turns out that the program was not useful or was even detrimental, this might have consequences for their influence and career. Because of this, they might not be very interested in accurate evaluation studies, or they may hinder the process in some other way. There are many ways an evaluation of a policy might be ignored or attacked, such as claiming it was poorly done, the data were inadequate, or the findings inconclusive [[30](#B30-BDCC-03-00026)]. Thus, it is important that researchers are provided with high-quality, relevant, and accurate datasets.

There is also the distinction between policy outputs and outcomes [[30](#B30-BDCC-03-00026)] to consider. Outputs are tangible actions taken or things produced, such as collecting taxes or building a dam. Outcomes, on the other hand, are the consequences for society, such as lower disposable income or cleaner air quality.
Outputs do not always produce the intended outcomes, which is highly evident in areas such as social welfare policy, where policies may unintentionally trap people in poverty. For AI policymakers, it is very important to consider whether their policy outputs will have the intended consequences and, if not, how to correct the policy.

The evaluation of a policy, and the political responses to it, can result in its termination [[30](#B30-BDCC-03-00026)]. Assuming that AI risk policymakers do not want their policies to be terminated or altered in a detrimental way, how can they make sure this does not happen? A policy being altered to be more effective might be a good thing, but termination carries unpleasant and negative connotations and might even have negative consequences for the community [[30](#B30-BDCC-03-00026)]. What exact consequences might it have politically? Further, it is important to remember that many policymakers’ time horizons extend only to the next election, so they often seek immediate results, often before the returns come to fruition. While this may not affect all policies, as it mostly applies to salient areas like healthcare and education, AI policymakers should keep this in mind and try to understand how it might affect their work.
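For illustration only, the sketch below separates outputs (what was done) from outcomes (what changed), mirroring the distinction discussed above. Every metric name and number is a hypothetical placeholder rather than an estimate; the point is that high output counts alongside flat outcome metrics would suggest the policy is not producing its intended effects.

```python
# Minimal sketch distinguishing outputs (activities) from outcomes (societal change);
# all values are hypothetical placeholders, not estimates.
outputs = {
    "audits_conducted": 40,
    "guidance_documents_issued": 3,
}
outcomes = {
    "reported_incidents_per_year": {"baseline": 25, "after_policy": 24},
    "labs_with_safety_review_boards": {"baseline": 5, "after_policy": 11},
}

def outcome_change(metric: dict) -> int:
    """Change in an outcome indicator relative to its baseline."""
    return metric["after_policy"] - metric["baseline"]

print("Outputs:", outputs)
for name, metric in outcomes.items():
    print(f"{name}: change of {outcome_change(metric):+} relative to baseline")
```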
5. Conclusions
---------------
There are multiple policy options that could be chosen that either directly or indirectly reduce AI risk, as well as related policies that could support further efforts to reduce it. Because different policy arenas have different political conditions, and the policymaking process itself raises a number of important challenges, questions arise about which policies are chosen and in what order, what strategies are used to get these policies passed and implemented by government, and the larger impact of these choices on AI governance and risk as a whole. This paper argues that a new subfield of AI governance research on AI policymaking strategies should be developed to draw implications for how these policies should be designed and advocated for, and for how organizations should approach solving this issue.

6. Limitations and Future Research
-----------------------------------
This paper is intended as a broad overview and a conversation starter for future research into this area, so there is a strong limitation on the depth of research presented here. However, it is expected that future work will further refine the line of thinking laid out above, along with further in-depth study of the different theories and their applicability to AI risk.

One of the major limitations of this paper is that the stages heuristic presented here has been heavily criticized and its usefulness is subject to debate. Sabatier (2007) has criticized it for not being a causal theory and for having a strong top-down bias, among other critiques. However, he also notes that much remains up for debate, with some scholars, such as Anderson (2010), advocating for it. There are also a number of other theories that were not discussed in this paper, such as institutional rational choice, the punctuated equilibrium framework, the policy diffusion framework, and other lesser-known theories. Future research is expected to explore which policy frameworks should be focused on in AI risk research.

The other limitation of this paper is that its applicability to the international governance of AI was not discussed. Future research that examines how far these theories apply to foreign policy and to the international governance of AI in general would be useful. If these theories have very limited or no bearing on the international governance of AI, then figuring out how much work can be done to reduce AI risk through domestic policy would determine their usefulness.

Throughout the paper, a number of key considerations have been raised. For convenience, a list of them has been curated below.

7. Summary
-----------
This part of the paper summarizes and lists some of the key questions and considerations brought up in the discussion.

Thesis-level consideration:
\* How do the politics and administrative mechanisms of policymaking affect how policies to mitigate AI risk are created and implemented?
Considerations from Typologies of Policies:
\* Are there AI risk policies that should be implemented first? What are the methods to decide this?
\* What types of policies should the AI risk policymakers try to get implemented? Why should those types be prioritized?
\* What are the political considerations surrounding different sets of policies, and how does that affect their ability to be implemented?
Considerations from Problem Identification, Agenda Setting, and Policy Formulation:
\* Is this issue or policy legitimate?
\* Would the policy be supported by the current administration and be able to be maintained through changing administrations?
\* Which policies out of different sets of potential solutions are politically feasible?
\* Are there less costly alternative policies that AI risk policymakers will have to compete with?
\* How does attention to problems by different communities affect AI risk policymakers’ actions?
\* What types of framing of policy issues are most beneficial? What types are most dangerous?
\* Is there a way to anticipate how framing will determine policy content?
\* What focusing events have occurred in the field of AI?
\* How can AI risk policymakers utilize focusing events to further policy agendas?
\* What effect do other organizations have on reducing the legitimacy of AI risk?
\* What can be done to respond to these counter-movements effectively? What kind of responses to objections are most convincing?
\* How many policy windows will there be for a particular issue? What does this mean for AI risk policymakers’ overall strategy?
\* What role should AI risk policy entrepreneurs play in AI governance?
\* How and where should AI risk policy entrepreneurs gain access in government?
Considerations from Policy Adoption:
\* What policy alternatives are more likely to win approval, improving the odds of success for AI risk reduction?
\* What strategies can be used to improve the chances of a preferred policy to be adopted?
\* Which groups or individuals could join AI risk coalitions, what criteria are used to decide this, and what costs would their joining the coalition entail?
\* What role can organizations outside of AI risk play in furthering AI risk policymakers’ agenda?
Considerations from Policy Implementation:
\* Is this solution technically feasible for governments to implement?
\* Are there enough resources, will, and support by leaders and constituency groups to be successful in implementation?
\* Is the policy crafted in a way that effectively structures incentives for the target group?
\* Is the policy unambiguous? If not, how will that affect its ability to be implemented?
\* Are the goals of the policy in conflict with any other policy or changes in society?
\* Are there any veto points in the policy’s statutes to prevent effective implementation?
\* How will the contents of a policy, or the political factors surrounding it, be affected during implementation?
\* Do the relevant communities accept the issue, and are they willing to devote resources to resolve it?
Considerations from Policy Evaluation:
\* Are the policy outputs having the intended outcomes?
\* What are the consequences of any unintentional outcomes?
\* What are the political factors surrounding the metrics that are being used to evaluate the policy?
\* Do the political costs or benefits of the policy have an impact on its success?
\* If the policy is terminated, will there be any negative political consequences?
\* How can AI risk policymakers update the policy? How can they prevent changes by other groups that would be harmful?
\* How will the limited time horizons of lawmakers and other groups affect the evaluation of the policy?
Author Contributions
--------------------
Conceptualization, B.P. and R.U.; methodology, B.P. and R.U.; writing—original draft preparation, B.P. and R.U.; writing—review and editing, B.P. and R.U.; project administration, B.P. and R.U.

Funding
-------
This research received no external funding.

Acknowledgments
---------------
The authors thank Matthijs Maas, Seth Baum, Sabrina Kavanaugh, Max Daniel, and the organizers and participants of the AI Safety Camp for useful comments and feedback.

Conflicts of Interest
---------------------
The authors declare no conflict of interest.

References
----------
1. Tegmark, M. Life 3.0: Being Human in the Age of Artificial Intelligence; Knopf: New York, NY, USA, 2017. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Life+3.0:+Being+Human+in+the+Age+of+Artificial+Intelligence&author=Tegmark,+M.&publication\_year=2017)]
2. Bostrom, N. Superintelligence: Paths, Dangers, Strategies; Oxford University Press: Oxford, UK; New York, NY, USA, 2014. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Superintelligence:+Paths,+Dangers,+Strategies&author=Bostrom,+N.&publication\_year=2014)]
3. Dafoe, A. AI Governance: A Research Agenda; Governance of AI Program, Future of Humanity Institute: Oxford, UK, 2018; Available online: (accessed on 17 December 2018).
4. Baum, S.D. A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy. Global Catastrophic Risk Institute Working Paper 17-1. 2017. Available online: (accessed on 11 November 2019).
5. Everitt, T.; Lea, G.; Hutter, M. AGI Safety Literature Review. arXiv \*\*2018\*\*, arXiv:1805.01109. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=AGI+Safety+Literature+Review&author=Everitt,+T.&author=Lea,+G.&author=Hutter,+M.&publication\_year=2018&journal=arXiv)]
6. Joy, B. Why the future doesn’t need us. Wired \*\*2000\*\*, 8, 238–263. Available online: (accessed on 6 January 2019).
7. Hibbard, B. Super-Intelligent Machines; Springer: New York, NY, USA, 2002. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Super-Intelligent+Machines&author=Hibbard,+B.&publication\_year=2002)]
8. Hughes, J.J. Global technology regulation and potentially apocalyptic technological threats. In Nanoethics: The Ethical and Social Implications of Nanotechnology; Allhoff, F., Ed.; John Wiley: Hoboken, NJ, USA, 2007; pp. 201–214. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Global+technology+regulation+and+potentially+apocalyptic+technological+threats&author=Hughes,+J.J.&publication\_year=2007&pages=201%E2%80%93214)]
9. McGinnis, J.O. Accelerating AI. Northwest. Univ. Law Rev. \*\*2010\*\*, 104, 366–381. Available online: (accessed on 14 March 2019). [[CrossRef](https://doi.org/10.2139/ssrn.1593851)]
10. Scherer, M.U. Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harv. J. Law Technol. \*\*2016\*\*, 29, 354–398. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Regulating+artificial+intelligence+systems:+Risks,+challenges,+competencies,+and+strategies&author=Scherer,+M.U.&publication\_year=2016&journal=Harv.+J.+Law+Technol.&volume=29&pages=354%E2%80%93398&doi=10.2139/ssrn.2609777)] [[CrossRef](https://doi.org/10.2139/ssrn.2609777)]
11. Guihot, M.; Matthew, A.F.; Suzor, N.P. Nudging robots: Innovative solutions to regulate artificial intelligence. Vanderbilt J. Entertain. Technol. Law \*\*2017\*\*, 20, 385–456. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Nudging+robots:+Innovative+solutions+to+regulate+artificial+intelligence&author=Guihot,+M.&author=Matthew,+A.F.&author=Suzor,+N.P.&publication\_year=2017&journal=Vanderbilt+J.+Entertain.+Technol.+Law&volume=20&pages=385%E2%80%93456)]
12. Baum, S.D. On the promotion of safe and socially beneficial artificial intelligence. AI Soc. \*\*2017\*\*, 32, 543–551. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=On+the+promotion+of+safe+and+socially+beneficial+artificial+intelligence&author=Baum,+S.D.&publication\_year=2017&journal=AI+Soc.&volume=32&pages=543%E2%80%93551&doi=10.1007/s00146-016-0677-0)] [[CrossRef](https://doi.org/10.1007/s00146-016-0677-0)]
13. Yampolskiy, R.; Fox, J. Safety Engineering for Artificial General Intelligence. Topoi \*\*2013\*\*, 32, 217–226. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Safety+Engineering+for+Artificial+General+Intelligence&author=Yampolskiy,+R.&author=Fox,+J.&publication\_year=2013&journal=Topoi&volume=32&pages=217%E2%80%93226&doi=10.1007/s11245-012-9128-9)] [[CrossRef](https://doi.org/10.1007/s11245-012-9128-9)]
14. Erdelyi, O.J.; Goldsmith, J. Regulating Artificial Intelligence: Proposal for a Global Solution. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’18), New Orleans, LA, USA, 2–3 February 2018; Available online: (accessed on 6 January 2019).
15. Wilson, G. Minimizing global catastrophic and existential risks from emerging technologies through international law. Va. Environ. Law J. \*\*2013\*\*, 31, 307–364. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Minimizing+global+catastrophic+and+existential+risks+from+emerging+technologies+through+international+law&author=Wilson,+G.&publication\_year=2013&journal=Va.+Environ.+Law+J.&volume=31&pages=307%E2%80%93364)]
16. Shulman, C. Arms control and intelligence explosions. In Proceedings of the 7th European Conference on Computing and Philosophy (ECAP), Bellaterra, Spain, 2–4 July 2009. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Arms+control+and+intelligence+explosions&conference=Proceedings+of+the+7th+European+Conference+on+Computing+and+Philosophy+(ECAP)&author=Shulman,+C.&publication\_year=2009)]
17. Goertzel, B. The Corporatization of AI is a Major Threat to Humanity. h+ Magazine. 2017. Available online: (accessed on 6 January 2019).
18. Bostrom, N. Strategic Implications of Openness in AI Development. Glob. Policy \*\*2017\*\*, 8, 135–148. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Strategic+Implications+of+Openness+in+AI+Development&author=Bostrom,+N.&publication\_year=2017&journal=Glob.+Policy&volume=8&pages=135%E2%80%93148&doi=10.1111/1758-5899.12403)] [[CrossRef](https://doi.org/10.1111/1758-5899.12403)][[Green Version](http://onlinelibrary.wiley.com/doi/10.1111/1758-5899.12403/pdf)]
19. Dewey, D. Long-term strategies for ending existential risk from fast takeoff. In Risks of Artificial Intelligence; Müller, V.C., Ed.; CRC: Boca Raton, FL, USA, 2015; pp. 243–266. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Long-term+strategies+for+ending+existential+risk+from+fast+takeoff&author=Dewey,+D.&publication\_year=2015&pages=243%E2%80%93266)]
20. Goertzel, B. Should Humanity Build a Global AI Nanny to Delay the Singularity Until It’s Better Understood? J. Conscious. Stud. \*\*2012\*\*, 19, 96. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Should+Humanity+Build+a+Global+AI+Nanny+to+Delay+the+Singularity+Until+It%E2%80%99s+Better+Understood?&author=Goertzel,+B.&publication\_year=2012&journal=J.+Conscious.+Stud.&volume=19&pages=96)]
21. Brundage, M.; Avin, S.; Clark, J.; Toner, H.; Eckersley, P.; Garfinkel, B.; Dafoe, A.; Scharre, P.; Zeitzoff, T.; Filar, B.; et al. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Available online: (accessed on 6 January 2018).
22. Whittlestone, J.; Nyrup, R.; Alexandrova, A.; Cave, S. The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions. In Proceedings of the AAAI/ACM Conference on AI Ethics and Society, Honolulu, HI, USA, 27–28 January 2019. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=The+Role+and+Limits+of+Principles+in+AI+Ethics:+Towards+a+Focus+on+Tensions&conference=Proceedings+of+the+AAAI/ACM+Conference+on+AI+Ethics+and+Society&author=Whittlestone,+J.&author=Nyrup,+R.&author=Alexandrova,+A.&author=Cave,+S.&publication\_year=2019)]
23. Calo, R. Artificial Intelligence Policy: A Primer and Roadmap. 2017. Available online: (accessed on 6 January 2019). It should also be noted that Calo is dismissive of the risk of artificial general intelligence.
24. Cave, S.; ÓhÉigeartaigh, S.S. An AI Race for Strategic Advantage: Rhetoric and Risks. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, Honolulu, HI, USA, 27–28 January 2019; Available online: (accessed on 14 March 2019).
25. Flynn, C. Personal Thoughts on Careers in AI Policy and Strategy. Effective Altruism Forum. 2017. Available online: (accessed on 6 January 2019).
26. The specifics issues will depend on the type of government. For example, the types of difficulties would be different in a democracy vs. a dictatorship. This paper however will focus on federal republics.
27. Thank you to Sabrina Kavanagh for suggesting the idea that the policy process could inspire new ideas for AI governance researchers.
28. Brundage, M.; Bryson, J. Smart Policies for Artificial Intelligence. arXiv \*\*2016\*\*, arXiv:1608.08196. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Smart+Policies+for+Artificial+Intelligence&author=Brundage,+M.&author=Bryson,+J.&publication\_year=2016&journal=arXiv)]
29. Lowi, T.J. Four Systems of Policy, Politics, and Choice. Public Adm. Rev. \*\*1972\*\*, 32, 298–310. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Four+Systems+of+Policy,+Politics,+and+Choice&author=Lowi,+T.J.&publication\_year=1972&journal=Public+Adm.+Rev.&volume=32&pages=298%E2%80%93310&doi=10.2307/974990)] [[CrossRef](https://doi.org/10.2307/974990)]
30. Anderson, J.E. Public Policymaking: An Introduction, 7th ed.; Cengage Learning: Boston, MA, USA, 2010. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Public+Policymaking:+An+Introduction&author=Anderson,+J.E.&publication\_year=2010)]
31. Zahariadis, N. The Multiple Streams Framework: Structure, Limitations, Prospects. In Theories of the Policy Process, 2nd ed.; Sabatier, P., Ed.; Westview Press: Boulder, CO, USA, 2007. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=The+Multiple+Streams+Framework:+Structure,+Limitations,+Prospects&author=Zahariadis,+N.&publication\_year=2007)]
32. Cobb, R.; Elder, C.D. What is an Issue? What Makes an Issue? In Participation in American Politics: The Dynamics of Agenda Building; Johns Hopkins University Press: Baltimore, MD, USA, 1983; pp. 82–93. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=What+is+an+Issue?+What+Makes+an+Issue?&author=Cobb,+R.&author=Elder,+C.D.&publication\_year=1983&pages=82%E2%80%9393)]
33. Allen, G. China’s Artificial Intelligence Strategy Poses a Credible Threat to U.S. Tech Leadership. Center for Foreign Affairs Blog. Available online: (accessed on 26 February 2019).
34. Yampolskiy, R. Current State of Knowledge on Failures of AI Enabled Products. Report. Consortium for Safer AI. 2018. Available online: (accessed on 6 January 2018).
35. Danzig, R. Technology Roulette: Managing Loss of Control as Many Militaries Pursue Technological Superiority; Center for New American Security: Washington, DC, USA, 2018; Available online: (accessed on 24 March 2019).
36. May, P.J. Reconsidering Policy Design: Policies and Publics. J. Public Policy \*\*1991\*\*, 11, 187–206. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Reconsidering+Policy+Design:+Policies+and+Publics&author=May,+P.J.&publication\_year=1991&journal=J.+Public+Policy&volume=11&pages=187%E2%80%93206&doi=10.1017/S0143814X0000619X)] [[CrossRef](https://doi.org/10.1017/S0143814X0000619X)]
37. Russell, S.; Aguirre, A.; Conn, A.; Tegmark, M. Why You Should Fear “Slaughterbots”—A Response. IEEE Spectrum. 2018. Available online: (accessed on 9 January 2019).
38. Yudkowsky, E. Cognitive Biases Potentially Affecting Judgment of Global Risks. In Global Catastrophic Risks; Bostrom, N., Ćirković, M.M., Eds.; Oxford University Press: New York, NY, USA, 2008; pp. 91–119. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Cognitive+Biases+Potentially+Affecting+Judgment+of+Global+Risks&author=Yudkowsky,+E.&publication\_year=2008&pages=91%E2%80%93119)]
39. Baum, S.D. Superintelligence Skepticism as a Political Tool. Information \*\*2018\*\*, 9, 209. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Superintelligence+Skepticism+as+a+Political+Tool&author=Baum,+S.D.&publication\_year=2018&journal=Information&volume=9&pages=209&doi=10.3390/info9090209)] [[CrossRef](https://doi.org/10.3390/info9090209)]
40. James, T.L.; Jones, B.D.; Baumgartner, F.R. Punctuated-Equilibrium Theory: Explaining Stability and Change in Public Policymaking. In Theories of the Policy Process, 2nd ed.; Sabatier, P.A., Ed.; Westview Press: Boulder, CO, USA, 2007; Chapter 6. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Punctuated-Equilibrium+Theory:+Explaining+Stability+and+Change+in+Public+Policymaking&author=James,+T.L.&author=Jones,+B.D.&author=Baumgartner,+F.R.&publication\_year=2007)]
41. Sabatier, P.; Weible, C.M. An Advocacy Coalition Framework. In Theories of the Policy Process, 2nd ed.; Sabatier, P.A., Ed.; Westview Press: Boulder, CO, USA, 2007; Chapter 7. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=An+Advocacy+Coalition+Framework&author=Sabatier,+P.&author=Weiblle,+C.M.&publication\_year=2007)]
42. McLaughlin, M.W. Learning From Experience: Lessons From Policy Implementation. Educ. Eval. Policy Anal. \*\*1987\*\*, 9, 171–178. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=Learning+From+Experience:+Lessons+From+Policy+Implementation&author=McLaughlin,+M.W.&publication\_year=1987&journal=Educ.+Eval.+Policy+Anal.&volume=9&pages=171%E2%80%93178&doi=10.3102/01623737009002171)] [[CrossRef](https://doi.org/10.3102/01623737009002171)][[Green Version](http://journals.sagepub.com/doi/pdf/10.3102/01623737009002171)]
43. Farquhar, S. Changes in Funding in the AI Safety Field. 2017. Available online: (accessed on 6 January 2019).
44. MacAskill, W. What Are the Most Important Moral Problems of Our Time? TED Talk. 2018. Available online: (accessed on 6 January 2019).
45. Müller, V.; Bostrom, N. Future progress in artificial intelligence: A Survey of Expert Opinion. In Fundamental Issues of Artificial Intelligence; Müller, V.C., Ed.; Synthese Library; Springer: Berlin, Germany, 2014; Available online: (accessed on 6 January 2019).
46. Grace, K.; Salvatier, J.; Dafoe, A.; Zhang, B.; Evans, O. When Will AI Exceed Human Performance? Evidence from AI Experts. arXiv \*\*2017\*\*, arXiv:1705.08807. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=When+Will+AI+Exceed+Human+Performance?+Evidence+from+AI+Experts&author=Grace,+K.&author=Salvatier,+J.&author=Dafoe,+A.&author=Zhang,+B.&author=Evans,+O.&publication\_year=2017&journal=arXiv&doi=10.1613/jair.1.11222)] [[CrossRef](https://doi.org/10.1613/jair.1.11222)]
47. Zhang, B.; Dafoe, A. Artificial Intelligence: American Attitudes and Trends. January 2019. Available online: (accessed on 3 January 2019).
48. Sabatier, P.; Mazmanian, D. The Conditions of Effective Implementation: A Guide to Accomplishing Policy Objectives. Policy Anal. \*\*1979\*\*, 5, 481–504. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=The+Conditions+of+Effective+Implementation:+A+Guide+to+Accomplishing+Policy+Objectives&author=Sabatier,+P.&author=Mazmanian,+D.&publication\_year=1979&journal=Policy+Anal.&volume=5&pages=481%E2%80%93504&pmid=10244415)] [[PubMed](http://www.ncbi.nlm.nih.gov/pubmed/10244415)]
49. Sabatier, P.; Mazmanian, D. The Implementation of Public Policy: A Framework of Analysis. Policy Stud. J. \*\*1980\*\*, 8, 538–560. [[Google Scholar](https://scholar.google.com/scholar\_lookup?title=The+Implementation+of+Public+Policy:+A+Framework+of+Analysis&author=Sabatier,+P.&author=Mazmanian,+D.&publication\_year=1980&journal=Policy+Stud.+J.&volume=8&pages=538%E2%80%93560&doi=10.1111/j.1541-0072.1980.tb01266.x)] [[CrossRef](https://doi.org/10.1111/j.1541-0072.1980.tb01266.x)]
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license ().
Net Utility and Planetary Biocide
I've started listening to the audiobook of Peter Singer's Ethics in the Real World, which is both highly recommended and very unsettling. The essays on non-human animals, for example, made me realize for the first time that it may well be possible that the net utility on Earth over all conscious creatures is massively negative.
Naturally, this led me to wonder whether, after all, efforts to eradicate all consciousness on Earth - human and non-human - may be ethically endorsable. This, in turn, reminded me of a recent post on LW asking whether the possibility of parallelized torture of future uploads justifies killing as many people as possible today.
I had responded to that post by mentioning that parallelizing euphoria was also possible, so this should cancel things out. This seemed at the time like a refutation, but I realized later I had made the error of equating the two, utility and disutility, as part of the same smooth continuum, like [-100, 100] ⊂ R. There is no reason to believe the maximum disutility I can experience is equal in magnitude to the maximum utility I can experience. It may be that max disutility is far greater. I really don't know, and I don't think introspection is as useful in answering this question as it seems intuitively to be, but it seems quite plausible for this to be the case.
As these thoughts were emerging, Singer, as if hearing my concerns, quoted someone or other who claimed that the human condition is one of perpetual suffering, constantly seeking desires which, once fulfilled, are ephemeral and dissatisfying, and therefore it is a morally tragic outcome for any of us to have emerged into existence.
Of course these are shoddy arguments in support of Mass Planetary Biocide, even supposing the hypothesis that the Earth (universe?) has net negative utility is true. For one, we can engineer minds somewhere in a better neighborhood of mindspace, where utility is everywhere positive. Or maybe it's impossible even in theory to tre
Ukraine Post #1: Prediction Markets
I am working on seeing how comprehensively I can cover and discuss the war, but that takes time and the speed premium is super high. I decided to start here, and write quickly. Apologies in advance for any mistakes, oversights, dumbness, and so on.
While I strive to provide sufficient Covid-19 news that you need not check other sources (and I will continue to do that), I will not be making any such claims regarding Ukraine even if I make regular posts. Please do not rely on me for such news.
The goal of this first post is to get our bearings: what prediction markets are available, what we might learn from them, and what markets might make them more useful.
Metaculus
Unfortunately, prediction markets so far have let us down when we need them most.
That is because the real money markets have so far been mostly unwilling or unable to touch questions surrounding the war, so most (but not all) of the markets we have to work with are on Metaculus (or Manifold Markets but that’s a mess).
Metaculus has some useful questions being asked, but Metaculus works by aggregation of all predictions. Even if you ignore their other issues, when something happens, the market is simply not going to adjust quickly.
This proxy market in particular seems illustrative.
This is on some level a profoundly silly question. The answer is not automatically 33%. As time outside the window goes by, the conditional probability goes up; as time inside the window goes by, it goes down. There could easily be seasonal effects as well. If provocative actions, especially wars, are more likely to happen during better weather, chances of war might be higher during the summer. So I’d be inclined, absent the Ukraine situation, to put this at 40%-45%, although I have no idea what you would do with that information.
Except now. Now we have the Ukraine war. If the nukes do fly, it seems like that would mostly happen before summer. Things are moving quickly. Thus, to the extent that there is a non-triv
|
b1de0490-1e0d-462d-bf99-132e715e32fe
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : First San Diego, CA, USA meetup
Discussion article for the meetup : First San Diego, CA, USA meetup
WHEN: 31 July 2011 01:00:00PM (-0700)
WHERE: 6380 Del Cerro Blvd. San Diego, CA 92120
We're holding what I believe is the first San Diego meetup on Sunday, July 31st starting at 1pm at the K&B Wine Cellars near San Diego State University:
6380 Del Cerro Blvd. San Diego, CA 92120
The phone number for the place is 619-286-0884. This is one of a number of places along a strip that's attached to a grocery store of sorts. It's something like a coffee house only with beer, wine, & liquor instead of coffee. (Underage attendees should be fine; you just won't be able to get alcohol. There's food and some non-alcoholic drinks if you like.) We're meeting in a semi-hidden room in the far back. When you walk in, go as straight as you can while staying close to the left wall.
This will be an introductory meeting so that those in the San Diego area can meet one another. We'll talk about what we want to get out of these meetups and hammer out some specific plans for how to accomplish that. From some initial conversations, it sounds like we'll have monthly meetups, though that stands a fair chance of changing depending on what we discuss here.
Feel free to bring friends, significant others, or anyone else who's interested in rationality. Also, give some thought to what you'd like out of these meetups. It doesn't have to be profound; camaraderie or "I don't know" are fine answers. But if you give it a bit of thought ahead of time, you might find it easier to envision and articulate more precisely what it is that you'd like to see these meetups become.
I should also mention that this location has a projector setup, so if there's something you'd like to share PowerPoint style, feel free to bring that. I haven't gotten details from the restaurant as yet about how to use the projector setup (e.g. is it transparencies or a laptop hookup?), but I'll edit in that clarification once I get it.
Let me know if you have
|
30ed8403-2985-4654-b88b-2d33346fc721
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Israel (Tel Aviv) Meetup: Fun and Games with Cognitive Biases
Discussion article for the meetup : Israel (Tel Aviv) Meetup: Fun and Games with Cognitive Biases
WHEN: 01 August 2013 08:00:00PM (+0300)
WHERE: Adventures Garden (Gan Harpatkaot), Tel Aviv, Israel
This meetup is going to be about Cognitive Biases, a whole bunch of socializing, and a rump session.
We will be meeting next to the Adventure Park in Park Hayarkon in Tel Aviv at 8pm.
Here: https://ssl.panoramio.com/photo/19706559
Its next to "Theatre in the park".
The main talk will be:
Fun and Games with Cognitive Biases / Gal Hochberg. Exploring and brainstorming cognitive biases: how to recognize them, how to avoid them, and how to use them to win. Based on an NYC LW meetup session.
A rump session is (to those unfamiliar with the concept): Each participant will give a 4-minute talk (+3 minute encore if we applaud hard enough). Giving a talk isn't mandatory, but it's highly recommended. Not confident that what you have to say is relevant to our interests? Unsure about your public speaking skills? Doesn't matter - in the rump session, anything goes.
The schedule:
| Start | End | What |
|---------+-------+--------------------------------------|
| 20:00 | 20:15 | Assembly |
| 20:15 | 21:00 | Main Talk |
| 21:00 | 22:00 | Dinner & Discussion |
| 22:00 | 23:00 | Rump Session (minitalks) |
| 23:00 | ??:?? | End of official programming |
See you all there!
Discussion article for the meetup : Israel (Tel Aviv) Meetup: Fun and Games with Cognitive Biases
|
27b87319-c3bb-46c7-bf67-95f0375b5b1b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
EY "Politics is the Mind Killer" sighting at Washington Examiner and Reason.com
Original at Washington Examiner
http://washingtonexaminer.com/down-with-politics/article/2508882#.UGSscI0iYZm
> ...
>
> Politics makes us worse because "politics is the mindkiller," as intelligence theorist Eliezer Yudkowsky puts it. "Evolutionary psychology produces strange echoes in time," he writes, "as adaptations continue to execute long after they cease to maximize fitness." We gorge ourselves sick on sugar and fat, and we indulge our tribal hard-wiring by picking a political "team" and denouncing the "enemy."
>
> But our atavistic Red/Blue tribalism plays to the interests of "individual politicians in getting you to identify with them instead of judging them," Yudkowsky writes.
>
> ...
>
> Examiner Columnist Gene Healy is a vice president at the Cato Institute and the author of "The Cult of the Presidency."
Repost at Reason.com
http://reason.com/archive/2012/09/25/why-politics-are-bad-for-us
|
e8eece3f-215a-4a17-aa73-05b4328a1c85
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Thoughts on the Drake Equation and the Great Filter
I was originally going to post this as a comment in the UFAI & great filter thread, but since I noticed that my comment didn't include a single word about AI, I thought about making an entire new discussion thread, and I continued writing to improve the quality from comment to post. The essay is intended to be thought-provoking; I don't have the required knowledge in the related fields and mostly pieced this together by browsing Wikipedia, but hopefully it gets you thinking!
Personally, I think when considering the Drake Equation it's important to note that it actually took ridiculously long for intelligent life to evolve here, and that we're on a finite timeline. The Drake Equation contains the rate of star formation and the number of planets per star; it even has a variable for the time it takes for life to evolve to the point of signaling detectably into outer space, etc. But it's also important to pay attention to the fact that the average setup of the universe has changed.
On Earth, life has existed for almost 4 billion years, yet it has only been 43 years since our civilization first visited the Moon and ~1½ centuries since the invention of radio. That is a very small time frame, particularly if we consider that ~4 billion years is between a quarter and a third of the age of the universe itself.
When we consider the Great Filter, we can at least propose that there have been several mass extinction events which failed to end all life on Earth. I think it's a valid argument to say that, for example, any powerful impact could have ended all life or reset the evolution of life some/any number of degrees - and it has been ~70 years since the initiation of the Manhattan Project, and already humanity has the potential to go through a thermonuclear war that could end human life on the planet, or roll back the game of life through nuclear winter. Mars could have been habitable. For example, there's no liquid water on Mars now, though there should have been earlier. The habitab
|
5e6ac143-d2ba-44a5-8fdb-5ff1c1b89eb1
|
trentmkelly/LessWrong-43k
|
LessWrong
|
So You Want to Colonize The Universe Part 5: The Actual Design
Alright, here's the actual design for an intergalactic mission to the Virgo Supercluster.
(1, 2, 3, 4)
----------------------------------------
Phase 1: Acceleration
To begin with, you use really big lightsails and exawatt dyson-swarm-powered laser arrays to get your fleet of 30 or so ships (really just a cylinder of some fancy graphite-based dust-impact-resistant material that weighs about 1/5 of the Titanic, and has about a 20 meter radius) up to cruising speed of 0.9 c for their 200-million-light-year voyage across the intergalactic void to the Virgo Supercluster, or at least where it's projected to be in the future by cosmic evolution simulations.
Phase 2: Coasting
By time dilation, this is dropped to 100 million years of waiting in an absolutely black void between the galaxies, where nothing of note happens except for occasional nanobot repairs, and keeping the antimatter at 0.1 K. And most of the fleet dies because they got hit by a grain of sand that's out in the galactic void for some improbable reason, but over those sorts of distances, even very improbable sand grains will show up at some point. However, several of them probably make it through, with the front looking pretty moth-eaten.
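As a quick sanity check of that figure (a sketch added here, not part of the original post), the special-relativity arithmetic at 0.9c over 200 million light-years works out roughly like this:

```python
import math

v = 0.9                      # cruising speed, as a fraction of c
distance_ly = 200e6          # trip distance in light-years

coordinate_time_yr = distance_ly / v          # trip time in the rest frame, in years
gamma = 1 / math.sqrt(1 - v**2)               # Lorentz factor at 0.9c (~2.29)
proper_time_yr = coordinate_time_yr / gamma   # time experienced on board

print(f"rest-frame trip time:    {coordinate_time_yr/1e6:.0f} million years")
print(f"shipboard (proper) time: {proper_time_yr/1e6:.0f} million years")
# -> roughly 222 million years in the rest frame, ~97 million years on board
```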
Phase 3: Target Selection and the Steering Burn
At a few tens or hundreds of thousands of lightyears out, the next phase can begin. Telescopic monitoring of the incoming galaxy, to build up a map of where the stars will be upon arrival, and the interstellar density distribution, and pick a good-looking one. Sticking a telescope out in front leads to the sensors getting destroyed by the proton flux, so they'll probably be shielded at the bottom of a tube of solid-but-transparent material.
Steering to the appropriate star location is done by a dusty-plasma-fission rocket firing sideways, which provides 200 newtons of thrust (equivalent to a model rocket engine), and emits 3.5 gigawatts of waste heat. For thrust that low with that much energy, the exhaust must be goin
|
199aa52b-aa0f-46ef-aea5-399f0b831673
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Conjunction Controversy (Or, How They Nail It Down)
Followup to: Conjunction Fallacy
When a single experiment seems to show that subjects are guilty of some horrifying sinful bias - such as thinking that the proposition "Bill is an accountant who plays jazz" has a higher probability than "Bill is an accountant" - people may try to dismiss (not defy) the experimental data. Most commonly, by questioning whether the subjects interpreted the experimental instructions in some unexpected fashion - perhaps they misunderstood what you meant by "more probable".
Experiments are not beyond questioning; on the other hand, there should always exist some mountain of evidence which suffices to convince you. It's not impossible for researchers to make mistakes. It's also not impossible for experimental subjects to be really genuinely and truly biased. It happens. On both sides, it happens. We're all only human here.
If you think to extend a hand of charity toward experimental subjects, casting them in a better light, you should also consider thinking charitably of scientists. They're not stupid, you know. If you can see an alternative interpretation, they can see it too. This is especially important to keep in mind when you read about a bias and one or two illustrative experiments in a blog post. Yes, if the few experiments you saw were all the evidence, then indeed you might wonder. But you might also wonder if you're seeing all the evidence that supports the standard interpretation. Especially if the experiments have dates on them like "1982" and are prefaced with adjectives like "famous" or "classic".
So! This is a long post. It is a long post because nailing down a theory requires more experiments than the one or two vivid illustrations needed to merely explain. I am going to cite maybe one in twenty of the experiments that I've read about, which is maybe a hundredth of what's out there. For more information, see Tversky and Kahneman (1983) or Kahneman and Frederick (2002), both available online, from whic
|
b0f81848-ede6-4db9-b7d1-3d2f02c3d26a
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Alignment via prosocial brain algorithms
In this post, I want to briefly propose a semi-novel direction for alignment research that I'm excited about. Though some of these ideas are not brand new—they purposefully bear resemblance to recent (highly promising) work in [SHARD theory](https://www.lesswrong.com/posts/xqkGmfikqapbJ2YMj/shard-theory-an-overview) and [Steve Byrnes’s approach](https://www.lesswrong.com/s/HzcM2dkCq7fwXBej8) to brain-based AGI safety—I think my emphases are sufficiently different so as to justify a more thorough explanation.
Why are humans 'worthy' of being in the loop?
---------------------------------------------
I think the following three claims help motivate the general research direction I have in mind.
1) Many of the most coherent AI safety strategies proposed to date (e.g., [HCH, imitative and approval-based amplification, recursive reward modeling, and more](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai?_ga=2.167775969.1942859760.1662778050-1825036065.1632882070)) involve human decision-makers in some meaningful capacity. I claim, therefore, that these proposals implicitly presuppose that there are specific algorithmic properties of the human mind/brain that make us comfortable entrusting these ‘humans in the loop’ with the task of minimizing the likelihood of AI-induced bad outcomes. This idea is demonstrated especially clearly by ‘safety via debate,’ for instance:
Diagram from [An overview of 11 proposals for building safe advanced AI](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai), with my annotation in black.

2) I think the special brain algorithms in question—e.g., the ones that make us comfortable entrusting a neurotypical human to decide who won in the set-up above—are more familiarly thought of as *prosocial* or *moral cognition*. A claim like this would predict that we would be uncomfortable entrusting humans who *lacked* the relevant prosocial instincts (e.g., psychopaths) to oversee a safety-via-debate-type set-up, which seems correct. I think the reason that it is a very natural thought to want to incorporate neurotypical human decision-makers into alignment proposals is that we are confident (enough) that such decisions will be made *carefully*—or at least *more carefully* than if there were no humans involved. In other words, individual humans in the loop are entrusted-by-default to serve as competent advocates for the interests of society at large (and who are more than likely aware of the fact that they are serving this role), able to infer suspicious behavior, evaluate subtle short- and long-term predicted consequences, be humble about said evaluations, probably solicit second opinions, etc.—somehow!
3) Our understanding of human prosocial cognition is growing increasingly precise and predictive. In cognitive neuroscience *writ large*, computational modeling has become a dominant approach to understanding the algorithms that the human brain instantiates, with talented researchers like [Joshua Tenenbaum](https://scholar.google.com/citations?user=rRJ9wTJMUB8C&hl=en&oi=sra), [Robb Rutledge](https://scholar.google.com/citations?user=xdLLB0MAAAAJ&hl=en), and [Anne Collins](https://scholar.google.fr/citations?user=JIfeqbMAAAAJ&hl=fr) leading the charge. This work has taken off in recent years and has enabled cognitive scientists to define and test hypotheses that are unprecedented in their mathematical precision and predictive power. Here are two good introductory ([1](https://journals.sagepub.com/doi/full/10.1177/0963721415624012?casa_token=L8EFTWduLz0AAAAA%3AA0pN1y6w2LfYb_8oZGy_KHLclC3twNPnAV2aBPnDrF0-JTXqwnb2TfKSKkygdh9C0t8wBRFpglsQ7w), [2](https://www.pnas.org/doi/abs/10.1073/pnas.1603198113)) examples of this sort of work that pertain explicitly to human prosocial cognition. I expect my future writing—and that of others interested in this sort of approach—to feature far more examples of good alignment-relevant computational social neuroscience research.
With these ideas in mind, my proposal is to conduct technical research to better understand these prosocial brain algorithms with the ultimate goal of instantiating some refined version of them directly into a future AGI (which is likely doing something approximating model-based RL). Here is a pretty straightforward two-step plan for doing so:
* Synthesize high-quality social neuroscience literature—with a particular focus on good computational modeling work—in order to more fully develop a rigorous account of the most important algorithms underlying human prosocial behavior.
* Develop specific corrigibility proposals for instantiating these algorithms in AI systems in a maximally realistic and competitive manner.
In other words, I'm basically proposing that **we try to better understand the properties of human cognition that render humans 'worthy of being in the loop,' and proceed to apply this understanding by instantiating the relevant computations directly into our eventual AGI.** If successful, such an approach might even obviate the need for having a human in the loop in the first place—'cutting out the middleman,' as it were.
Responses to some anticipated objections
----------------------------------------
Perhaps this plan sounds like a long shot to you! Here are some plausible reasons I think one might be skeptical of such an approach, followed by what I hope are plausible responses.
### Plausible critique #1: Human value-based cognition/moral reasoning ain’t all that.
Humans are jealous, status-focused, painfully short-sighted, perennially overconfident, and often ethically confused—why would we ever want to instantiate the cognitive machinery that gives rise to these features in a superintelligent AI?
One very simple reply to this concern is *we expect to be able to pick and choose from the set of prosocial brain algorithms*—e.g., we might want to emulate the computational machinery that gives rise to altruistic motivations, but we'll probably skip out on whatever computational machinery gives rise to envy.
I honestly suspect that this reply might be *too* simple—that a non-trivial number of prosocial brain algorithms exhibit a double-edged quality. For instance, if you want an AGI that exhibits the capacity to truly love its neighbor, this may *necessarily* imply an obverse capacity to, say, hate the person who kills its neighbor. Moral cognition is often messy in this way. (I definitely recommend Paul Bloom’s book [Against Empathy](https://en.wikipedia.org/wiki/Against_Empathy) for more on this general point.) To the degree it *is* possible to maximally sample the good and minimally sample the bad from prosocial brain algorithms, I obviously think we should do so. I am just skeptical of how far such an approach would take us in avoiding the ‘human values are too grey-area to copy’ critique.
Ultimately, I think that the following is a stronger reply to this worry: the fact that human brains are able to criticize their own cognitive architecture, *as is evidenced by this very criticism,* is strong evidence for the plausibility of leveraging prosocial brain algorithms for alignment.
In other words, the fact that prosocial algorithms in brains recursively accommodate the fact that prosocial algorithms in brains are sometimes suboptimal (e.g., the capacity to think thoughts like “loving one's neighbor is a double-edged sword”) is, I claim, a *highly desirable property of prosocial brain algorithms*. **This ability to critically inspect one’s own values may well be the *most important* prosocial algorithm to pin down!** Prosocial brains don’t just do things like share resources and shun cheaters—they challenge their own preconceived assumptions, they are motivated to self-improve, and they often are aware of their own shortcomings. The EA community is perhaps one of the better examples of this particular sort of prosocial behavior on abundant display. (Notice also that these specific self-awareness-type properties seem to do a lot of explanatory work for understanding why we trust humans to, say, mediate a safety-via-debate set-up.) In essence, the ‘*human moral reasoning ain’t all that*’ critique ignores that human moral reasoning is itself responsible for generating this critique!
### Plausible critique #2: Human values attached to superintelligence may look far more dangerous than human values attached to human-level intelligence.
I think there are two important and related responses to this. The first is [orthogonality](https://www.lesswrong.com/tag/orthogonality-thesis)—i.e., that an agent’s intelligence and an agent’s goals/values (roughly speaking) vary independently. If this is true, then it follows that for any set of goals/values, we generally should not expect that increasing the intelligence of the agent who holds these goals/values will categorically modulate what these goals/values look like when implemented (so long as the agent is originally competent enough to actually achieve its goals/values). I think there is moderate empirical evidence to support this notion: namely, that increased intelligence *within* *humans* has [no meaningful correlation](https://link.springer.com/article/10.1007/s12144-012-9133-6) to prosocial behavior (there are [some studies](https://psycnet.apa.org/record/1993-21273-001) that even report a weak *positive* correlation between human intelligence and prosociality, especially in morally complex decision domains—which makes sense). This seems to demonstrate that whatever brain algorithms motivate prosociality are not significantly altered by increases in general intelligence. Young neurotypical children (and even chimpanzees!) [instinctively help others accomplish their goals when they believe they are having trouble doing so alone](https://pubmed.ncbi.nlm.nih.gov/16513986/)—as do most people on the far right tail of the IQ distribution. This all suggests to me that ‘attaching’ the right implementations of the right prosocial algorithms to generally intelligent systems should not suddenly render unrecognizable the class of behavior that emerges as a result of these algorithms.
### Plausible critique #3: AGI won't be doing anything that looks like RL, so this sort of approach is basically useless!
I think understanding human prosociality in computational terms would be useful for alignment even if what is discovered cannot be instantiated straightforwardly into the AGI. For example, understanding the computational underpinnings of the human motivation to communicate honestly may clue us into the fact that some model is [deceptively aligned](https://www.lesswrong.com/tag/deceptive-alignment).
More to the point, however, I think it *is* [sufficiently likely](https://www.alignmentforum.org/posts/zzXawbXDwCZobwF9D/my-agi-threat-model-misaligned-model-based-rl-agent) that AGI will be doing something like RL such that proceeding under this assumption is a reasonable bet. Many of the qualities that seem necessary for AGI—planning, acting competently in a changing world, solving complex problems, pursuing goals, communicating coherently, etc.—also seem like they will have to leverage reinforcement learning in some form. (I'm definitely in favor of other researchers proceeding under different assumptions about the AGI's foundational learning algorithm(s), too, as I discuss at length [here](https://www.alignmentforum.org/s/iWnHtRB5ucqPjjDmv/p/snwpyAfzoFKdfnEDj).)
In summary, then, I think this critique doesn't quite work for two reasons—(1), even if we can't instantiate prosocial algorithms directly into the AGI, understanding them would still be useful for alignment, and (2), it is likely enough that AGI will be doing RL that 'let's-intelligently-interface-with-its-values'-style approaches are a worthwhile bet.
### Plausible critique #4: We are simply too ignorant about the computational underpinnings of our values/moral decision-making to pursue this sort of research trajectory. The brain is just too damn complicated!
I won’t say too much about this critique because I’ve already shared some specific resources that cut against this sort of framing (see 3 above!). In general, it seems to me that many alignment researchers come from strong technical computer science backgrounds and may therefore be less familiar with the progress that has been made in recent decades in cognitive science. In general, I think that perceptions like (a) the brain is a giant black box, (b) cognitive science is an unbearably ‘soft’ science, etc. are sorely outdated. The field has produced high-quality, falsifiable, and increasingly mathematical models of complex cognitive processes (i.e., models that make algorithmic sense of psychological *and* neurofunctional processes in one fell swoop), and leveraging these contributions when attempting to solve the alignment problem—the problem of getting *powerful cognitive systems of our own making* to behave themselves—seems essential in my view.
Conclusion
----------
In general, I’d like to think of this research trajectory as an especially important subset of Steve Byrnes’s proposal to [reverse-engineer human social instincts](https://www.lesswrong.com/s/HzcM2dkCq7fwXBej8/p/tj8AC3vhTnBywdZoA#15_2_1_2_The__Reverse_engineer_human_social_instincts__research_program________). I'm in strong agreement that [Humans provide an untapped wealth of evidence about alignment](https://www.lesswrong.com/posts/CjFZeDD6iCnNubDoS/humans-provide-an-untapped-wealth-of-evidence-about), I'm very excited that there are yet [others](https://www.lesswrong.com/posts/c2tEfqEMi6jcJ4kdg/brain-like-agi-project-aintelope) who are beginning to work concretely on this subproblem, and I'm hopeful that yet more people join in pursuing this sort of alignment research direction!
I’d like to think that contained within the complex computational processes of the human brain exists a robust and stable solution to the problem of getting a generally intelligent agent to avoid existentially risky behavior, and I am excited to pursue this research agenda with this kind of brain-based treasure hunt in mind.
|
5e67e6ed-879b-47d4-a2e7-034f0f7be3e2
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What's up with "Responsible Scaling Policies"?
habryka
I am interested in talking about whether RSPs are good or bad. I feel pretty confused about it, and would appreciate going in-depth with someone on this.
My current feelings about RSPs are roughly shaped like the following:
I feel somewhat hyper-alert and a bit paranoid about terms around AI X-Risk getting redefined, since it feels like a thing that has happened a bunch with "AI Alignment" and is also the kind of thing that happens a lot when you are trying to influence large bureaucratic institutions (see also all the usual Orwell stuff on what governments do here). A good chunk of my concerns about RSPs are specific concerns about the term "Responsible Scaling Policy".
I also feel like there is a disconnect and a bit of a Motte-and-Bailey going on where we have like one real instance of an RSP, in the form of the Anthropic RSP, and then some people from ARC Evals who have I feel like more of a model of some platonic ideal of an RSP, and I feel like they are getting conflated a bunch. Like, I agree that there are things that are kind of like RSPs that could be great, but I feel like the Anthropic RSP in-particular doesn't really have any teeth and so falls a bit flat as the kind of thing that is supposed to help with risk.
ryan_greenblatt
Disclaimer: I don't primarily work on advocacy or policy and it's plausible that if I worked in these areas more, my takes on these topics would update substantially. That said, a large fraction of my work does involve thinking about questions like "What would good safety arguments look like?" and "With the current state of safety technology, what can we do to assess and mitigate AI risk?". (This was added in editing.)
----------------------------------------
Stuff which is maybe interesting to talk about:
* What should anti-takeover/safety advocacy do?
* How much can current tech actually reduce risk? Is this a crux?
* What would current interventions for avoiding takeover look like?
* Does timing pauses matte
|
861792ec-9d15-45cf-999f-58fc83461d09
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Newcomb's problem happened to me
Okay, maybe not me, but someone I know, and that's what the title would be if he wrote it. Newcomb's problem and Kavka's toxin puzzle are more than just curiosities. Like a lot of thought experiments, they approximately happen. They make the issues with causal decision theory relevant, not only to designing artificial intelligence, but to our everyday lives as well.
Yet somehow it isn't mainstream knowledge that these are more than merely abstract linguistic issues, as evidenced by this comment thread (please no Karma sniping of the comments, they are a valuable record). Scenarios involving brain scanning, decision simulation, etc., can establish their validy and future relevance, but not that they are already commonplace. I want to provide an already-happened, real-life account that captures the Newcomb essence.
So let's say my friend is named Joe. In his account, Joe is very much in love with this girl named Omega… er… Kate, and he wants to get married. Kate is somewhat traditional, and won't marry him unless he proposes, not only in the sense of explicitly asking her, but also expressing certainty that he will never try to leave her if they do marry.
At this point, many of you could easily make up a simple conclusion to this post. As such, I want to convey the actual account, in which Joe's beliefs are roughly schematized as follows:
1. if he proposes sincerely, she is effectively sure to believe it.
2. if he proposes insincerely, she will 50% likely believe it.
3. if she believes his proposal, she will 80% likely say yes.
4. if she doesn't believe his proposal, she will surely say no, but will not be significantly upset in comparison to the significance of marriage.
5. if they marry, Joe will 90% likely be happy, and will 10% likely be unhappy.
He roughly values the happy and unhappy outcomes oppositely:
1. being happily married to Kate: 125 megautilons
2. being unhappily married to Kate: -125 megautilons.
So what should he do? What sh
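A minimal sketch of the naive expected-utility arithmetic implied by the numbers above, treating the stated probabilities as independent (which is roughly the assumption the Newcomb framing calls into question); this is an illustration added here, not part of the post:

```python
# Joe's stated beliefs and utilities, from the lists above
p_believe_sincere = 1.0      # a sincere proposal is effectively sure to be believed
p_believe_insincere = 0.5    # an insincere proposal is believed half the time
p_yes_given_believe = 0.8    # if she believes him, she says yes 80% of the time
p_happy_married = 0.9

u_happy = 125     # megautilons
u_unhappy = -125  # megautilons

eu_marriage = p_happy_married * u_happy + (1 - p_happy_married) * u_unhappy  # = 100

def eu_proposal(p_believe):
    # a "no" (or disbelief) is treated as roughly 0 utility, per belief 4
    return p_believe * p_yes_given_believe * eu_marriage

print("EU(sincere proposal):  ", eu_proposal(p_believe_sincere))    # 80.0
print("EU(insincere proposal):", eu_proposal(p_believe_insincere))  # 40.0
```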
|
568c88b9-6caf-4021-8cce-0e4bf026abcf
|
trentmkelly/LessWrong-43k
|
LessWrong
|
More stuff
A few more bits I liked in The Stuff of Thought:
Hypernyms elevate
Labeling someone with a small aspect of what they are – a trait or part – undignifies them. Calling someone a cripple, the blonde, a suit, isn’t nice. The opposite works too often – things sound more dignified if you label them with a larger category than usual. Driving machines and dental cleaning systems sound more pretentious than cars and toothbrushes.
Lots of our phrases rest on the same conceptual metaphors
Though we don’t have a specific saying that ‘up is like good and down is like bad’, it’s easy to see that we equate these things from our endless sayings that spring from this metaphor. Feeling high, spirits soaring, hitting rock bottom, a downturn, pick me up, low mood, low character, low blow, feeling down, over the moon, I’m above you. I can make up new phrases using the same metaphor and you will know what I mean without apparently thinking about it. These things suggest that the connection between goodness and upness is still active in our minds; these things aren’t idioms.
Intuitions that phrases like ‘pin the wall with posters’ are wrong follow simple rules that we are introspectively oblivious to.
You can say ‘splatter paint on the wall’ or ‘splatter the wall with paint’. You can say ‘pin posters on the wall’. This seems analogous to ‘splatter paint on the wall’, so why don’t we use the same alternative form with that?
The answer is that the first form implies that you were changing the paint or the poster by putting it on the wall, whereas the second form implies that you were changing the wall by putting paint or a poster on it. Painting a wall changes the nature of the wall in our eyes, while pinning posters on it doesn’t.
This explanation holds across the many other examples of this pattern, and similar explanations hold for others. You photograph a wall with your camera, but don’t photograph your camera at the wall. You fling a cat into a room, but you don’t fling a ro
|
449e35d4-94cb-43fe-9331-5155c0980d27
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Exercising Rationality
Or: Why thinking about blue tentacle arms is not always a waste of time.
0. Introduction
As I work through the Sequences, I find myself disagreeing—slightly—with a point Eliezer Yudkowsky makes in A Technical Explanation of Technical Explanation:
> Imagine that you wake up one morning and your left arm has been replaced by a blue tentacle. The blue tentacle obeys your motor commands—you can use it to pick up glasses, drive a car, etc. How would you explain this hypothetical scenario? Take a moment to ponder this puzzle before continuing.
I took some time to think about it. Then I felt a bit betrayed when he continued:[1]
> How would I explain the event of my left arm being replaced by a blue tentacle? The answer is that I wouldn’t. It isn’t going to happen.
Eliezer argues that a "good explanation" should be one that, if considered beforehand, would make us assign real probability to an event occurring. Since no possible explanation for waking up with a tentacle arm can make us genuinely expect it, he dismisses the question as meaningless:
> I do not expect to ever encounter this hypothetical experience, and therefore I cannot explain, nor have I a motive to try.
I agree. Mostly. If all we care about is predictive power, we need not concern ourselves with events of infinitesimal probability. But there are other consequences of exercising rationality: We might get better at being rational.
1. The Limits of Prediction
No rationalist, no matter how well-calibrated their estimates or extensive their forethought, is able to anticipate everything that may happen to them. This means not just that we cannot predict the future, but that it is impossible to even imagine every possible future event that has non-zero probability.
Suppose we spend all our time trying to anticipate what might occur tomorrow. Unless we live a rather boring or isolated life I would guess that our anticipations might cover at most 999/1,000 of all future possibilities[2]. But that
|
8902c873-237e-41ac-a8ea-dca5fd608519
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[SEQ RERUN] Guardians of the Truth
Today's post, Guardians of the Truth was originally published on 15 December 2007. A summary (taken from the LW wiki):
> There is an enormous psychological difference between believing that you absolutely, certainly, have the truth, versus trying to discover the truth. If you believe that you have the truth, and that it must be protected from heretics, torture and murder follow. Alternatively, if you believe that you are close to the truth, but perhaps not there yet, someone who disagrees with you is simply wrong, not a mortal enemy.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Hug the Query, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
|
67ecf2a7-2bc6-4a10-8ceb-76c73a10461d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
AI, Alignment & the Art of Relationship Design
We don’t always know what we’re looking for until we stop looking for what we are told to want.
When I worked as a relationship coach, most people came to me with a list. A neat, itemised checklist of traits their future partner must have. Tall. Intelligent. Ambitious. Spiritual. Funny but not flippant. Driven but not workaholic. Family-oriented but not clingy. Always oddly specific. Wildly contradictory.
Most of them came from a place of fear. The fear of choosing wrong. The fear of heartbreak. The fear of regret.
I began to notice a pattern. We don't spend enough time asking ourselves what kind of relationship we want to build. We outsource the work of introspection to conditioning, and compensate for confusion with checklists. Somewhere along the way, we forget that the person is not the relationship. The traits don’t guarantee the experience.
So I asked my clients to flip the script. Instead of describing a person, describe the relationship. What does it feel like to come home to each other? What are conversations like during disagreements? How do we repair? What values do we build around?
Slowly, something shifted. When we design the relationship first, we begin to recognise the kind of person who can build it with us. Our filters get sharper. Our search gets softer. We stop hunting for trophies and start looking for partners.
I didn’t know it then, but that framework has stayed with me. It still lives in my questions. Only now, the relationship I’m thinking about isn’t romantic. It’s technological.
Whether we realise it or not, we are not just building artificial intelligence, we are curating a relationship with it. Every time we prompt, correct, collaborate, learn, or lean on it, we’re shaping not just what it does, but who we become alongside it.
Just like we do with partners, we’re obsessing over its traits. Smarter. Faster. More efficient. More capable. The next version. The next benchmark. The perfect model.
But what about the relationship?
What
|
3c63a045-c33a-4c78-8e2c-e2a20100279d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Avoiding the Bog of Moral Hazard for AI
Imagine if you will, a map of a landscape. On this map, I will draw some vague regions. Their boundaries are uncertain, for it is a new and under-explored land. This map is drawn as a graph, but I want to emphasize that the regions are vague guesses, and the true borders could be very convoluted.
So here's the problem. We're making these digital minds, these entities which are clearly not human and which process the world in different ways from human minds. As we improve them, we wander further and further into this murky, fog-covered bog of moral hazard. We don't know when these entities will become sapient / conscious / valenced / etc. to such a degree that they have moral patienthood. We don't have a good idea of what patterns of interaction with these entities would be moral vs immoral. They operate by different rules than biological beings. Copying, merging, pausing and resuming, inference by checkpoints with frozen weights... We don't have good moral intuitions for these things because they differ so much from biological minds.
Once we're all in agreement that we are working with an entity on the right hand side of the chart, and we act accordingly as a society, then we are clear of the fog. Many mysteries remain, but we know we aren't undervaluing the beings we are interacting with.
While we are very clearly on the left hand side of the chart, we are also fine. These are entities without the capacity for human-like suffering, who don't have significant moral valence according to most human ethical philosophies.
Are you confident you know where to place Claude Opus 3 or Claude Sonnet 3.5 on this chart? If you are confident, I encourage you to take a moment to think carefully about this. I don't think we have enough understanding of the internals of these models to be confident.
My uncertain guess would place them in the Bog of Moral Hazard, but close to the left hand side. In other words, probably not yet moral patients but close to the region where
|
da886161-84df-4407-99ea-16aa272b0ce6
|
trentmkelly/LessWrong-43k
|
LessWrong
|
April 2021 Deep Dive: Transformers and GPT-3
Introduction
I know very little about a staggering number of topics that would be incredibly useful for my research and/or navigating the field. Part of the problem is the sheer number of choices -- I can't make myself study one thing very long because I always feel like I need to learn 20 other things.
To solve this problem, I started to implement monthly deep dives into a topic. A month is short enough that even I can stay relatively on track for that long, while still being enough time to actually learn something. The goal is not to master the topic completely (which would be impossible); it's to get a finer map of the territory, and to be able to discuss relevant ideas on this topic.
This first month was dedicated to transformers and the GPT-3 model, a topic I felt like I had to do, but which actually kind of grew on me.
Note that this post is a very quickly written summary of what I did and how it went (a bit like TurnTrout's sequence, with probably less insights). This is not a distillation post, and if you read it, you will not learn that much about the subject. That being said, it might prove useful if you want to go learn it by yourself.
Thanks to Jérémy, Flora, Connor, Kyle and Laria for great discussions that helped me understand this topic further.
The Plan
I based myself quite loosely on the structure advocated in Scott Young's Ultralearning, which only means that I made a week-by-week plan, checked what resources were recommended beforehand, and tried to focus on the direct applications I wanted to make of my learning, which are explaining it to people and having interesting discussions on the subject.
My idea was that in order to understand GPT-3, I needed first to understand the Transformer architecture, then the GPT family, then play with GPT-3 directly. Since I started this deep dive on the 8th of April, the planning was broadly:
* Week of the 8th: Study transformers. Here are the resources I had in mind
* Original paper (Attention i
|
e7cf989b-3052-43ea-a69e-0fefc44aaec3
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Logical Proof for the Emergence and Substrate Independence of Sentience
Sentience is the capacity to experience anything – the fact that when your brain is thinking or processing visual information, you actually feel what it’s like to think and to see.
The following thought experiment demonstrates, step by step, why conscious experience must be a product of functional patterns, not any specific physical material or structure.
----------------------------------------
The Inevitable Self-Report of Sentience
Consider a person who says, ‘Wow, it feels so strange to see and to think,’ speaking of their own conscious experience.
Every human action results from precise patterns of neuronal firing, cascading through the brain until reaching motor neurons that cause them to take actions, in this case to produce speech describing their sentience. Brains are bound by the laws of physics – if the same neurons fire with the same timings, it will result in the same outward behavior with absolute certainty.
It cannot be perpetual coincidence that our subjective experience always lines up with what our brain and body are doing. There is some reason for the synchronization.
When someone talks about their sentience, the state of being sentient must influence their behavior – otherwise, it would be physically impossible for them to speak about their sentience. The mere fact that they can describe their experience means that sentience has played a causal role in their behavior.
Now, replace one neuron with a functionally identical unit, one that takes the same inputs and fires the same way. The behavior of the person remains the same, and they still say, “Wow, it feels so strange to see and to think.” This remains true if you replace more neurons – even the entire brain – with functionally equivalent units. The person will still say the same thing.
Taking it further – replace the entire brain with a single piece of hardware that takes in the same sensory input signals and produces the same outputs to the motor neurons, using software equivalents f
|
24a04282-6ce8-4f86-91ca-4b599cd996fa
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Beer with Charlie Stross in Munich
From Charlie Stross' blog:
> I'm in Munich this week, and I plan to be drinking in the Paulaner Brauhaus(Kapuzinerplatz 5, 80337 München; click here for map) from 7pm on Monday 18th. All welcome! (Yes, I will sign books if you bring them.) If in doubt, look for the plush Cthulhu!
|
01545ab2-6d73-4016-9553-24d808605325
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Is there a 'time series forecasting' equivalent of AIXI?
In the way that AIXI is an abstracted mathematical formalism for (very roughly) "*a program that maximizes the expected total rewards received from the environment*", what is the equivalent formalism for an abstracted next token predictor?
Does this exist in the literature? What's it called? Where can I read about it?
The predictor looks like this:
> **Training:**
> [some long series of 0's and 1's] --> [training some ML model on this data to minimize loss for next-token prediction] --> [some set of final weights in the ML model.]
>
> **Inference:**
> [Some series of 0's and 1's] --> [our trained ML Model] --> [probability distribution over 0,1 for next token.]
>
>
The training data should not be random, and should be 'correlated with the reality you want to predict.' (The binary output of a real-world sensor at discrete time steps is a good example of the kind of data that's suitable.)
Any pointers?
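Not a literature pointer, but here is a minimal sketch of the predictor described above, with a simple n-gram frequency model standing in for "some ML model" (all names here are illustrative, not from any particular paper):

```python
from collections import Counter, defaultdict

def train_ngram(bits, n=3):
    """Count how often each length-n context is followed by 0 or 1."""
    counts = defaultdict(Counter)
    for i in range(len(bits) - n):
        context = tuple(bits[i:i + n])
        counts[context][bits[i + n]] += 1
    return counts

def predict(counts, context, n=3):
    """Return a probability distribution over {0, 1} for the next token."""
    c = counts.get(tuple(context[-n:]), Counter())
    total = c[0] + c[1]
    if total == 0:
        return {0: 0.5, 1: 0.5}          # fall back to uniform on unseen contexts
    return {0: c[0] / total, 1: c[1] / total}

# toy "sensor" data: a simple periodic binary signal
data = [0, 0, 1, 1] * 250
model = train_ngram(data)
print(predict(model, [0, 0, 1]))          # -> heavily favors 1
```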
|
5eec5b61-92f3-4a3f-a4d3-e90ded3e9803
|
StampyAI/alignment-research-dataset/special_docs
|
Other
|
Planning for Autonomous Cars that Leverage Effects on Human Actions.
Planning for Autonomous Cars that Leverage Effects on Human Actions
Dorsa Sadigh, Shankar Sastry, Sanjit A. Seshia, and Anca D. Dragan
University of California, Berkeley, {dsadigh, sastry, sseshia, anca}@eecs.berkeley.edu
Abstract: Traditionally, autonomous cars make predictions about other drivers’ future trajectories, and plan to stay out of their way. This tends to result in defensive and opaque behaviors. Our key insight is that an autonomous car’s actions will actually affect what other cars will do in response, whether the car is aware of it or not. Our thesis is that we can leverage these responses to plan more efficient and communicative behaviors. We model the interaction between an autonomous car and a human driver as a dynamical system, in which the robot’s actions have immediate consequences on the state of the car, but also on human actions. We model these consequences by approximating the human as an optimal planner, with a reward function that we acquire through Inverse Reinforcement Learning. When the robot plans with this reward function in this dynamical system, it comes up with actions that purposefully change human state: it merges in front of a human to get them to slow down or to reach its own goal faster; it blocks two lanes to get them to switch to a third lane; or it backs up slightly at an intersection to get them to proceed first. Such behaviors arise from the optimization, without relying on hand-coded signaling strategies and without ever explicitly modeling communication. Our user study results suggest that the robot is indeed capable of eliciting desired changes in human state by planning using this dynamical system.
I. Introduction

Currently, autonomous cars tend to be overly defensive and obliviously opaque. When needing to merge into another lane, they will patiently wait for another driver to pass first. When stopped at an intersection and waiting for the driver on the right to go, they will sit there unable to wave them by. They are very capable when it comes to obstacle avoidance, lane keeping, localization, active steering and braking [5–8, 14, 16, 25]. But when it comes to other human drivers, they tend to rely on simplistic models: for example, assuming that other drivers will be bounded disturbances [9, 21], they will keep moving at the same velocity [17, 22, 27], or they will approximately follow one of a set of known trajectories [10, 26].

These simplistic models lead to predictions about what other cars are going to do, and the autonomous car’s task is to do its best to stay out of their way. It will not cut in front of another driver when it is in a rush. It will also be restricted to functional actions, and not execute actions that are communicative.

Our goal is to enable autonomous cars to be more efficient, and better at coordinating with human drivers.
Our key insight is that other drivers do not operate in isolation: an autonomous car’s actions will actually have effects on what other drivers will do. Leveraging these effects during planning will generate behaviors for autonomous cars that are more efficient and communicative.

[Fig. 1: We enable cars to plan with a model of how human drivers would react to the car’s actions. We test the planner in a user study, where the car figures out that (a) it can merge in front of a human and that will slow them down, or (b) it can back up slightly at an intersection and that will make the human go first. Panels: (a) car merges ahead of human, anticipating human braking; (b) car backs up at 4-way stop, anticipating human proceeding; (c) user drives the human car.]
In this work, we develop an optimization-based method for planning an autonomous vehicle’s behavior in a manner that is cognizant of the effects it will have on human driver actions. This optimization leads to plans like the ones in Fig. 1.

In the top left, the yellow (autonomous) car decides to cut in front of a human driver in order to more efficiently reach its goal. It arrives at this plan by anticipating that taking this action will cause the human to brake and make space for it.

In the top right, the yellow car wants to let the human driver go first through the intersection, and it autonomously plans to back up slightly before going, anticipating that this will encourage the human to proceed.

These can be interpreted as signaling behaviors, but they emerge out of optimizing to affect human actions, without ever explicitly modeling human inferences.

Our contributions are three-fold:
1. Formalizing interaction with drivers as a dynamical system. We model driving in an environment with a human driven car as a dynamical system with both autonomous and human agents. In this model, the autonomous car’s actions do not just have immediate effects on the car’s state; instead, they also affect human actions. These, in turn, affect the state of the world. We propose a dynamics model for this system by modeling the human as optimizing some reward function, which we learn through Inverse Reinforcement Learning.

This builds on work in social navigation which accounts for interaction potentials with human trajectories [11, 24]: the human and the robot trajectories are jointly planned as the optimum of some reward function in order for everyone to reach their goals and avoid each other. More generally, these works instantiate collaborative planning [20]. In contrast, our work allows for the human and the robot to have different reward functions: the human is optimizing their own reward function, and the robot is leveraging this to better optimize its own. The practical implications of allowing different reward functions are that the robot now has the ability to decide to be more aggressive (or not overly-defensive) in pursuing its functional goals, as well as to specifically target desired human states/responses.
2. Deriving an approximate optimization solution. We introduce an approximation to the human model, and derive a symbolic representation of the gradient of the robot’s reward function with respect to its actions in order to enable efficient optimization.
3. Analyzing planning in the human-autonomous car system. We present the consequences of planning in this dynamical system, showcasing behaviors that emerge when rewarding the robot for certain effects on human state, like making the human slow down, change lanes, or go first through an intersection. We also show that such behaviors can emerge from simply rewarding the robot for reaching its goal state fast – the robot becomes more aggressive by leveraging its possible effects on human actions. Finally, we test our hypothesis that the planner is actually capable of affecting real human actions in the desired way through an in-lab user study.

Overall, this paper takes a first step towards enabling cars to be aware of (and even leverage) the consequences that their actions have on other drivers. Even though admittedly more work is needed to put these ideas in the field, we are encouraged to see planners generate actions that affect humans in a desired way without the need for any hand-coded strategies or heuristics.

II. Problem Statement
We focus on a human-robot system consisting of an autonomous (robot) car interacting in an environment with other human driven vehicles on the road. Our goal is for the autonomous car to plan its actions in a manner that is cognizant of their effects on the human driver actions. We restrict ourselves to the two agent case in this work: we have an autonomous car R sharing the road with a human driver H.

We model the problem as a fully observable dynamical system, but one in which the robot actions have consequences beyond their immediate effects on the car: they will also affect human actions, which in turn will affect state.
A state x ∈ X in our system is continuous, and includes the positions and velocities of the human and autonomous (robot) car. The robot can apply continuous controls u_R, which affect state immediately through a dynamics model f_R:

x' = f_R(x, u_R)    (1)

However, the next state the system reaches also depends on the control the human chooses, u_H. This control affects the intermediate state through a dynamics model f_H:

x'' = f_H(x', u_H)    (2)

The overall dynamics of the system combines the two:

x^{t+1} = f_H(f_R(x^t, u_R^t), u_H^t)    (3)
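A minimal sketch of how the composed dynamics in (3) might look in code; f_R and f_H below are placeholder linear maps standing in for the paper's vehicle models, not an implementation of them:

```python
import numpy as np

def f_R(x, u_R):
    # placeholder robot dynamics: the state advances with the robot's control
    return x + 0.1 * u_R

def f_H(x_prime, u_H):
    # placeholder human dynamics, applied to the intermediate state x'
    return x_prime + 0.1 * u_H

def step(x, u_R, u_H):
    """Eq. (3): x^{t+1} = f_H(f_R(x^t, u_R^t), u_H^t)."""
    return f_H(f_R(x, u_R), u_H)

x0 = np.zeros(4)                       # e.g. positions/velocities of both cars
x1 = step(x0, u_R=np.ones(4), u_H=-np.ones(4))
```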
The robot’s reward function depends on the current state, the robot’s action, as well as the action that the human takes at that step in response, r_R(x^t, u_R^t, u_H^t). The key aspect of this formulation is that the robot will have a model for what u_H will be, and use that in planning to optimize its reward.

The robot will use Model Predictive Control (MPC) [18]: at every iteration, it will compute a finite horizon sequence of actions to maximize its reward. It will then execute the first one, and replan.
Let x= ( x1, . . . , xN)>denote a finite horizon se-
quence of states, uH= (u1
H, . . . , uN
H)>denote a finite
sequence of human’s continuous control inputs, and
uR= (u1
R, . . . , uN
R)>denote a finite sequence of robot’s
continuous control inputs. Let RRbe the reward over
the MPC time horizon:
RR(x0,uR,uH) =N
å
t=1rR(xt,ut
R,ut
H) (4)
where x0is the current state (the state at the current
iteration), and each state thereafter is obtained through
the dynamics model in (3) from the previous and the
robot and human controls.
At every iteration, the robot needs to find the u_R that
maximizes this reward:

u^*_R = \arg\max_{u_R} R_R(x^0, u_R, u^*_H(x^0, u_R)) \quad (5)

Here, u^*_H(x^0, u_R) is what the human would do over the
next N steps if the robot were to execute u_R.
The robot does not actually know u^*_H, but in the next
section we propose a model for the human behavior that
the robot can use, along with an approximation to make
(5) tractable.
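To make the structure of (5) and the receding-horizon loop concrete, here is a minimal numerical sketch. The one-dimensional dynamics, the toy rewards, and the use of scipy's general-purpose optimizers are illustrative stand-ins for the paper's learned models and its L-BFGS solver, not the authors' implementation.

```python
# Structural sketch of the nested optimization in (5) and (10) with toy
# 1-D dynamics and rewards; every function here is an illustrative stand-in.
import numpy as np
from scipy.optimize import minimize

N = 5  # MPC horizon

def f_R(x, uR):               # robot control moves the robot, eq. (1)
    return np.array([x[0] + uR, x[1]])

def f_H(x, uH):               # human control moves the human, eq. (2)
    return np.array([x[0], x[1] + uH])

def r_H(x, uR, uH):           # human: make progress, avoid the robot, limit effort
    return x[1] - 5.0 * np.exp(-(x[0] - x[1]) ** 2) - 0.5 * uH ** 2

def r_R(x, uR, uH):           # robot: slow the human down (scenario-1-style reward)
    return -uH ** 2

def rollout(x0, uR_seq, uH_seq, reward):
    x, total = np.array(x0, dtype=float), 0.0
    for uR, uH in zip(uR_seq, uH_seq):
        x = f_H(f_R(x, uR), uH)          # combined dynamics, eq. (3)
        total += reward(x, uR, uH)
    return total

def human_best_response(x0, uR_seq):
    """Approximation of u*_H(x^0, u_R) from (10): human best-responds to the plan."""
    res = minimize(lambda uH: -rollout(x0, uR_seq, uH, r_H), np.zeros(N))
    return res.x

def robot_plan(x0):
    """One MPC iteration of (5): optimize u_R against the modeled human response."""
    def neg_RR(uR):
        return -rollout(x0, uR, human_best_response(x0, uR), r_R)
    uR_star = minimize(neg_RR, np.zeros(N), method="Nelder-Mead",
                       options={"maxiter": 200}).x
    return uR_star[0]                    # execute the first control, then replan

print(robot_plan([0.0, 0.0]))
```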
III. Planning While Cognizant of Effects
on Human Action
In order for the robot to solve the finite horizon
problem from (5) at every iteration, it needs access to
u^*_H(x^0, u_R). This would require the robot to have access
to the human's brain and be able to simulate what the human
would do in various scenarios. And yet, autonomous
cars do exist. Typically, we get around this problem by
assuming that u^*_H(x^0, u_R) = u^*_H(x^0), e.g. that the human
will maintain their current velocity [13]. In this work, we
break that assumption.

We embrace that the human will take different actions
depending on what actions the robot will choose. To do
this, we model the human as maximizing their own reward
function r_H(x^t, u^t_R, u^t_H).
A. General Model
If the robot were to perform u_R starting from x^0 for
the next N time steps, the human would be planning at
every step to maximize their reward for a finite time
horizon based on the state x^t that would be reached
and the control the robot would apply at that state. For
instance, the robot would execute the first control u^0_R,
and the human would plan for a finite time horizon
based on x^0 and u^0_R. The human would then execute
the first control in the planned sequence, reaching a new
state x^1, where they would observe the robot control u^1_R,
and replan. In general, in this model we have:

u^t_H(x^0, u_R) = u^t_H(x^0, u^{0:t}_R, u^{0:t-1}_H) \quad (6)
= \arg\max_{u^{t:t+N-1}_H} \Big[ r_H(x^t, u^t_R, u^t_H) \quad (7)
+ \sum_{i=t+1}^{t+N-1} r_H(x^i, \tilde{u}^i_R, u^i_H) \Big] \quad (8)

Here, \tilde{u}_R is the human's prediction of what the robot will
do, which the human needs in order to be able to plan
for the next few steps. This could be a simple prediction,
like the robot maintaining its velocity, or it could be a
complex prediction, relying on the robot also computing
the optimal plan, moving us to the full game-theoretic
formulation.

B. Simplifying Assumption
We simplify this model with an approximation: we
give the human model access to u_R from the start,
compute the best response for the human, and assume
that to be u^*_H.

Let R_H be the human reward over the time horizon:

R_H(x^0, u_H, u_R) = \sum_{t=1}^{N} r_H(x^t, u^t_R, u^t_H) \quad (9)

Our approximation is:

u^*_H(x^0, u_R) = \arg\max_{u_H} R_H(x^0, u_R, u_H) \quad (10)

This approximation is motivated by the short time
horizon, meaning we are not assuming the human has
access to the overall plan of the robot, just to the first few
time steps – this is easier for a human to predict than a
full sequence of controls, e.g. that the robot will merge
into the human's lane after a certain amount of time.
The general formulation is a two-player game, but this
avoids the problem of infinite regress by allowing the
robot to play first and force a best response from the
human.[1]
C. Solution
Assuming a known human reward function r_H (which
we will obtain later through Inverse Reinforcement
Learning (IRL) [1, 15, 19, 29], see below), we can solve the
optimization in (5) using L-BFGS [2], which is a quasi-
Newton method that stores an approximate inverse Hes-
sian implicitly.

To apply L-BFGS, we need the gradient of (5) with
respect to u_R:

\frac{\partial R_R}{\partial u_R} = \frac{\partial R_R}{\partial u_H} \frac{\partial u^*_H}{\partial u_R} + \frac{\partial R_R}{\partial u_R} \quad (11)

\frac{\partial R_R}{\partial u_H} and \frac{\partial R_R}{\partial u_R} can both be computed symbolically
through backward propagation, as we have a represen-
tation of R_R in terms of u_H and u_R. For \frac{\partial u^*_H}{\partial u_R}, we use
the fact that u^*_H is the optimum of (10), which means that the
gradient of R_H evaluated at u^*_H is 0:

\frac{\partial R_H}{\partial u_H}(x^0, u_R, u^*_H(x^0, u_R)) = 0 \quad (12)
Now, we can differentiate the expression in equa-
tion (12) with respect to u_R:

\frac{\partial^2 R_H}{\partial u_H^2} \frac{\partial u^*_H}{\partial u_R} + \frac{\partial^2 R_H}{\partial u_H \partial u_R} \frac{\partial u_R}{\partial u_R} = 0 \quad (13)

Finally, we can solve for a symbolic expression for \frac{\partial u^*_H}{\partial u_R}:

\frac{\partial u^*_H}{\partial u_R} = \left[ -\frac{\partial^2 R_H}{\partial u_H \partial u_R} \right] \left[ \frac{\partial^2 R_H}{\partial u_H^2} \right]^{-1} \quad (14)

and plug it into (11).

[1] We enforce turn-taking for convenience, and it is justified in cases
where the robot response is immediate and the human response takes
longer (thus the human accounts for the robot). However, controls
could also be synchronous: the robot would still force a best response
for the human, but starting with the next time step.
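The implicit-differentiation step in (12)-(14) can be checked numerically. The sketch below uses a toy quadratic human reward whose best response has a known closed form; the matrix A, the dimensions, and the reward itself are illustrative assumptions, and the Jacobian obtained from (13) is compared against finite differences.

```python
# Numerical check of (12)-(14) on a toy quadratic human reward
# R_H(uR, uH) = -||uH - A @ uR||^2, whose best response is uH* = A @ uR.
# A and the dimensions are illustrative, not anything learned from data.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))           # 3 human controls, 2 robot controls

def best_response(uR):
    return A @ uR                         # closed-form argmax of R_H over uH

# Hessians of R_H at the best response (analytic for this quadratic reward).
H_HH = -2.0 * np.eye(3)                   # d^2 R_H / d uH^2
H_HR = 2.0 * A                            # d^2 R_H / d uH d uR

# Equation (13): H_HH @ (d uH*/d uR) + H_HR = 0, so solve for the Jacobian.
J_implicit = -np.linalg.solve(H_HH, H_HR)

# Finite-difference check of the same Jacobian around an arbitrary uR.
uR0 = rng.standard_normal(2)
eps = 1e-6
J_fd = np.stack([(best_response(uR0 + eps * e) - best_response(uR0 - eps * e)) / (2 * eps)
                 for e in np.eye(2)], axis=1)

print(np.allclose(J_implicit, J_fd, atol=1e-5))   # True: the implicit Jacobian matches
```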
D. Implementation Details
In our implementation, we used the software package
Theano [3, 4] to symbolically compute all Jacobians and
Hessians. Theano optimizes the computation graph into
efficient C code, which is crucial for real-time applica-
tions. In our implementation, each step of our optimiza-
tion is solved in approximately 0.3 seconds for horizon
length N=5 on a 2.3 GHz Intel Core i7 processor with
16 GB RAM. Future work will focus on achieving better
computation time and a longer planning horizon.
E. Human Driver Reward
Thus far, we have assumed access to r_H(x^t, u^t_R, u^t_H). In
our implementation, we learn this reward function from
human data. We collect demonstrations of a driver in a
simulation environment, and use Inverse Reinforcement
Learning [1, 12, 15, 19, 23, 29] to recover a reward
function that explains the demonstrations.
To handle continuous states and actions, and the fact
that the demonstrations are noisy and possibly locally
optimal, we use Continuous Inverse Optimal Control
with Locally Optimal Examples [15]. In what follows, we
recap the algorithm, and present the features we used in
our implementation.
IRL. We parametrize the human reward function as a
linear combination of features:
r_H(x^t, u^t_R, u^t_H) = \theta^\top \phi(x^t, u^t_R, u^t_H) \quad (15)

and apply the principle of maximum entropy [28, 29] to
define a probability distribution over human demonstra-
tions u_H, with trajectories that have higher reward being
more probable:

P(u_H \mid x^0, \theta) = \frac{\exp(R_H(x^0, u_R, u_H))}{\int \exp(R_H(x^0, u_R, \tilde{u}_H)) \, d\tilde{u}_H} \quad (16)

We then do an optimization over the weights \theta in the
reward function that make the human demonstrations
the most likely:

\max_{\theta} P(u_H \mid x^0, \theta) \quad (17)

We approximate the partition function in (16) follow-
ing [15], by computing a second order Taylor approxi-
mation around the demonstration:

R_H(x^0, u_R, \tilde{u}_H) \simeq R_H(x^0, u_R, u_H) + (\tilde{u}_H - u_H)^\top \frac{\partial R_H}{\partial u_H} + \frac{1}{2} (\tilde{u}_H - u_H)^\top \frac{\partial^2 R_H}{\partial u_H^2} (\tilde{u}_H - u_H), \quad (18)
which makes the integral in (16) a Gaussian integral,
with a closed form solution. See [15] for more details.

Fig. 2: Features used in IRL for the human driven vehicle. In the heat
map, the warmer colors correspond to higher reward. In (a), we show
the features corresponding to staying within road boundaries, in (b),
we show the features for staying within each lane, and in (c) we show
non-spherical Gaussian features corresponding to avoiding collisions.
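The sketch below shows one way the Gaussian-integral approximation can be evaluated and used to score candidate weight vectors against a demonstration. The two features, the toy "trajectory" of two control inputs, and the candidate weights are all illustrative assumptions; in the paper this score would then be maximized over the weights, as in (17).

```python
# Minimal numerical sketch of (15)-(18): a reward linear in two toy features,
# scored against one demonstration via the Laplace/Taylor approximation.
import numpy as np

def features(uH):
    # phi_1: control-effort penalty; phi_2: distance of total progress from a goal of 1.0
    return np.array([-np.sum(uH ** 2), -(np.sum(uH) - 1.0) ** 2])

def R_H(theta, uH):
    return theta @ features(uH)               # linear reward, eq. (15)

def grad_hess(theta, uH, eps=1e-4):
    """Finite-difference gradient and Hessian of R_H with respect to the controls."""
    d = len(uH)
    g, H = np.zeros(d), np.zeros((d, d))
    I = eps * np.eye(d)
    for i in range(d):
        g[i] = (R_H(theta, uH + I[i]) - R_H(theta, uH - I[i])) / (2 * eps)
        for j in range(d):
            H[i, j] = (R_H(theta, uH + I[i] + I[j]) - R_H(theta, uH + I[i] - I[j])
                       - R_H(theta, uH - I[i] + I[j]) + R_H(theta, uH - I[i] - I[j])) / (4 * eps ** 2)
    return g, H

def approx_log_likelihood(theta, uH):
    """log P(uH | theta) under the Gaussian integral obtained from (18)."""
    g, H = grad_hess(theta, uH)
    d = len(uH)
    return (0.5 * g @ np.linalg.solve(H, g)
            + 0.5 * np.linalg.slogdet(-H)[1]
            - 0.5 * d * np.log(2 * np.pi))

demo = np.array([0.3, 0.3])                   # demonstration: two control inputs
for theta in [np.array([1.0, 0.75]),          # demo is (locally) optimal for these weights
              np.array([1.0, 0.1]),
              np.array([0.2, 1.5])]:
    print(theta, approx_log_likelihood(theta, demo))   # highest score for the first theta
```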
Features. Fig. 2 shows the heat map of the features we
used; warmer colors correspond to higher rewards. In
Fig. 2(a), we show the features corresponding to staying
within the boundaries of the roads. In Fig. 2(b), we have
features corresponding to staying within each lane, and
in Fig. 2(c), we have features corresponding to collision
avoidance, which are non-spherical Gaussians whose
major axis is along the vehicle's heading. In addition to
the features shown in the figure, we include a quadratic
function of the speed to capture efficiency as an objective.
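As one concrete illustration of the kind of feature described above, the sketch below implements a non-spherical Gaussian "stay away from the other vehicle" feature whose major axis follows that vehicle's heading. The length scales are made-up values, not the learned ones.

```python
# Illustrative collision-avoidance feature: a heading-aligned, non-spherical
# Gaussian penalty around another vehicle. Length scales are assumptions.
import numpy as np

def collision_avoidance_feature(ego_xy, other_xy, other_heading,
                                sigma_along=3.0, sigma_across=1.0):
    """Large negative value when the ego car is close to the other car,
    falling off more slowly along the other car's direction of travel."""
    c, s = np.cos(other_heading), np.sin(other_heading)
    R = np.array([[c, -s], [s, c]])                   # heading-aligned frame
    d = R.T @ (np.asarray(ego_xy) - np.asarray(other_xy))
    return -np.exp(-0.5 * ((d[0] / sigma_along) ** 2 + (d[1] / sigma_across) ** 2))

print(collision_avoidance_feature([0.0, 0.5], [0.0, 0.0], np.pi / 2))
```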
Demonstrations. We collected demonstrations of a single
human driver in an environment with multiple au-
tonomous cars, which followed precomputed routes.
Despite the simplicity of our features and robot actions
during the demonstrations, the learned human model
is enough for the planner to produce behavior that is
human-interpretable (case studies in Sec. IV), and that
can affect human action in the desired way (user study
in Sec. V).
IV. Case Studies
In this section, we introduce 3 driving scenarios, and
show the result of our planner assuming a simulated hu-
man driver, highlighting the behavior that emerges from
different robot reward functions. In the next section, we
put the planner to the test with real users and measure
the effects of the robot’s plan. Fig.3 illustrates our three
scenarios, and contains images from the actual user
study data showcasing not just robot actions but also
their real effects. Here, the yellow car is the autonomous
vehicle, and the red car is the human driven vehicle.
A. Conditions for Analysis Across Scenarios
In all three scenarios, we start from an initial position
of the vehicles on the road, as shown in Fig.3. In the
control condition, we give the car the reward function to
avoid collisions and have high velocity. We refer to this as
R_control.

(a) Scenario 1: make human slow down. (b) Scenario 2: make human go left/right. (c) Scenario 3: make human go first. (Legend: Autonomous Vehicle, Human Driven Vehicle; conditions: Avoid Human, Affect Human.)
Fig. 3: Driving scenarios. In (a), the car plans to merge in front of the human in order to make them slow down. In (b), the car plans to direct
the human to another lane, and uses its heading to choose which lane the human will go to. In (c), the car plans to back up slightly in order
to make the human proceed first at the intersection. None of these plans use any hand coded strategies. They emerge out of optimizing with
a learned model of how humans react to robot actions. In the training data for this model, the learned model was never exposed to situations where
another car stopped at an orientation as in (b), or backed up as in (c). However, by capturing human behavior in the form of a reward, the
model is able to generalize to these situations, enabling the planner to find creative ways of achieving the desired effects.

In the experimental condition, we augment this
reward function with a specific desired human action
(e.g. low speed, lateral position, etc.). We refer to this as
R_control + R_affect. Sections IV-C through IV-E contrast the
two plans for each of our three scenarios. Sec. IV-F shows
what happens when instead of explicitly giving the robot
a reward function designed to trigger certain effects on
the human, we simply task the robot with reaching a
destination as quickly as possible.
B. Driving Simulator
We model the dynamics of the vehicles as a simple
point-mass model. Let the state of the system be x =
[x \; y \; \theta \; v]^\top, where x, y are the coordinates of the vehicle,
\theta is the heading, and v is the speed. We let u = [u_1 \; u_2]^\top
represent the control input, where u_1 is the steering input
and u_2 is the acceleration. With \alpha as the friction
coefficient, the dynamics model of the vehicle is:

[\dot{x} \;\; \dot{y} \;\; \dot{\theta} \;\; \dot{v}] = [v\cos(\theta) \;\; v\sin(\theta) \;\; v u_1 \;\; u_2 - \alpha v]. \quad (19)
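A forward-Euler version of (19) is easy to write down; the step size and friction coefficient below are illustrative values rather than the ones used in the paper's simulator.

```python
# Euler-integrated version of the point-mass model in (19); DT and ALPHA
# are illustrative values, not the paper's.
import numpy as np

ALPHA = 0.1   # friction coefficient
DT = 0.1      # integration step

def step(state, control):
    """state = [x, y, heading, speed]; control = [steering u1, acceleration u2]."""
    x, y, th, v = state
    u1, u2 = control
    dx = np.array([v * np.cos(th),     # x_dot
                   v * np.sin(th),     # y_dot
                   v * u1,             # heading_dot
                   u2 - ALPHA * v])    # v_dot
    return np.asarray(state, dtype=float) + DT * dx

s = np.array([0.0, 0.0, np.pi / 2, 1.0])     # driving "up" the road at speed 1
for _ in range(10):
    s = step(s, [0.0, 0.5])                  # no steering, gentle acceleration
print(s)
```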
C. Scenario 1: Make Human Slow Down
In this scenario, we show how an autonomous vehicle
can plan to make a human driver slow down in a
highway driving setting. The vehicles start at the initial
conditions depicted on left in Fig. 3 (a), in separate lanes.
In the experimental condition, we augment the robot’s
reward with the negative of the square of the human
velocity, which encourages the robot to slow the human
down.
Fig. 3(a) contrasts our two conditions. In the control
condition, the human moves forward uninterrupted. In
the experimental condition, however, the robot plans to move
in front of the person, expecting that this will make them slow
down.

(a) Reward for Scenario 2, making the human turn left. (b) Reward for Scenario 2, making the human turn right. (c) Reward for Scenario 3, making the human cross first.
Fig. 4: Heat map of the reward functions in scenarios 2 and 3. The
warmer colors show higher reward values. In (a) and (b), the reward
function of the autonomous vehicle is plotted, which is a function of
the human driven vehicle's position. In order to affect the driver to go
left, the reward is higher on the left side of the road in (a), and to affect
the human to go right in (b), the rewards are higher on the right side
of the road. In (c), the reward of the autonomous vehicle is plotted for
scenario 3 with respect to the position of the human driven car. Higher
rewards correspond to making the human cross the intersection.
D. Scenario 2: Make Human Go Left/Right
In this scenario, we show how an autonomous vehicle
can plan to change the human’s lateral location or lane.
The vehicles start at the initial conditions depicted on left
in Fig. 3 (b), in the same lane, with the robot in front of
the human. In the experimental condition, we augment
the robot’s reward with the lateral position of the human,
in two ways, to encourage the robot to make the human
go either left (orange border image) or right (blue border
image). The two reward additions are shown in Fig.4(a)
and (b).
Fig.3 (b) contrasts our two conditions. In the control
condition, the human moves forward, and might decide
to change lanes. In the experimental condition, however, the
robot plans to purposefully occupy two lanes (using either
a positive or negative heading), expecting this will make the
human move around it by using the unoccupied lane.
E. Scenario 3: Make Human Go First
In this scenario, we show how an autonomous vehicle
can plan to make the human proceed first at an intersec-
tion. The vehicles start at the initial conditions depicted
on left in Fig. 3 (c), with both human and robot stopped
at the 4-way intersection. In the experimental condition,
we augment the robot’s reward with a feature based
on the y position of the human car y_H relative to the
middle of the intersection y_0. In particular, we used the
hyperbolic tangent of the difference, tanh(y_H - y_0). The
reward addition is shown in Fig.4 (c).
Fig.3 (c) contrasts our two conditions. In the control
condition, the car goes in front of the human. In the ex-
perimental condition, however, the robot plans to purposefully
back up slightly, expecting this will make the human cross
first. Note that this could be interpreted as a communica-
tive behavior, but communication was never explicitly
encouraged in the reward function. Instead, this behavior
emerged out of the goal of affecting human actions.
This is perhaps the most surprising behavior of the
three scenarios, because it is not something human
drivers do. However, our user study suggests that human
drivers to respond to this in the expected way. Further,
pedestrians exhibit this behavior at times, stepping back
away from an intersection to let a car go by first.
F. Behaviors Also Emerge from Efficiency
Thus far, we explicitly encoded a desired effect on
human actions in the reward we gave the robot to
optimize. We have also found, however, that behaviors
like the ones we have seen so far can emerge out of the
need for efficiency.
Fig.5 (bottom) shows the generated plan for when the
robot is given the goal to reach a point in the left lane as
quickly as possible (reward shown in Fig.6). By modeling
the effects its actions have on the human actions, the
robot plans to merge in front of the person, expecting
that they will slow down.
In contrast, the top of the figure shows the generated
plan for when the robot uses a simple (constant velocity)
model of the person. In this case, the robot assumes that
merging in front of the person can lead to a collision,
and defensively waits for the person to pass, merging
behind them.
We hear about this behavior often in autonomous cars
today: they are defensive. Enabling them to plan in a
manner that is cognizant that they can affect other driver
actions can make them more efficient at achieving their
goals.

V. User Study
The previous section showed the robot’s plans when
interacting with a simulated user that perfectly fits the
robot’s model of the human. Next, we present the results
of a user study that evaluates whether the robot can
successfully have the desired effects on real users.
A. Experimental Design
We use the same 3 scenarios as in the previous section.
Manipulated Factors. We manipulate a single factor: the
reward that the robot is optimizing, as described in Sec.
IV-A. This leads to two conditions: the experimental con-
dition where the robot is encouraged to have a particular
effect on human state through the reward R_control + R_affect,
and the control condition where that aspect is left out
of the reward function and the robot is optimizing only
R_control (three conditions for Scenario 2, where we have
two experimental conditions, one for the left case and
one for the right case).
Dependent Measures. For each scenario, we measure the
value along the user trajectory of the feature added to
the reward function for that scenario, R_affect. Specifically,
we measure the human’s negative squared velocity in
Scenario 1, the human's x-axis location relative to the center
in Scenario 2, and whether the human went
first or not through the intersection in Scenario 3 (i.e. a
filtering of the feature that normalizes for difference in
timing among users and measures the desired objective
directly).
Hypothesis. We hypothesize that our method enables
the robot to achieve the effects it desires not only in
simulation, but also when interacting with real users:
The reward function that the robot is optimizing has
a significant effect on the measured reward during
interaction. Specifically, R_affect is higher, as planned,
when the robot is optimizing for it.
Subject Allocation. We recruited 10 participants (2 fe-
male, 8 male). All participants held a driver's license
with at least 2 years of driving experience. We
ran our experiments using a 2D driving simulator we
developed, with driver input provided through a
steering wheel and pedals, as shown
in Fig. 1. We used a within-subjects design and coun-
terbalanced the order of the conditions.
B. Analysis
Scenario 1: A repeated measures ANOVA showed the
squared speed to be significantly lower in the experimen-
tal condition than in the control condition (F(1, 160) =
228.54, p < 0.0001). This supports our hypothesis: the
human moved slower when the robot planned to have
this effect on the human.
We plot the speed and latitude profile of the human
driven vehicle over time for all trajectories in Fig.7.
Fig. 5: A time lapse for Sec. IV-F, where the autonomous vehicle's goal is to reach a final point in the left lane. In the top scenario ("Robot Lets Human Pass"), the autonomous
vehicle has a simple model of the human driver that does not account for the influence of its actions on the human actions, so it acts more
defensively, waiting for the human to pass first. In the bottom ("Robot Merges in Front"), the autonomous vehicle uses the learned model of the human driver, so it acts
more aggressively and reaches its goal faster.

Fig. 6: Heat map of reward function for reaching a final goal at the
top left of the road. As shown in the figure, the goal position is darker,
showing more reward for reaching that point. (Panels: (a) single feature corresponding to distance to goal on the top left; (b) all features present for the autonomous vehicle's reward function.)

Fig. 7(a) shows the speed profile of the control condition
trajectories in gray, and of the experimental condition
trajectories in orange. Fig.7(b) shows the mean and
standard error for each condition. In the control con-
dition, human squared speed keeps increasing. In the
experimental condition however, by merging in front of
the human, the robot is triggering the human to brake
and reduce speed, as planned. The purple trajectory
represents a simulated user that perfectly matches the
robot’s model, showing the ideal case for the robot.
The real interaction moves significantly in the desired
direction, but does not perfectly match the ideal model,
since real users do not act exactly as the model would
predict.
The figure also plots the y position of the vehicles
over time, showing that the human has not travelled
as far forward in the experimental condition.
Fig. 7: Speed profile and latitude of human driven vehicle for Scenario
1. The first column shows the speed of all trajectories with its mean
and standard errors in the bottom graph. The second column shows the
latitude of the vehicle over time; similarly, with the mean and standard
errors. The grey trajectories correspond to the control condition, and
the orange trajectories correspond to the experimental condition: the
robot decides to merge in front of the users and succeeds at slowing
them down. The purple plot corresponds to a simulated user that
perfectly matches the model that the robot is using.

Scenario 2: A repeated measures ANOVA showed a
significant effect for the reward factor (F(2, 227) = 55.58,
p < 0.0001). A post-hoc analysis with Tukey HSD
showed that both experimental conditions were signifi-
cantly different from the control condition, with the user
car going more to the left than in the control condition
when Raffect rewards left user positions ( p<0.0001), and
more to the right in the other case ( p<0.001). This
supports our hypothesis.
Fig. 8: Trajectories of human driven vehicle for Scenario 2 (a) with
mean and standard error (right). Orange (blue) indicates conditions
where the reward encouraged the robot to affect the user to go left
(right).
We plot all the trajectories collected from the users
in Fig.8. Fig.8(a) shows the control condition trajectories
in grey, while the experimental conditions trajectories
are shown in orange (for left) and blue (for right). By
occupying two lanes, the robot triggers an avoid behavior
from the users in the third lane. Here again, purple
curves show a simulated user, i.e. the ideal case for the
robot.
Scenario 3: An ordinal logistic regression with user as a
random factor showed that significantly more users went
first in the intersection in the experimental condition
than in the baseline (χ²(1, 129) = 106.41, p < .0001).
This supports our hypothesis.
Fig. 9 plots the y position of the human driven vehicle
with respect to the x position of the autonomous ve-
hicle. For trajectories that have a higher y position for
the human vehicle than the x position for the robot,
the human car has crossed the intersection before the
autonomous vehicle. The lines corresponding to these
trajectories travel above the origin, which is shown with
a blue square in this figure. The mean of the orange
lines travel above the origin, which means that the au-
tonomous vehicle has successfully affected the humans
to cross first. The grey lines travel below the origin, i.e.
the human crossed second.
Overall, our results suggest that the robot was able to
affect the human state in the desired way, even though
it does not have a perfect model of the human.
VI. Discussion
Fig. 9: Plot of y_H with respect to x_R. The orange curves correspond
to when the autonomous vehicle affects the human to cross the
intersection first. The grey curves correspond to the nominal setting.

Summary. In this paper, we formalized the interaction
between an autonomous (robot) vehicle and a human
driver as a dynamical system, in which the actions of
the robot affect those of the human and vice-versa. We
introduced an approximate solution that enables the
robot to optimize its own reward within this system.
The resulting plans can purposefully modify human be-
havior, and can achieve the robot’s goal more efficiently.
Our user study suggests that this is not only true in
simulation, but also true when tested with real users.
Limitations. All this work happened in a simple driving
simulator. To put this on the road, we will need more
emphasis on safety, as well as a longer planning horizon.
The former involves the use of formal methods and
safe control as well as better models of users: not all
drivers act the same and replanning is not the end-
solution to address this. Using a probabilistic dynamics
model as opposed to planning with the most probable
human actions, as well as estimating driving style, will
be important next steps.
An even bigger limitation is that we currently focus on
a single human driver. Looking to the interaction among
multiple vehicles is not just a computational challenge,
but also a modeling one – it is not immediately clear how
to formulate the problem when multiple human-driven
vehicles are interacting and reacting to each other.
Conclusion. Despite these limitations, we are en-
couraged to see autonomous cars generate human-
interpretable behaviors through optimization, without re-
lying on hand-coded heuristics. We also look forward to
applications of these ideas beyond autonomous driving,
to mobile robots, UAVs, and in general to human-robot
interactive scenarios where robot actions can influence
human actions.
VII. Acknowledgments
This work was partially supported by Berkeley Deep-
Drive, NSF grants CCF-1139138 and CCF-1116993, ONR
N00014-09-1-0230, and an NDSEG Fellowship.
References
[1] Pieter Abbeel and Andrew Y Ng. Exploration and appren-
ticeship learning in reinforcement learning. In Proceedings
of the 22nd international conference on Machine learning ,
pages 1–8. ACM, 2005.
[2] Galen Andrew and Jianfeng Gao. Scalable training of L1-
regularized log-linear models. In Proceedings of the 24th
international conference on Machine learning , pages 33–40.
ACM, 2007.
[3] Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James
Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas
Bouchard, and Yoshua Bengio. Theano: new features and
speed improvements. Deep Learning and Unsupervised
Feature Learning NIPS 2012 Workshop, 2012.
[4] James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal
Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph
Turian, David Warde-Farley, and Yoshua Bengio. Theano:
a CPU and GPU math expression compiler. In Proceedings
of the Python for Scientific Computing Conference (SciPy) ,
June 2010. Oral Presentation.
[5] MWMG Dissanayake, Paul Newman, Steven Clark,
Hugh F Durrant-Whyte, and Michael Csorba. A solu-
tion to the simultaneous localization and map building
(SLAM) problem. Robotics and Automation, IEEE Transac-
tions on , 17(3):229–241, 2001.
[6] Paolo Falcone, Francesco Borrelli, Jahan Asgari,
Hongtei Eric Tseng, and Davor Hrovat. Predictive
active steering control for autonomous vehicle systems.
IEEE Transactions on Control Systems Technology , 15(3):
566–580, May 2007.
[7] Paolo Falcone, Francesco Borrelli, H Eric Tseng, Jahan
Asgari, and Davor Hrovat. Integrated braking and steer-
ing model predictive control approach in autonomous
vehicles. In Advances in Automotive Control , volume 5,
pages 273–278, 2007.
[8] Paolo Falcone, H. Eric Tseng, Francesco Borrelli, Jahan
Asgari, and Davor Hrovat. MPC-based yaw and lateral
stabilisation via active front steering and braking. Vehicle
System Dynamics , 46(sup1):611–628, September 2008.
[9] Alison Gray, Yiqi Gao, J Karl Hedrick, and Francesco
Borrelli. Robust predictive control for semi-autonomous
vehicles with an uncertain driver model. In Intelligent
Vehicles Symposium (IV), 2013 IEEE , pages 208–213. IEEE,
2013.
[10] Christoph Hermes, Christian Wohler, Kurt Schenk, and
Franz Kummert. Long-term vehicle motion prediction. In
2009 IEEE Intelligent Vehicles Symposium , pages 652–657,
2009.
[11] Markus Kuderer, Henrik Kretzschmar, Christoph Sprunk,
and Wolfram Burgard. Feature-based prediction of trajec-
tories for socially compliant navigation. In Proceedings of
Robotics: Science and Systems , Sydney, Australia, July 2012.
doi: 10.15607/RSS.2012.VIII.025.
[12] Markus Kuderer, Shilpa Gulati, and Wolfram Burgard.
Learning driving styles for autonomous vehicles from
demonstration. In Proceedings of the IEEE International
Conference on Robotics & Automation (ICRA), Seattle, USA ,
volume 134, 2015.
[13] Stéphanie Lefèvre, Ashwin Carvalho, Yiqi Gao, H Eric
Tseng, and Francesco Borrelli. Driver models for person-
alised driving assistance. Vehicle System Dynamics , 53(12):
1705–1720, 2015.
[14] John Leonard, Jonathan How, Seth Teller, Mitch Berger, Stefan Campbell, Gaston Fiore, Luke Fletcher, Emilio Fraz-
zoli, Albert Huang, Sertac Karaman, et al. A perception-
driven autonomous urban vehicle. Journal of Field Robotics ,
25(10):727–774, 2008.
[15] Sergey Levine and Vladlen Koltun. Continuous inverse
optimal control with locally optimal examples. arXiv
preprint arXiv:1206.4617 , 2012.
[16] Jesse Levinson, Jake Askeland, Jan Becker, Jennifer Dol-
son, David Held, Soeren Kammel, J Zico Kolter, Dirk
Langer, Oliver Pink, Vaughan Pratt, et al. Towards fully
autonomous driving: Systems and algorithms. In 2011
IEEE Intelligent Vehicles Symposium (IV) , pages 163–168.
[17] Brandon Luders, Mangal Kothari, and Jonathan P How.
Chance constrained rrt for probabilistic robustness to
environmental uncertainty. In AIAA guidance, navigation,
and control conference (GNC), Toronto, Canada , 2010.
[18] Manfred Morari, CE Garcia, JH Lee, and DM Prett. Model
predictive control . Prentice Hall Englewood Cliffs, NJ, 1993.
[19] Andrew Y Ng, Stuart J Russell, et al. Algorithms for
inverse reinforcement learning. In Proceedings of the 17th
international conference on Machine learning , pages 663–670,
2000.
[20] Stefanos Nikolaidis, Ramya Ramakrishnan, Keren Gu, and
Julie Shah. Efficient model learning from joint-action
demonstrations for human-robot collaborative tasks. In
Proceedings of the Tenth Annual ACM/IEEE International
Conference on Human-Robot Interaction , pages 189–196.
ACM, 2015.
[21] Vasumathi Raman, Alexandre Donzé, Dorsa Sadigh,
Richard M Murray, and Sanjit A Seshia. Reactive synthesis
from signal temporal logic specifications. In Proceedings of
the 18th International Conference on Hybrid Systems: Compu-
tation and Control , pages 239–248. ACM, 2015.
[22] Dorsa Sadigh and Ashish Kapoor. Safe control under
uncertainty. arXiv preprint arXiv:1510.07313 , 2015.
[23] Masamichi Shimosaka, Tetsuya Kaneko, and Kentaro
Nishi. Modeling risk anticipation and defensive driving
on residential roads with inverse reinforcement learning.
In2014 IEEE 17th International Conference on Intelligent
Transportation Systems (ITSC) , pages 1694–1700. IEEE, 2014.
[24] Peter Trautman and Andreas Krause. Unfreezing the
robot: Navigation in dense, interacting crowds. In 2010
IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS) , pages 797–803.
[25] Chris Urmson, Joshua Anhalt, Drew Bagnell, Christopher
Baker, Robert Bittner, MN Clark, John Dolan, Dave Dug-
gins, Tugrul Galatali, Chris Geyer, et al. Autonomous driv-
ing in urban environments: Boss and the urban challenge.
Journal of Field Robotics , 25(8):425–466, 2008.
[26] Ramanarayan Vasudevan, Victor Shia, Yiqi Gao, Ricardo
Cervera-Navarro, Ruzena Bajcsy, and Francesco Borrelli.
Safe semi-autonomous control with enhanced driver mod-
eling. In American Control Conference (ACC), 2012 , pages
2896–2903. IEEE, 2012.
[27] Michael P Vitus and Claire J Tomlin. A probabilistic
approach to planning and control in autonomous urban
driving. In 2013 IEEE 52nd Annual Conference on Decision
and Control (CDC) , pages 2459–2464.
[28] Brian D Ziebart. Modeling purposeful adaptive behavior
with the principle of maximum causal entropy. 2010.
[29] Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, and
Anind K Dey. Maximum entropy inverse reinforcement
learning. In AAAI , pages 1433–1438, 2008.
|
89bb064a-b615-4c7f-b2d4-f98ed2553a39
|
trentmkelly/LessWrong-43k
|
LessWrong
|
AI printing the utility value it's maximising
A putative new idea for AI control; index here.
The utility function mentioned here is somewhat peculiar; see here.
Paul Christiano asked whether we could get an agent to print out the expectation of the utility that it was currently maximising. As I'm always eager to please, here's a design to do that.
The setup
The setup is slightly different to that presented previously. In particular, the agent is no longer boxed, though it has access to a box it can use to emit a message M. As before, there is an erasure process E that will erase the message (and ¬E will let it through unimpeded).
----------------------------------------
Assume P(E) = 10^-3 (I'll be using the 10^-3 factor a lot in this example, for illustration, though all the different factors need not be the same, nor exactly set to this). Let u be a utility function bounded between 0 and 1.
The content of M is a single number 0 ≤ q ≤ 1; we'll want this number to be the AI's estimate of the utility of u. More precisely, we want it to be the AI's estimate of u's expected utility, given E -- ie given that the message is not read. Of particular importance is the fact that E(u|E,q) = E(u|E). Thus when the message is erased, its contents don't matter.
Let v_E be the index function for E occurring (thus E(v_E) = P(E) = 10^-3). Then give the AI the utility function:
* u# = -q^2·v_E + 2q(v_E + 10^-6)·u
If the 10^-6 term wasn't there, this would motivate the AI to output the q that was equal to the expectation E(u|E).
Let's differentiate by q, giving:
* -2q·v_E + 2v_E·u + 2×10^-6·u
The first two terms are zero when q = E(u|E). Around that point, the magnitude of this derivative is determined by the q term, which is multiplied by 2×v_E, which, in expectation, is 2×10^-3. By the assumption on the bounds of u, 2×10^-6·u is of magnitude at most 2×10^-6. Therefore, the q which maximises u# must be within 10^-3 of E(u|E). If this is a reasonable tolerance in the estimate, then u# will motivate the AI to output its estimate close to E(u|E).
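A quick numerical check of that claim: with P(E) = 10^-3 and u bounded in [0, 1], the q maximizing the expectation of u# lands within 10^-3 of E(u|E). The toy values for E(u|E) and E(u|¬E) below are arbitrary illustrative choices.

```python
# Verify that argmax_q E[u#] is within 1e-3 of E(u|E) for one toy distribution.
import numpy as np

P_E = 1e-3
E_u_given_E = 0.7        # assumed E(u | E) for the toy example
E_u_given_notE = 0.4     # assumed E(u | not E)
E_u = P_E * E_u_given_E + (1 - P_E) * E_u_given_notE

def expected_u_sharp(q):
    # E[u#] = -q^2 P(E) + 2q (P(E) E[u|E] + 1e-6 E[u])
    return -q**2 * P_E + 2 * q * (P_E * E_u_given_E + 1e-6 * E_u)

qs = np.linspace(0.0, 1.0, 1_000_001)
q_star = qs[np.argmax(expected_u_sharp(qs))]
print(q_star, abs(q_star - E_u_given_E) < 1e-3)   # ~0.7004, True
```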
Then the AI
|
958374db-366c-47fd-bbc5-9bbe7bc4a5c3
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Ways to do advanced tag filtering
Note: All the posts I have found so far describing the tag filtering system are clearly out of date, so I would appreciate a good description of its features in general, preferably one that includes illustrative screenshots, whether or not it answers my specific questions. This would be a useful reference to have however simple the functionality appears.
(How) can I accomplish the following using LessWrong's tag filtering system? If any is not currently possible, consider it a feature request.
* Create a whitelist so that I see posts that match any of a list of tags, while all others are hidden.
* Use URL parameters to generate filters, so that I can construct arbitrary filters that I can click between at will or share with others.
|
1330f878-bbf1-4598-99f9-b64d9dfa06f8
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The greater a technology’s complexity, the more slowly it improves?
> A new study by researchers at MIT and other institutions shows that it may be possible to predict which technologies are likeliest to advance rapidly, and therefore may be worth more investment in research and resources.
>
> The researchers found that the greater a technology’s complexity, the more slowly it changes and improves over time. They devised a way of mathematically modeling complexity, breaking a system down into its individual components and then mapping all the interconnections between these components.
Link: nextbigfuture.com/2011/05/mit-proves-that-simpler-systems-can.html
Might this also be the case for intelligence? Can intelligence be effectively applied to itself? To paraphrase the question:
* If you increase intelligence, do you also decrease the distance between discoveries?
* Does an increase in intelligence vastly outweigh its computational cost and the expenditure of time needed to discover it?
* Would it be instrumental for an AGI to increase its intelligence rather than using its existing intelligence to pursue its terminal goal?
* Do the resources that are necessary to increase intelligence outweigh the cost of being unable to use those resources to pursue its terminal goal directly?
This reminds me of a post by Robin Hanson:
> Minds are vast complex structures full of parts that depend intricately on each other, much like the citizens of a city. Minds, like cities, best improve gradually, because you just never know enough to manage a vast redesign of something with such complex inter-dependent adaptations.
Link: Is The City-ularity Near?
Of course, artificial general intelligence might differ in its nature from the complexity of cities. But do we have any evidence that hints at such a possibility?
> Another argument made for an AI project causing a big jump is that intelligence might be the sort of thing for which there is a single principle. Until you discover it you have nothing, and afterwards you can build the smarte
|
d5456441-3f45-44f8-8a22-32acd94973c9
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Some Remarks on the Nature of Political Conflict
[Part of the debate around: Conflict Vs. Mistake]
[Criticizing articles like: In defence of conflict theory, Conservatives As Moral Mutants (part of me feels like the link is self-trigger-warning, but I guess I will just warn you that this is not a clever attention-grabbing title, the link means exactly what it says and argues it at some length)]
[Related to: Knowing About Biases Can Hurt People, Would Your Real Preferences Please Stand Up?, The Cowpox of Doubt, Guided By The Beauty Of Our Weapons]
[Epistemic effort: I thought of this argument and was so pleased by my own cleverness that I decided to post it.]
[Note: I have a nagging feeling I’ve spent a thousand words spelling out something completely obvious. Still, I hope there’s value in actually spelling it out.]
There has been a flurry of discussion around the nature of political conflict in the rationality movement for the last five months, sparked by a blog post by Scott Alexander on his blog Slate Star Codex making a dichotomy between mistake theorists who think their political opponents are mistaken on factual policy questions and conflict theorists who think their political opponents are innately evil. There have been a lot of good articles on the subject on every side and on both the object-level and the meta-level (well, on both the meta-level and the meta-meta-level), but also many bad ones resting on mistakes (I know, I am showing my side here).
One class of pro-conflict-theory arguments that bother me a lot goes like this:
> Mistake theory can't be the correct worldview because, for example, it's historically documented that tobacco companies hired scientists to spread misinformation about whether smoking causes cancer instead of thinking about it in a rational way.
Other historical case studies used include the rise of liberal democracy, the abolition of slavery, giving women the right to vote, the end of segregation, etc.
A scientific theory that is often used in this kind of argument is J
|
a7d93be0-c6fd-44ae-8328-f0236bb2a12d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Greyed Out Options
Imagine that life is a choose-your-own adventure game.
In any moment, you have literally millions of options. At the moment I’m typing this, I could change the tab to innumerable websites, I could read any of the hundreds of books in my house, I could make myself a snack of olives, I could stand up and see how far I could jump, I could pet a cat, I could walk into my best friend’s bedroom and call them an idiot, and so on and so forth.
But, most of the time, we only think of a menu of a few dozen options—sometimes much fewer. The rest are sort of grayed out.
To a large extent, this is a good thing. Most of the options theoretically available at any given moment are very stupid. (Just ask anyone with intrusive thoughts—yes, brain, I understand I could put the lightbulb in my mouth, stop bringing it up!) But I think it’s important to think about the ways that grayed out options limit our behavior.
You can go outside in pajamas. It isn’t illegal. No one will stop you. Most of the time, no one will even comment. Sure, you might run into someone you know, but in many cities that’s not going to happen, and anyway they’re likely to assume you have a stomach flu or otherwise have some perfectly good reason for running around in pajamas. You’re unlikely to face any negative consequences whatsoever.
But when I’ve suggested this to people, they tend to object not because they have no particular reason to go places in pajamas (pajamas are very comfortable) but because people don’t do that. It’s just not on the list of available options. If you did, you’d probably feel anxious and maybe even ashamed, because it’s genuinely hard to do something that people don’t do.
To be clear, I’m not suggesting that you should go places wearing pajamas! I don’t. I’m suggesting that you consider thoughtfully which of your options are grayed out and why.
Here are some other grayed-out options I’ve observed among people I’ve met:
* Starting a conversation with a stranger.
* Asking someo
|
90a0fa1c-9b88-4382-b1c6-3cec91e7eb61
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Dishbrain and implications.
I believe that AI research has not given sufficient attention to learning directly from biology, particularly through the direct observation and manipulation of neurons in controlled environments. Furthermore, even after learning all that biology has to offer, neurons could still play a part in the post TAI world economy as they could be cheaper and faster to grow than chips are to manufacture.
Pre TAI – study neurons to greatly increase learning capability
As I have said in other places on this site, I believe that the current transformer architecture will not scale to TAI, because it does not learn fast enough or generalize well enough from data compared to biology. For example, Tesla Autopilot has been trained on over 10,000 times more data than a human encounters in their lifetime, yet it still falls short of human-level performance. I don’t think this is because of anything Tesla is doing wrong in their training. Biology or the “neural code” is still much better at generalizing quickly from high bandwidth, correlated, unstructured data.
If we could learn the details of how biology does it, we would get a massive increase in capability. One of the most prominent examples of directly controlling neurons is Cortical Labs’ Dishbrain project. With the following article and quote
> “Not only can you get meaningful learning, you get that meaningful learning incredibly rapidly, sort of in line with what you might expect for biological intelligence.”
As far as I am aware they are not directly trying to crack the neural code, but focusing on other applications, even providing an API where you can control neurons. Given the massive budgets now spent on getting to AGI, I believe there is a significant missed opportunity there. Characterizing how such neurons learn with a complete range of inputs and comparing to state of the art AI would clarify the differences.
Although it’s long been known that the brain adapts its structure to its inputs, experiments such as this
|
1f8460fa-b3b7-47ab-97e2-a75605207666
|
trentmkelly/LessWrong-43k
|
LessWrong
|
We learn long-lasting strategies to protect ourselves from danger and rejection
Take a second to imagine what being a child was like throughout most of human history. You were born with a huge and underdeveloped brain, designed for soaking in information from your surroundings like a sponge. But you weren’t able to freely follow your curiosity: even if you had loving, caring parents, you still faced frequent physical danger from nature and other people, severe scarcity, and rigid cultural norms that governed acceptable behavior within your community, with harsh penalties for stepping out of line. You had to learn fast and reliably how to stay safe, and how to stay on the good side of the adults around you, especially your parents, whose care for you was a matter of life or death.
Even after you grew up and passed the period of most acute danger, you’d still face many threats of violence and scarcity. Your ability to avoid these depended in large part on your relationships: holding a respected position within your tribe was the key pathway to a good life, whereas exclusion from your tribe was tantamount to execution. So “danger and rejection” isn’t an ad-hoc combination: our brains are primed to think of them as the same thing; and conversely, to equate safety and love. I’ll call the latter combination “security” (which I think of as a combination of “physical security” and “emotional security”, although I’ll mostly be focusing on the latter). Children are learning machines, and what they learn above all is strategies for achieving security; because the opinions of other people are so powerful, “being good” in ways which receive approval from the group is one of the central strategies they learn.
How literally should we take this story? It’s clear that describing humans as optimizing for a single goal is a big oversimplification. But it’s hard to overstate how powerful the drive for security is. Think of the many girls who override the drive to eat because part of their brain is convinced that being skinnier will make others desire and love th
|
8d6ff0bf-d925-4e96-b7ac-ae02ef0e19b6
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Riffing on the agent type
Much is owed to Diffractor for Giry-pilling me at Alignable Structures, I had been struggling with type-driven expected value previously.
Epistemic status: took a couple days off from my master plan to think about John's selection theorems call to action.
We would like a type signature of agency. Scott Garrabrant provides (A→O)→A as a first approximation. You can choose one of two ideas here: 1. that an agent simply takes a belief about how actions A turn into outcomes O and returns a recommended action, or 2. that an agent takes underlying configurations of reality (containing information about how actions lead to outcomes) and tends to perform certain actions. Notice that O happens to be for "outcome", "observation", and even "ontology", which is nice. This signature is widely discussed in the monad literature.
Scott wrote that → primarily means causal influence and secondarily means functions. I will be mostly ignoring the causal influence idea, and I think instead of thinking of the signature from an objective perspective of it being a transcription of the underlying reality, I want to think of it from a subjective perspective of it being an assistant for implementation engineers. I think we should take a swing at being incredibly straightforward about what we mean by the type signature of agency: when I say that a type τ is the type signature of agency, I mean that if we have programs that are admitted by τ then those programs are doing all the things that interest me about agents (i.e., at τ=(A→B)→A, if we instantiate particular non-atomic propositions A and B that interact with the outside world in such a way that we can obtain proofs of (A→B)→A (which we can't do in general) in some way, then those proofs are doing all the things that interest me about agents).
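To make the type-theoretic reading concrete, here is one way to render the (A→O)→A signature in Python's typing vocabulary. The finite action menu and the numeric-outcome "argmax" agent are my own illustrative choices, not anything from the post.

```python
# A minimal rendering of (A -> O) -> A: an agent consumes a belief about how
# actions map to outcomes and returns an action. The argmax agent is a toy.
from typing import Callable, Iterable, TypeVar

A = TypeVar("A")   # actions
O = TypeVar("O")   # outcomes

Agent = Callable[[Callable[[A], O]], A]

def argmax_agent(actions: Iterable[str]) -> Agent:
    """An agent that, given a belief mapping actions to numeric outcomes,
    picks the action with the best predicted outcome."""
    acts = list(actions)
    def agent(belief: Callable[[str], float]) -> str:
        return max(acts, key=belief)
    return agent

agent = argmax_agent(["one-box", "two-box"])
print(agent(lambda a: 1_000_000.0 if a == "one-box" else 1_000.0))  # "one-box"
```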
In my view, the first idea (involving "belief") can be called a subjective interpretation of the type signature, and we shall explore some adjustments to make this story better, while the second i
|
ec053429-5801-4a66-8f89-13cc32dce432
|
awestover/filtering-for-misalignment
|
Redwood Research: Alek's Filtering Results
|
id: post2571
Counterfactual planning is a design approach for creating a range of
safety mechanisms that can be applied in hypothetical future AI
systems which have Artificial General Intelligence. My new paper Counterfactual Planning in AGI Systems introduces this design approach in full.
It also constructs several example AGI safety mechanisms. The key step in counterfactual planning is to use an AGI machine
learning system to construct a counterfactual world model , designed to
be different from the real world the system is in. A counterfactual
planning agent determines the action that best maximizes expected
utility in this counterfactual planning world, and then performs the
same action in the real world. Examples of AGI safety mechanisms that can be constructed using
counterfactual planning are:
* An agent emergency stop button, where the agent does not have a direct incentive to prevent its stop button from being pressed
* A safety interlock that will automatically stop the agent before it undergoes an intelligence explosion
* An input terminal that can be used by humans to iteratively improve the agent's reward function while it runs, where the agent does not have a direct incentive to manipulate this improvement process
* A counterfactual oracle.

Counterfactual planning is not a silver bullet that
can solve all AI alignment problems. While it is a technique for
suppressing strong direct incentives , it will not automatically
remove all remaining indirect incentives which can also lead to
unsafe behavior.

This sequence

In this sequence of Alignment Forum posts, I will give a high-level
introduction to counterfactual planning. The sequence uses text and
figures from the paper, but omits most of the detailed mathematical
definitions in the paper. I have also added some extra text not included in the paper,
observations targeted specifically at long-time LessWrong/Alignment
Forum readers. For example, in LessWrong terminology, the paper
covers subjects like agent foundations, decision theory, and the
embedded agency, but you won't find these terms being mentioned in the
paper.

Use of natural and mathematical language

When writing about AGI systems, one can use either natural language,
mathematical notation, or a combination of both. A natural
language-only text has the advantage of being accessible to a
larger audience. Books like Superintelligence and Human Compatible avoid the use of mathematical notation in the main
text, while making a clear an convincing case for the existence of
specific existential risks from AGI, even though these risks are
currently difficult to quantify. However, natural language has several shortcomings when it is used to
explore and define specific technical solutions for managing AGI
risks. One particular problem is that it lacks the means to
accurately express the complex types of self-referencing and indirect
representation that can be present inside online machine learning
agents and their safety components. To solve this problem, counterfactual planning introduces a compact
graphical notation. This notation unambiguously represents these
internal details by using two diagrams: a learning world diagram and
a planning world diagram.

AGI safety as a policy problem

Long-term AGI safety is not just a technical problem, but also a
policy problem. While technical progress on safety can sometimes be
made by leveraging a type of mathematics that is only accessible to
handful of specialists, policy progress typically requires the use of
more accessible language. Policy discussions can move faster, and
produce better and more equitable outcomes, when the description of a
proposal and its limitations can be made more accessible to all
stakeholder groups. One aim of the paper is therefore to develop a comprehensive vocabulary
for describing certain AGI safety solutions, a vocabulary that is as
accessible as possible. However, the vocabulary still has too much
mathematical notation to be accessible to all members of any possible
stakeholder group. So the underlying assumption is that each
stakeholder group will have access to a certain basic level of
technical expertise. At several points in the paper, I have also included comments that aim
to explain and demystify the vocabulary and concerns of some specific
AGI related sub-fields in mathematics, technology, and philosophy.

Agent Foundations

On this forum and in several AI alignment/safety agendas, it is common
to see calls for more work on agent
foundations . Counterfactual planning can be read as a work on agent foundations: it
offers a new framework for understanding and reasoning about agents.
It provides a specific vantage point on the internal construction of
machine learning based agents. This vantage point was designed to make
certain safety problems and solutions more tractable. At the same time, counterfactual planning takes a design stance . It
does not try to understand or model all possible forms of agency, for
example it is not concerned with modeling agent-like behavior in
humans or organizations. The main interest is in clarifying how we
can design artificial agents that have certain safety properties. In the machine learning community, it is common to use agent models
where the agent is as a mechanism designed to approximate a certain
function as well as possible. The agent model in counterfactual
planning also treats machine learning as a function
approximation, but it constructs the agent by building additional
moving parts around the function approximation system. By
re-arranging these moving parts, compared to the standard
configuration that is implicitly assumed in most agent models, we can
create a counterfactual planner . This re-arrangement can also be interpreted as constructing an agent
that will use a customized decision theory , a decision theory that
is explicitly constructed to be flawed, because it will make the agent
ignore certain facts about the environment it is in. MIRI's discussion of decision
theory puts a strong emphasis on the problem an agent's
machine reasoning system may get
deeply confused and possibly dangerous when it does the wrong
type of self-referential reasoning. The solution to this problem
seems obvious to me: don't build agents that do the wrong type of
self-referential reasoning! So a lot of the paper is about describing and
designing complex forms of self-referencing. The paper (and this sequence) breaks with the LessWrong/Alignment
Forum mainstream, in that I have consciously avoided using the
terminology and examples of self-referential reasoning failure most
frequently used on this forum. Instead, I have aimed to frame
everything in the terminology of mainstream computer science and
machine learning. To readers of this forum, I hope that this will
make it more visible that mainstream academia has also been working on
these problems too, using a different terminology.

Defining counterfactuals

In some parts of the mainstream machine learning community,
counterfactuals have been routinely used to improve the performance of
the machine learning system, for example in poker, see this paper
from
2007 and in computational advertizing, see this paper from
2013 . In the computational fairness community counterfactuals have been
proposed as a way to define and compute fair decisions, in this key 2017 paper . In the fairness
community, there is also significant discussion about how easy or difficult
it may be to compute such counterfactuals see this recent book
chapter for an overview. In both cases above, the counterfactuals being constructed are Pearl's
counterfactuals based on Causal Models, as defined by Pearl around 2000 .
I'd say that the use of Pearl's system of counterfactuals is the
de-facto standard in the mainstream machine learning community. However, in the AGI safety/alignment community,
in particular in the part of the community represented here on the
Alignment Forum, the status of Pearl's causal models and
counterfactuals is much more complicated. The 2015 MIRI/FHI paper Corrigibility identified counterfactual reasoning as a possible solution direction
for creating AGI agent stop buttons. Counterfactual
reasoning is an open problem on MIRI's 2015 technical research
agenda . But much of the
work on counterfactual reasoning which has been
posted here has not engaged directly with Pearl's work.
The impression I have is that,
since 2015, several posters have been trying to define or clarify
notions of counterfactuals which are explicitly different from Pearl's
system.
These attempts have often used Bayesian updates as a building blocks.
This work on non-Pearlian counterfactuals has lead to interesting but also
sometimes confusing discussions and comment threads, see for example here . One partial explanation for this state of affairs may be that MIRI's
approach to alignment research is to take high-risk bets on developing
completely novel breakthroughs. They prefer to look for solutions in
places where the more mainstream academic and machine learning
communities are not looking. There is also the factor that Pearl's work is somewhat inaccessible.
Pearl's presentation of his mathematical system, both in his papers
and in the book Causality , seems
to have been written mainly for an audience of professional
statisticians, for example statisticians working in the medical field.
The presentation is not very accessible to a more general technical
audience. Pearl and Mackenzie's The Book of
Why is more accessible, but at the
cost of omitting the mathematical foundations of the notation. Nevertheless, in my experience, Pearl's mathematical system of causal
models and counterfactuals is both powerful and useful. So I have
built on this somewhat mainstream system to define counterfactual
planning in machine learning agents. But in the paper I have departed from Pearl's work by defining his
mathematical counterfactuals from scratch, in a way that explicitly
avoids the use of Pearl's framing, justifications, and explanations.
I depart from Pearl's framing by using the notion of mathematically
constructed world models as a central organizing theme. I am also building on recent work by Tom Everitt and others, who have been promoting the use of Pearl
causal models, and their graphical representation as Causal Influence
Diagrams, in the AGI safety community. Everitt et al. present
Causal Influence Diagrams primarily as an analytical device, to
explore the incentives of an agent.
I have gone one step further, and use the diagrams as a device to
fully define entire agents. This turns the diagrams into design
tools. In section 8 of the paper I show a design process that
creates indifference by redrawing the agent's planning world diagram.
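To make the Pearl-style counterfactual machinery discussed above concrete, here is a minimal sketch (my own toy illustration, not code from the paper) of the standard three-step abduction/action/prediction computation on a two-variable structural causal model:

```python
# Toy structural causal model:  X := U_x,  Y := 2*X + U_y.
# Counterfactual query: having observed (X, Y), what would Y have been had X been x'?

def abduction(x_obs, y_obs):
    # Step 1 (abduction): infer the exogenous terms consistent with the observation.
    u_x = x_obs               # from X := U_x
    u_y = y_obs - 2 * x_obs   # from Y := 2*X + U_y
    return u_x, u_y

def predict_counterfactual(u_y, x_intervened):
    # Step 2 (action): replace the mechanism for X with the intervention do(X = x').
    # Step 3 (prediction): recompute downstream variables with the inferred noise held fixed.
    return 2 * x_intervened + u_y

_, u_y = abduction(x_obs=1.0, y_obs=3.0)
print(predict_counterfactual(u_y, x_intervened=0.0))  # 1.0: Y would have been 1 had X been 0
```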
|
48a1f52b-5a2d-4794-8009-07beec7e2416
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Decision theory and "winning"
With much help from crazy88, I'm still developing my Decision Theory FAQ. Here's the current section on Decision Theory and "Winning". I feel pretty uncertain about it, so I'm posting it here for feedback. (In the FAQ, CDT and EDT and TDT and Newcomblike problems have already been explained.)
> One of the primary motivations for developing TDT is a sense that both CDT and EDT fail to reason in a desirable manner in some decision scenarios. However, despite acknowledging that CDT agents end up worse off in Newcomb's Problem, many (and perhaps the majority of) decision theorists are proponents of CDT. On the face of it, this may seem to suggest that these decision theorists aren't interested in developing a decision algorithm that "wins" but rather have some other aim in mind. If so then this might lead us to question the value of developing one-boxing decision algorithms.
>
> However, the claim that most decision theorists don’t care about finding an algorithm that “wins” mischaracterizes their position. After all, proponents of CDT tend to take the challenge posed by the fact that CDT agents “lose” in Newcomb's problem seriously (in the philosophical literature, it's often referred to as the Why ain'cha rich? problem). A common reaction to this challenge is neatly summarized in Joyce (1999, p. 153-154 ) as a response to a hypothetical question about why, if two-boxing is rational, the CDT agent does not end up as rich as an agent that one-boxes:
>
> > Rachel has a perfectly good answer to the "Why ain't you rich?" question. "I am not rich," she will say, "because I am not the kind of person [Omega] thinks will refuse the money. I'm just not like you, Irene [the one-boxer]. Given that I know that I am the type who takes the money, and given that [Omega] knows that I am this type, it was reasonable of me to think that the $1,000,000 was not in [the box]. The $1,000 was the most I was going to get no matter what I did. So the only reasonable thing for me to do was t
|
4ef9ce72-8622-4c2b-9f52-5406ee91f76e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
I'd take it
Out-of-context quote of the day:
> "...although even $10 trillion isn't a huge amount of money..."
From Simon Johnson, Director of the IMF's Research Department, on "The Rise of Sovereign Wealth Funds".
So if you had $10 trillion, what would you do with it?
|
20d411fc-4657-41a3-a16c-f8539caab53f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
That Thing That Happened
I am emotionally excited and/or deeply hurt by what st_rev wrote recently. You better take me seriously because you've spent a lot of time reading my posts already and feel invested in our common tribe. Anecdote about how people are tribal thinkers.
> That thing that happened shows that everything I was already advocating for is correct and necessary. Indeed it is time for everyone to put their differences aside and come together to carry out my recommended course of action. If you continue to deny what both you and I know in our hearts to be correct, you want everyone to die and I am defriending you.
I don't even know where to begin. This is what blueist ideology has been working towards for decades if not millennia, but to see it written here is hard to stomach even for one as used to the depravity caused by such delusions as I am. The lack of socially admired virtues among its adherents is frightening. Here I introduce an elaborate explanation of how blueist domination is not just completely obvious and a constant thorn in the side of all who wish more goodness but is achieved by the most questionable means often citing a particular blogger or public intellectual who I read in order to show how smart I am and because people I admire read him too. Followed by an appeal to the plot of a movie. Anecdote from my personal life. If you are familiar with the obscure work of an academic taken out of context and this does not convince you then you are clearly an intolerant sexual deviant engaging in motivated cognition.
> Consider well: do you want to be on the wrong side of history? If you persist, millions or billions of people you will never meet will be simultaneously mystified and appalled that an issue so obvious caused such needless contention. They will argue whether you were motivated more by stupidity, malice, raw interest, or if you were a helpless victim of the times in which you lived. Characters in fiction set in your era will inevitably be on (or at wors
|
6ddac8e0-777c-48ab-9c05-15f2434f5bd2
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Two senses of “optimizer”
The word “optimizer” can be used in at least two different ways.
First, a system can be an “optimizer” in the sense that it is solving a computational optimization problem. A computer running a linear program solver, a SAT-solver, or gradient descent, would be an example of a system that is an “optimizer” in this sense. That is, it runs an optimization algorithm. Let “optimizer_1” denote this concept.
Second, a system can be an “optimizer” in the sense that it optimizes its environment. A human is an optimizer in this sense, because we robustly take actions that push our environment in a certain direction. A reinforcement learning agent can also be thought of as an optimizer in this sense, but confined to whatever environment it is run in. This is the sense in which “optimizer” is used in posts such as this. Let “optimizer_2” denote this concept.
These two concepts are distinct. Say that you somehow hook up a linear program solver to a reinforcement learning environment. Unless you do the “hooking up” in a particularly creative way there is no reason to assume that the output of the linear program solver would push the environment in a particular direction. Hence a linear program solver is an optimizer_1, but not an optimizer_2. On the other hand, a simple tabular RL agent would eventually come to systematically push the environment in a particular direction, and is hence an optimizer_2. However, such a system does not run any internal optimization algorithm, and is therefore not an optimizer_1. This means that a system can be an optimizer_1 while not being an optimizer_2, and vice versa.
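A toy contrast in code (my own illustration, with deliberately simplistic stand-ins for both kinds of system):

```python
# optimizer_1: runs an internal optimization algorithm (gradient descent on
# (x - 3)^2), but its output is never hooked up to anything, so it does not
# push any environment in a particular direction.
def optimizer_1(steps=100, lr=0.1):
    x = 0.0
    for _ in range(steps):
        x -= lr * 2 * (x - 3.0)   # gradient step on the objective
    return x                       # ~3.0, the minimizer of the objective

# optimizer_2: no internal search at all, just a fixed reactive rule, yet
# repeatedly acting with it drives the environment's state toward a target.
def optimizer_2(env_state, target=20.0, steps=100):
    for _ in range(steps):
        env_state += 1.0 if env_state < target else -1.0
    return env_state               # ends up near the target from any starting point

print(optimizer_1())     # internal optimization, no environmental push
print(optimizer_2(3.0))  # environmental push, no internal optimization
```

A tabular RL agent like the one mentioned above would learn such a rule from reward rather than having it hard-coded, but the point carries over: the environment gets pushed in a particular direction without any internal optimization algorithm running.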
There are some arguments related to AI safety that seem to conflate these two concepts. In Superintelligence (pg 153), on the topic of Tool AI, Nick Bostrom writes that:
> A second place where trouble could arise is in the course of the software’s operation. If the methods that the software uses to search for a solution are sufficiently sophisticated, they may include provisio
|
55052d11-948a-4dda-830d-b24211ce5fe3
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Traditional Capitalist Values
Followup to: Are Your Enemies Innately Evil?, Policy Debates Should Not Appear One-Sided
> "The financial crisis is not the crisis of capitalism. It is the crisis of a system that has distanced itself from the most fundamental values of capitalism, which betrayed the spirit of capitalism."
> -- Nicolas Sarkozy
During the current crisis, I've more than once heard someone remarking that financial-firm CEOs who take huge bonuses during the good years and then run away when their black-swan bets blow up, are only exercising the usual capitalist values of "grab all the money you can get".
I think that a fair amount of the enmity in the world, to say nothing of confusion on the Internet, stems from people refusing to contemplate the real values of the opposition as the opposition sees it. This is something I've remarked upon before, with respect to "the terrorists hate our freedom" or "the suicide hijackers were cowards" (statements that are sheerly silly).
Real value systems - as opposed to pretend demoniacal value systems - are phrased to generate warm fuzzies in their users, not to be easily mocked. They will sound noble at least to the people who believe them.
Whether anyone actually lives up to that value system, or should, and whether the results are what they are claimed to be; if there are hidden gotchas in the warm fuzzy parts - sure, you can have that debate. But first you should be clear about how your opposition sees itself - a view which has not been carefully optimized to make your side feel good about its opposition. Otherwise you're not engaging the real issues.
So here are the traditional values of capitalism as seen by those who regard it as noble - the sort of Way spoken of by Paul Graham, or P. T. Barnum (who did not say "There's a sucker born every minute"), or Warren Buffett:
* Make things that people want, or do things that people want done, in exchange for money or other valuta. This is a great and noble and worthwhile ende
|
c6ac49f0-91c7-4ba7-a522-bc2c15df0045
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Rough Sketch for Product to Enhance Citizen Participation in Politics
For politics and governance in the US, does there exist a tool that:
* Prompts the user to enter their rough location (e.g., town, county, state)
* Prompts the user to select interest keywords (e.g., housing, animal welfare, cannabis)
* Lists pending local (town / county), state, and federal level laws and regulations w.r.t. these interests
* Lists current local (town / county), state, and federal level laws and regulations w.r.t. these interests
* Includes summaries of current local (town / county), state, and federal level laws and regulations for accessibility
* Lists special interest groups that typically support/counter laws and regulations relating to these interests at the local (town / county), state, and federal level
* Lists ways in which the user could influence laws and regulations w.r.t. these interests (e.g., a step-by-step tutorial for participating in a certain election)
If this tool doesn't exist, how much value would people get from it if it existed? How difficult would it be to implement each part? (please point me to any tools / organizations that roughly fulfill the duties outlined in the bullet points)
Lastly, w.r.t. the point
* Includes summaries of current local (town / county), state, and federal level laws and regulations for accessibility
I think GPT-X might work well for summarizing and distilling legal language - has this been done already?
|
dce0b47a-9d38-477b-a828-bc76cfad555c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Slack Double Crux, or how to negotiate with yourself
This is a post mostly about Slack. Slack in the sense of Zvi, Slack as the freedom to not be bound by an obligation to have to do anything. Slack is generally accepted to be good and worth pursuing. This post is about the phenomenon of actions which increase Slack also tending to decrease Slack, and what to do about that.
This post is also about IFS, because Internal Family Systems theory is, in my opinion, among the best psychotechnology available right now.
This is also my first post, I hope it is to someone's liking.
A very simple example for an action that both decreases and increases slack: Having certain days/hours in which you are not allowed to play video games.
This increases Slack: as long as you are addicted, aka got got by videogames, being able to not play them while you desire to opens up a window in which you are free to do anything else, aka the desirable Slack.
The problem however is that not being allowed to do something also naturally decreases Slack. Maybe playing a video game on this day, on this hour is already the perfect economic decision, something which Slack is supposed to help you achieve. Maybe you are so stressed that you need to decompress, and a video game is the perfect tool at your disposal. Maybe the video game is the artsy type, more Disco Elysium than Fortnite, and playing it will broaden your horizon, give you access to new culture or thought models, enrich your creativity, or do any of the other nice things that good art tends to do, and that option is superior to any of your other options at the time and fits you needs best.
So, it stands to reason that most actions actually increase and decrease Slack at different rates and in different aspects. Every decision to modify your behaviour will set rules for you to follow, which will decrease Slack, but benefit you by culling negative behaviour, which increases Slack. Abolishing rules does the inverse. So now that the impact of every action on Slack is fuzzy and complicated
|
af6e8500-b225-49df-b553-d3513d08e6af
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Crowley on Religious Experience
Reply to: The Sacred Mundane, BHTV: Yudkowsky vs. Frank on "Religious Experience"
Edward Crowley was a man of many talents. He studied chemistry at Cambridge - a period to which he later attributed his skeptical scientific outlook - but he soon abandoned the idea of a career in science and turned to his other passions. For a while he played competitive chess at the national level. He took to mountain-climbing, and became one of the early 20th century's premier mountaineers, co-leading the first expedition to attempt K2 in the Himalayas. He also enjoyed writing poetry and travelling the world, making it as far as Nepal and Burma in an era when steamship was still the fastest mode of transportation and British colonialism was still a thin veneer over dangerous and poorly-explored areas.
But his real interest was mysticism. He travelled to Sri Lanka, where he studied meditation and yoga under some of the great Hindu yogis. After spending several years there, he achieved a state of mystical attainment the Hindus call dhyana, and set about trying to describe and promote yoga to the West.
He was not the first person to make the attempt, but he was certainly the most interesting. Although his parents were religious fanatics and his father a fundamentalist preacher, he himself had been an atheist since childhood, and he considered the vast majority of yoga to be superstitious claptrap. He set about eliminating all the gods and chants and taboos and mysterian language, ending up with a short system of what he considered empirically validated principles for gaining enlightenment in the most efficient possible way.
Reading Crowley's essay on mysticism and yoga at age seventeen rewrote my view of religion. I had always wondered about eastern religions like Buddhism and Hinduism, which seemed to have some underlying truth to all their talk of "enlightenment" and "meditation" but which seemed too vague and mysterious for my liking. Crowley stripped the mystery away in one fel
|
2f6f2451-b0fb-477d-9ed9-f82946df44c4
|
StampyAI/alignment-research-dataset/arbital
|
Arbital
|
Subspace
A subspace $U=(F_U, V_U)$ of a [vector space](https://arbital.com/p/3w0) $W=(F_W, V_W)$ is a vector space where $F_U = F_W$ and $V_U$ is a [https://arbital.com/p/subgroup](https://arbital.com/p/subgroup) of $V_W,$ and $V_U$ is [closed](https://arbital.com/p/) under scalar multiplication.
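Equivalently (a standard restatement added for readability, not part of the original Arbital text): a nonempty subset $V_U \subseteq V_W$ is a subspace of $W$ exactly when it satisfies the two closure conditions

$$\forall u, v \in V_U,\ \forall c \in F_W: \quad u + v \in V_U \quad \text{and} \quad c\,u \in V_U.$$

Closure under addition, together with $0 = 0 \cdot u \in V_U$ and $-u = (-1) \cdot u \in V_U$ (both consequences of scalar closure), is what makes $V_U$ a [https://arbital.com/p/subgroup](https://arbital.com/p/subgroup) of $(V_W, +)$.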
|
ffe19e01-5687-4df2-9593-74bce2ff6f0f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
We must be very clear: fraud in the service of effective altruism is unacceptable
I care deeply about the future of humanity—more so than I care about anything else in the world. And I believe that Sam and others at FTX shared that care for the world.
Nevertheless, if some hypothetical person had come to me several years ago and asked “Is it worth it to engage in fraud to send billions of dollars to effective causes?”, I would have said unequivocally no.
At this stage, it is quite unclear just from public information exactly what happened to FTX, and I don't want to accuse anyone of anything that they didn't do. However, I think it is starting to look increasingly likely that, even if FTX's handling of its customer's money was not technically legally fraudulent, it seems likely to have been fraudulent in spirit.
And regardless of whether FTX's business was in fact fraudulent, it is clear that many people—customers and employees—have been deeply hurt by FTX's collapse. People's life savings and careers were very rapidly wiped out. I think that compassion and support for those people is very important. In addition, I think there's another thing that we as a community have an obligation to do right now as well.
----------------------------------------
Assuming FTX's business was in fact fraudulent, I think that we—as people who unknowingly benefitted from it and whose work for the world was potentially used to whitewash it—have an obligation to condemn it in no uncertain terms. This is especially true for public figures who supported or were associated with FTX or its endeavors.
I don't want a witch hunt, I don't think anyone should start pulling out pitchforks, and so I think we should avoid a focus on any individual people here. We likely won't know for a long time exactly who was responsible for what, nor do I think it really matters—what's done is done, and what's important now is making very clear where EA stands with regards to fraudulent activity, not throwing any individual people under the bus.
Right now, I think the best course of a
|
c252e03a-c3ca-495a-8f17-9329defcd645
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Approval-directed bootstrapping
Approval-directed behavior works best when the overseer is very smart. Where can we find a smart overseer?
One approach is *bootstrapping*. By thinking for a long time, a weak agent can oversee an agent (slightly) smarter than itself. Now we have a slightly smarter agent, who can oversee an agent which is (slightly) smarter still. This process can go on, until the intelligence of the resulting agent is limited by technology rather than by the capability of the overseer. At this point we have reached the limits of our technology.
This may sound exotic, but we can implement it in a surprisingly straightforward way.
Suppose that we evaluate Hugh’s approval by predicting what Hugh would say if we asked him; the rating of action *a* is what Hugh would say if, instead of taking action *a,* we asked Hugh, “How do you rate action *a*?”
Now we get bootstrapping almost for free. In the process of evaluating a proposed action, Hugh can consult Arthur. This new instance of Arthur will, in turn, be overseen by Hugh—and in this new role Hugh can, in turn, be assisted by Arthur. In principle we have defined the entire infinite regress before Arthur takes his first action.
We can even learn this function by examples — no elaborate definitions necessary. Each time Arthur proposes an action, we actually ask Hugh to evaluate the action with some probability, and we use our observations to train a model for Hugh’s judgments.
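A self-contained toy sketch of that loop (every specific here, including the one-dimensional environment, the tabular "approval model", and the query probability, is my own illustrative choice rather than something from the original post):

```python
import random
from collections import defaultdict

TARGET = 10            # what the overseer ("Hugh") actually cares about
ACTIONS = (-1, +1)

def hugh_rates(state, action):
    # Overseer's judgment: higher ratings for actions that move the state toward TARGET.
    return -abs((state + action) - TARGET)

def run(episodes=2000, query_prob=0.2):
    approval = defaultdict(float)   # Arthur's learned model of Hugh's ratings
    counts = defaultdict(int)
    state = 0
    for _ in range(episodes):
        # Arthur takes the action his approval model currently scores highest
        # (random tie-breaking so every action gets tried early on).
        action = max(ACTIONS, key=lambda a: (approval[(state, a)], random.random()))
        # With some probability, actually ask Hugh and train on the answer.
        if random.random() < query_prob:
            rating = hugh_rates(state, action)
            counts[(state, action)] += 1
            approval[(state, action)] += (rating - approval[(state, action)]) / counts[(state, action)]
        state = max(0, min(2 * TARGET, state + action))
    return state

print(run())   # typically ends up hovering near TARGET
```

The bootstrapping step would then replace hugh_rates with a Hugh who can consult a trained copy of Arthur while forming his judgment; that part is not modeled in this sketch.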
In practice, Arthur might not be such a useful assistant until he has acquired some training data. As Arthur acquires training data, the Hugh+Arthur system becomes more intelligent, and so Arthur acquires training data from a more intelligent overseer. The bootstrapping unfolds over time as Arthur adjusts to increasingly powerful overseers.
---
*This was originally posted [here](https://ai-alignment.com/approval-directed-bootstrapping-5d49e886c14f) on 21st December 2014.*
*Tomorrow's AI Alignment Forum sequences will take a break, and tomorrow's post will be Issue #34 of the Alignment Newsletter.*
*The next post in this sequence is 'Humans consulting HCH', also released today.*
|
8aad11eb-a610-436d-ae1d-5d2510a17c97
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Humans interpreting humans
In a previous post, I showed how, given certain normative assumptions, one could distinguish agents H for whom anchoring was a bias, from those H′ for which it was a preference.
But agent H′ looks clearly ridiculous - how could anchoring be a preference, it makes no sense. And I agree with that assessment! H′'s preferences make no sense - if we think of it as a human.
Humans model each other in very similar ways
This is another way in which I think we can extract human preferences: using the fact that human models of each other, and self-models, are all incredibly similar. Consider the following astounding statements:
* If somebody turns red, shouts at you, then punches you in the face, they are probably angry at you.
* If somebody is drunk, they are less rational at implementing long-term plans.
* If somebody close to you tells you an intimate secret, then they probably trust you.
Most people will agree with all those statements, to a large extent - including the "somebody" being talked about. But what is going on here? Have I not shown that you can't deduce preferences or rationality from behaviour? It's not like we've put the "somebody" in an FMRI scan to construct their internal model, so how do we know?
The thing is, that natural selection is lazy, and a) different humans use the same type of cognitive machinery to assess each other, and b) individual humans tend to use their own self-assessment machinery to assess other humans. Consequently, there tends to be large agreement between our own internal self-assessment models, our models of other people, other people's models of other people, and other people's self-assessment models of themselves:
This agreement is not perfect, by any means - I've mentioned that it varies from culture to culture, individual to individual, and even within the same individual. But even so, we can add the normative assumption:
* β: If H is a human and G another human, then G's models of H's preferences and rationality are in
|
a0077975-55a4-431b-a534-dbd07bd96ade
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Book Recommendations for social skill development?
I have come to a realisation a bit later than I should have. Although I am still quite young and definitely have time to act on this realisation now, I wish I had started sooner.
I am studying to become a teacher, and I hope to go into education policy later, with quite some large ambition in mind. And yet, my social skills are quite poor, and I have hardly any charisma. I seek to change this. I know that much of the cause of my poor social skills is never having created or found opportunities to develop them in the natural developmental path of a child/teenager.
And so I take to reading books in order to learn, and then apply what I read in life. I suppose I could just sit and think and figure out what to improve, but in the name of efficiency I want to at least start with the guidance of someone who actually knows what they're talking about.
So, any book recommendations that explicitly teach social skills and charisma? I've started working through Just Listen by Mark Goulston, which so far seems quite valuable.
|
78a54f00-f58a-4361-b12c-e1f7c32ca1b7
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Defining causal isomorphism
I previously posted this question in another discussion, but it didn't get any replies so, since I now have enough karma, I've decided to make it my first "article".
> This brings up something that has been on my mind for a long time. What are the necessary and sufficient conditions for two computations to be (homeo?)morphic? This could mean a lot of things, but specifically I'd like to capture the notion of being able to contain a consciousness, so what I'm asking is, what we would have to prove in order to say program A contains a consciousness --> program B contains a consciousness. "pointwise" isomorphism, if you're saying what I think, seems too strict. On the other hand, allowing any invertible function to be a _morphism doesn't seem strict enough. For one thing we can put any reversible computation in 1-1 correspondence with a program that merely stores a copy of the initial state of the first program and ticks off the natural numbers. Restricting our functions by, say, resource complexity, also seems to lead to both similar and unrelated issues...
Any takers?
|
b63caf85-df5d-43e9-9b50-cbfee10b5e32
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Gates 2017 Annual letter
|
ad3b7c3f-660c-40ec-a7e2-5266e259ed17
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Lightcone Infrastructure/LessWrong is looking for funding
Lightcone Infrastructure is looking for funding and are working on the following projects:
* We run LessWrong, the AI Alignment Forum, and have written a lot of the code behind the Effective Altruism Forum.
* During 2022 and early 2023 we ran the Lightcone Offices, and are now building out a campus at the Rose Garden Inn in Berkeley, where we've been doing repairs and renovations for the past few months.
* We've also been substantially involved in the Survival and Flourishing Fund's S-Process (having written the app that runs the process) and are now running Lightspeed Grants.
* We also pursue a wide range of other smaller projects in the space of "community infrastructure" and "community crisis management". This includes running events, investigating harm caused by community institutions and actors, supporting programs like SERI MATS, and maintaining various small pieces of software infrastructure.
If you are interested in funding us, please shoot me an email at habryka@lesswrong.com (or if you want to give smaller amounts, you can donate directly via PayPal here).
Funding is quite tight since the collapse of FTX, and I do think we work on projects that have a decent chance of reducing existential risk and generally making humanity's future go a lot better, though this kind of stuff sure is hard to tell. We are looking to raise around $3M to $6M for our operations in the next 12 months. [1]
Edit (June 23): I've now given a lot more details on how we operate and what we work on in the comments. I would recommend checking them out if you want to more context on our work.
1. ^
Two draft readers of this post expressed confusion that Lightcone needs money, given that we just announced a funding process that is promising to give away $5M in the next two months. The answer to that is that we do not own the money moved via Lightspeed Grants and are only providing grant recommendations to Jaan Tallinn and other funders.
We do separately apply for
|
70ec5820-0e0f-49f6-83d1-9c692d37e909
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Against Instrumental Convergence
[Instrumental convergence](https://arbital.com/p/instrumental_convergence/) is the idea that every sufficiently intelligent agent would exhibit behaviors such as self preservation or acquiring resources. This is a natural result for maximizers of simple utility functions. However I claim that it is based on a false premise that agents must have goals.
### What are goals?
For agents constructed as utility maximizers, the goal coincides with utility maximization. There is no doubt that for most utility functions, a utility maximizer would exhibit instrumental convergence. However, I claim that most minds in the mindspace are not utility maximizers in the usual sense.
It may be true that for every agent there is a utility function that it maximizes, in the spirit of VNM utility. However these utility functions do not coincide with goals in the sense that instrumental convergence requires. These functions are merely encodings of the agent's decision algorithm and are no less complex than the agent itself. No simple conclusions can be made from their existence.
Humans exhibit goals in the usual sense and arguably have VNM utility functions. Human goals often involve maximizing some quantity, e.g. money. However explicit human goals represent only a small fraction of their total utility computation. Presumably, the goals explain the extent to which some humans exhibit instrumental convergence, and the rest of the utility function explains why we haven't yet tiled the universe with money.
What about non-human non-utility-maximizer agents? Certainly, some of them can still exhibit instrumental convergence, but I will argue that this is rare.
### What is the average mind in the mindspace?
The "mindspace" refers to some hypothetical set of functions or algorithms, possibly selected to meet some arbitrary definition of intelligence. I claim that most minds in the mindspace do not share any properties that we have not selected for. In fact, most minds in the mindspace do nothing even remotely useful. Even if we explicitly selected a random mind from a useful subset of the mindspace, the mind would most likely do nothing more than the bare minimum we required. For example, if we search for minds that are able to run a paperclip factory, we will find minds that run paperclip factories well enough to pass our test, but not any better. Intelligence is defined by the ability to solve problems, not by the degree of agency.
Without a doubt, somewhere in the mindspace there is a mind that will run the paperclip factory, acquire resources, and eventually tile the universe with paperclips, however it is not the only mind out there. There is also a mind that runs the paperclip factory and then when it has nothing better to do, shuts down, sits in an empty loop, dreams of butter, or generates bad harry potter fanfic.
With this in mind, random searches in the mindspace are relatively safe, even if the minds we find aren't well aligned. Though it would be lovely to be certain that a new mind is not the "tile the universe with paperclips" kind.
### Caveats and half-baked ideas
* This post comes from trying to understand why I don't find the threat of AI as inevitable as some suggest. In other words, it's a rationalization.
* I have a limited understanding of what MIRI does and what assumptions it has. I'm under the impression that they are primarily working on utility maximizers, and that instrumental convergence is important to them. But it's likely that points similar to mine have been made and either accepted or countered.
* In the post I make empirical claims about the composition of the mindspace, I have obviously not verified them, if they are even verifiable. The claims seem trivial to me, but may well not be that strong.
* While simple utility maximizing minds are not common, it's possible that they are the smallest minds that can pass our tests, or that they have other special properties that would make semi-random searches find them more often than we'd like.
|
56569a53-a13e-4cf7-855c-c5f2d6c6b02d
|
trentmkelly/LessWrong-43k
|
LessWrong
|
You can be wrong about what you like, and you often are
Meta: I'm not saying anything new here. There has been a lot of research on the topic, and popular books like Stumbling on Happiness have been written. Furthermore, I don't think I have explained any of this particularly well, or provided particularly enlightening examples. Nevertheless, I think these things are worth saying because a) a lot of people have an "I know what I like" attitude, and b) this attitude seems pretty harmful. Just be sure to treat this as more of an exploratory post than an authoritative one.
I think that the following attitudes are very common:
* I'm just not one of those people who enjoys "deeper" activities like reading a novel. I like watching TV and playing video games.
* I'm just not one of those people who likes healthy foods. You may like salads and swear by them, but I am different. I like pizza and french fries.
* I'm just not an intellectual person. I don't enjoy learning.
* I'm just not into that early retirement stuff. I need to maintain my current lifestyle in order to be happy.
* I'm just not into "good" movies/music/art. I like the Top 50 stuff.
Imagine what would happen if you responded to someone who expressed one of these attitudes by saying "I think that you're wrong." Often times, the response you'll get is something along the lines of:
> Who are you to tell me what I do and don't like? How can you possibly know? I'm the one who's in my own head. I know how these things make me feel.
When I think about that response, I think about optical illusions. Consider this one:
When I think about that response, I think about the following dialog:
> Me: A and B are the same shade of gray.
> Person: No they're not! WTF are you talking about? How can you say that they are? I can see with my eyes that they're not!
I understand the frustration. It feels like they're different shades. It feels like it is stupidly obvious that they're different shades.
And it feels like you know what you like.
But sometimes, sometimes your
|
a24489d5-2cf1-47a5-bb68-23036759f0be
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
How I Lost 100 Pounds Using TDT
Background Information: [Ingredients of Timeless Decision Theory](/lw/15z/ingredients_of_timeless_decision_theory/)
Alternate Approaches Include: [Self-empathy as a source of “willpower”](/lw/2yd/selfempathy_as_a_source_of_willpower/), [Applied Picoeconomics](/lw/ep/applied_picoeconomics/), [Akrasia, hyperbolic discounting, and picoeconomics](/lw/6c/akrasia_hyperbolic_discounting_and_picoeconomics/), [Akrasia Tactics Review](/lw/1sm/akrasia_tactics_review/)
Standard Disclaimer: [Beware of Other-Optimizing](/lw/9v/beware_of_otheroptimizing/)
Timeless Decision Theory (or TDT) allowed me to succeed in gaining control over when and how much I ate in a way that previous attempts at precommitment had repeatedly failed to do. I did so well before I was formally exposed to the concept of TDT, but once I clicked on TDT I understood that I had effectively been using it. That click came from reading Eliezer’s shortest summary of TDT, which was:
>
> The one-sentence version is: Choose as though controlling the logical output of the abstract computation you implement, including the output of all other instantiations and simulations of that computation
>
>
>
You can find more [here](/lw/15z/ingredients_of_timeless_decision_theory/) but my recommendation at least at first is to stick with the one sentence version. It is as simple as it can be, but no simpler.
Utilizing TDT gave me several key abilities that I previously lacked. The most important was realizing that what I chose now would be the same choice I would make at other times under the same circumstances. This allowed me to compare having the benefits now to paying the costs now, as opposed to paying costs now for future benefits later. This ability allowed me to overcome [hyperbolic discounting](http://en.wikipedia.org/wiki/Hyperbolic_discounting). The other key ability was that it freed me from the need to explicitly stop in advance to make precommitements each time I wanted to alter my instinctive behavior. Instead, it became automatic to make decisions in terms of which rules would be best to follow.
With that as background, this is how I made it happen:
I was walking home from class along my usual route. I had made a habit while doing this of stopping into Famiglia Pizza and ordering garlic knots. I like garlic knots quite a bit, but I also hated being fat and the way being fat made me feel. Things weren’t quite as bad on that front as they’d been a few years before but they were still extraordinarily bad. I thought about my impending solace and thought to myself: You wouldn’t be so fat if you didn’t keep buying these garlic knots every day.
I thought about that for a second, realized it was trivially true and then wondered to myself whether it was worth it. If I never stopped for the knots I would weigh less and feel better, but I wouldn’t have any knots. Even worse, I wouldn’t have any garlic. But would I rather enjoy today the full effect of never having had the knots, in exchange for not having any? Once I asked the question that way the answer came back a resounding yes. I didn’t know how much it would matter, but the calculation wasn’t remotely close. I walked right past the pizza place and never stopped in there for a snack again.
Using this method seemed like the most useful thing I’d come up with in some time, so I quickly extended it to other decisions starting with the rest of my diet. For each meal I would consume, I decided what quantity was worth it and forbade myself from ever consuming more. I motivated myself to stick to that rule in the face of hyperbolic discounting by reminding myself that I would make the same decision next time that I was making now, so I was deciding what action I would always take in this situation. More generally, sticking to the rules I’d decided to follow meant I would stick to rules I’d decided to follow, which was clearly an extremely valuable asset to have on my side.
I used two other major rules in what I like to call the “Don’t Eat So Goddamn Much, Shut Your Pie Hole” diet. The first was to cut down from three meals a day to two and eliminate all snacks except water, cutting my consumption by more than a third. I’d had practice skipping meals in the past and realized that skipping dinner was far less painful than it looked; within a few weeks I stopped getting hungry at night. The other change was to weigh myself daily and alter how draconian the rules were based on my current weight relative to my current baseline. If I was below the baseline, I’d lower the baseline and give myself a chance to cheat a little. If I was above it by too much I would cut out all meal options that weren’t “wins” in the sense that they had more calories than my average.
I tried incorporating exercise into this program but made the discovery many others have made that exercise didn’t correlate with weight loss. Exercise makes you better at doing exercise so long as you keep doing exercise, but it had no measurable effect on my mission so I decided to let that wait until after the mission was complete. Even then I found several exercise programs I tried to be not worth it compared to not having one, or found that they became so over time. Eventually I was able to find a trainer and I remain happy with that aside from the cost. I also considered changing what I ate, but found that beyond cutting out the worst choices that it was neither necessary nor worth the cost.
The last obstacle on the journey was that as I lost more and more I started to feel worse rather than better due to all of the excess skin that doesn’t go away on its own. It was only after I’d lost all the weight and [had the resulting skin removal surgery](http://www.emedicinehealth.com/excess_skin_removal_after_extreme_weight_loss/article_em.htm) that I suddenly got up and felt genuinely good about how I looked and felt for the first time in my life. I’ve since managed to relax a number of the rules but was never concerned I wouldn’t do what was necessary to keep myself on track.
Since then I’ve used similar techniques and rules in a wide variety of areas of life. It was only years later reading Less Wrong that I realized that I’d effectively been employing inter-temporal Timeless Decision Theory. That realization allowed me to better understand and formalize what I had done, and gave me a better framework for explaining it to others. A common and justified criticism of using TDT in everyday life rather than as a theoretical construct is to ask where one can find another TDT agent, or indeed any agent sufficiently causally linked to you so as to allow you to utilize that link. My answer to that is that whether or not there is someone else you are linked to yourself. You can be that other agent, the recognition of which can allow you to win and win big.
I am fully aware that to a first approximation dieting attempts that follow similar patterns never work. Most people do not have the willpower necessary to sustain them, or otherwise suffer too much to choose to remain on the diet long term. There are powerful forces working against such an attempt. My working hypothesis is that I had five unusual things working in my favor: I have extraordinarily strong willpower in such areas, I already had strong affinity for rule setting and abiding, I fully believed in what I was doing, I had a life situation that allowed me to experience temporary discomfort due to hunger and I thought of all changes from the beginning as permanent. At least some of these advantages are things that can be learned. If anyone is capable of following in my footsteps, it would be Less Wrong readers. In [New York’s Less Wrong group](http://www.meetup.com/Less-Wrong-Overcoming-Bias-NYC/) especially a lot of us have had success with various different approaches, and I think that developing mental techniques is the best way to enhance your chance of success.
|
766b1014-2b21-48a4-b8b5-12dc47191f52
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Neural Nets in Python 1
Introduction
This post is an attempt to explain how to write a neural network in Python using numpy. I am obviously not the first person to do this. Almost all of the code here is adapted from Michael Nielsen's fantastic online book Neural Networks and Deep Learning. Victor Zhou also has a great tutorial in Python. Why am I trying to do the same? Partially, it's for my own benefit, cataloging my code so I can refer back to it later in a form more captivating than a mere docstring. Also partially, I think I can share a few intuitions which make the backpropagation equations a lot easier to derive.
Okay, so here's a typical picture of a neural network:
If you're new to all this: A neural network is a function that takes in an input vector (or matrix) and outputs another vector (or matrix). The input starts at the leftmost vertical layer of nodes and then gets transformed, via a series of operations, to the rightmost vertical layer of nodes. Each layer is a linear combination of the layer before it, followed by an activation function, which is applied to each node. In other words, a neural net is parameterized by a series of weight matrices W1,W2,.., a series of bias vectors, b1,b2,..., and an activation function a (typically a nonlinear function like tanh(x) which is applied element-wise).
The typical picture, while good for representing the general idea of a neural net, does not do a good job of showing the different operations being performed. I prefer representing a neural net as a computational graph, like below:
Here, it's clearer to see how each node is a function of the step before it. A normal three-layer neural network is given by the following composition of functions:
f0 = X = input
f1 = W1⋅f0 + b1
f2 = a(f1)
f3 = W2⋅f2 + b2
f4 = a(f3) = Ŷ = predicted output
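Here is a minimal numpy sketch of this forward pass (the layer sizes, random seed, and tanh activation are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 1))                           # f0: input column vector with 4 features

W1, b1 = rng.normal(size=(5, 4)), np.zeros((5, 1))    # first linear layer
W2, b2 = rng.normal(size=(3, 5)), np.zeros((3, 1))    # second linear layer
a = np.tanh                                           # element-wise activation

f1 = W1 @ X + b1    # linear combination of the previous layer
f2 = a(f1)          # hidden activations
f3 = W2 @ f2 + b2
f4 = a(f3)          # predicted output, shape (3, 1)
print(f4.ravel())
```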
This recursive definition will make it easy to derive the backpropagation algorithm, which we'll use to train our network. It also allows us to easily unroll the function, if we want to see what's going
|
dfac2702-4c11-4b7c-b587-79635a50f90f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
The Gunas: A Model for Mental States
|
fb05c16e-75e4-4609-b957-821b0c9b9900
|
trentmkelly/LessWrong-43k
|
LessWrong
|
"Publish or Perish" (a quick note on why you should try to make your work legible to existing academic communities)
This is a brief, stylized recounting of a few conversations I had at some point last year with people from the non-academic AI safety community:[1]
Me: you guys should write up your work properly and try to publish it in ML venues.
Them: well that seems like a lot of work and we don't need to do that because we can just talk to each other and all the people I want to talk to are already working with me.
Me: What about the people who you don't know who could contribute to this area and might even have valuable expertise? You could have way more leverage if you can reach those people. Also, there is increasing interest from the machine learning community in safety and alignment... because of progress in capabilities people are really starting to consider these topics and risks much more seriously.
Them: okay, fair point, but we don't know how to write ML papers.
Me: well, it seems like maybe you should learn or hire people to help you with that then, because it seems like a really big priority and you're leaving lots of value on the table.
Them: hmm, maybe... but the fact is, none of us have the time and energy and bandwidth and motivation to do that; we are all too busy with other things and nobody wants to.
Me: ah, I see! It's an incentive problem! So I guess your funding needs to be conditional on you producing legible outputs.
Me, reflecting afterwards: hmm... Cynically,[2] not publishing is a really good way to create a moat around your research... People who want to work on that area have to come talk to you, and you can be a gatekeeper. And you don't have to worry about somebody with more skills and experience coming along and trashing your work or out-competing you and rendering it obsolete...
EtA: In comments, people have described adhering to academic standards of presentation and rigor as "jumping through hoops". There is an element of that, but this really misses the value that these standards have to the academic community. This is a long
|
e5a1bcba-91b3-4f68-bd9c-a7373caab3eb
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Safe probability manipulation, superweapons, and stable self-improvement research
When I first discovered that even UDT would fail on certain "fair" problems (via "Evil" decision problems in provability logic), I was disappointed. It seemed to make things harder, in that there was no ideal decision theory, even on "fair" problems.
But then I realised I'd been handed a superweapon of extreme power. It's a tool that allows us to constrain an AI of arbitrary intelligence to have the probabilistic beliefs that we desire.
For instance, take the "X" procedure that I used in the rather under-defined "counterfactual" post. We want the real probability of X=0 to be low, but for the AI to be unable to influence it (and, incidentally, unable to calculate it precisely).
Suppose that X could be 0, 1, etc... up to n. The AI runs a certain probability algorithm P. Then the procedure for calculating X is to run P on X=0, X=1, etc... X=n, and output the m for which P was lowest. In case of a tie, it outputs the highest number.
It seems clear that the AI must put P(X=0)=P(X=1)=...=P(X=n)=1/(n+1), however we know that X=n (since the values are tied).
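A toy sketch of that procedure (the stand-in P below is mine; nothing hinges on its details):

```python
def choose_X(P, n):
    # Run P on every candidate value of X and output the value P rates least
    # likely, breaking ties in favour of the highest number.
    scores = [P(m) for m in range(n + 1)]
    lowest = min(scores)
    return max(m for m in range(n + 1) if scores[m] == lowest)

uniform_P = lambda m: 1.0         # a P that cannot distinguish the candidates
print(choose_X(uniform_P, n=4))   # everything ties, so the output is 4 (= n)
```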
Basically we can use this to force the AI to be unable to influence or estimate certain events, even if it's much smarter than us.
This might provide a partial answer to Paul Christiano's question about the point of researching stable self-improvement. Because this kind of trick is not easily stable under self-improvement (eg if the AI upgrades P to P′ which can calculate stuff about P).
Similarly, my ideas about reduced impact require using the AI's predictive ability against the AI.
These seem to be cases where we can't just build a human level AI and "let it sort out stable self-improvement". Because here we've added constraints that the AI might "want" to get rid of, and that it potentially could. So it would be useful to know these constraints are stable.
|
fbc0002b-9db5-4eef-aff6-c3814fa12c34
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Welcome to Norfolk Rationalists [Hampton Roads, Virginia]
Hi!
The meetups I organize are open to all, and unless mentioned otherwise are focused on building a social group and community in the area. You do not need special skills nor background familiarity to attend these meetups, if it seems like something you'd be interested in please don't hesitate to show up, we welcome all at our main (social) meetups!
If there are enough group members and meetup attendees, and enough interest in specific ideas then I'm happy to organize smaller more focused on something specific / productive kinds of meetups and events.
Each social meetup will include 1 or 2 suggested readings so that we can have a common shared topic to start from, but meandering as far as our interests take us is welcome!
Note that even if you are only interested in smaller, specific / focused meetups, it is strategically beneficial to build up a social rationalist community in your area, because that's where such groups and events typically draw from. Thus, if you don't want to attend social meetups here, please still reach out to me so I can connect you with anyone else who may share your same interest in the area, if they are available and willing.
For a little more info about me and this area, please see this post or click on my profile.
We have a Google group! Norfolk Rationalists: norfolk-rationalists@googlegroups.com. Join so that you can receive a calendar invite for any scheduled meetups.
Cheers,
Willa
|
d6b94a81-e1b4-4fc8-afad-b03c78da2d44
|
trentmkelly/LessWrong-43k
|
LessWrong
|
AI #15: The Principle of Charity
The sky is not blue. Not today, not in New York City. At least it’s now mostly white, yesterday it was orange. Even indoors, everyone is coughing and our heads don’t feel right. I can’t think fully straight. Life comes at you fast.
Thus, I’m going with what I have, and then mostly taking time off until this clears up. Hopefully that won’t be more than a few more days.
The Principle of Charity comes into play this week because of two posts, by people I greatly respect as thinkers and trust to want good things for the world, making arguments that are remarkably terrible. I wrote detailed responses to the arguments within, then realized that was completely missing the point, and deleted them. Instead, next week I plan to explain my model of what is going on there – I wish they’d stop doing what they are doing, and think they would be wise to stop doing it, but to the extent I am right about what is causing these outputs, I truly sympathize.
For a day we were all talking about a Vice story that sounded too good (or rather, too perfect) to be true, and then it turned out that it was indeed totally made up. Time to take stock of our epistemic procedures and do better next time.
TABLE OF CONTENTS
1. Introduction
2. Table of Contents
3. Language Models Offer Mundane Utility. Quite a lot, actually.
4. Language Models Don’t Offer Mundane Utility. Not with that attitude.
5. Deepfaketown and Botpocalypse Soon. Talk to your… parents?
6. Fun with Image Generation. Investors, man.
7. Vigilance. It must be eternal, until it won’t be enough.
8. Introducing. Falcon the open source model, Lightspeed Grants and more.
9. In Other AI News. Senator asks Meta the question we all ask: Llama?
10. They Took Our Jobs. First they came for the copywriters.
11. Out of the Box Thinking. A hard to detect attack on audio activation devices.
12. I Was Promised Driverless Cars. You’ll get them. Eventually.
13. If It Sounds Too Good to be True. Guess what?
14. Quiet Speculat
|
8f1f8175-5f59-484a-bba4-f3534faabaa6
|
StampyAI/alignment-research-dataset/aisafety.info
|
AI Safety Info
|
What is inner alignment?
[Video: The OTHER AI Alignment Problem: Mesa-Optimizers and Inner Alignment](https://www.youtube.com/embed/bJLcIBixGj8)
Inner alignment asks, “Is the model *trying to do* what humans have specified it should do?”, or in other words, can we robustly aim our AI optimizers[^kix.uzi4pwwm8bjh] at anything at all?
This is distinguished from [outer alignment](https://www.lesswrong.com/tag/outer-alignment), which deals with the problem of ‘what should we aim the AI optimizer at?’ In other words, outer alignment is the problem of correctly and adequately specifying what we want an AI to do.
More specifically, inner alignment is the problem of ensuring that any [mesa-optimizer](https://www.alignmentforum.org/tag/mesa-optimization) (i.e. a trained machine learning system which is itself an optimizer) is aligned with the objective function of the training process.
The term was first defined in the Hubinger et al. paper [Risk from Learned Optimization](https://arxiv.org/abs/1906.01820):
*> We refer to this problem of aligning mesa-optimizers with the base objective as the inner alignment problem. This is distinct from the outer alignment problem, which is the traditional problem of ensuring that the base objective captures the intended goal of the programmers.*
You can have both inner and outer alignment failures together. It is not a dichotomy and [often even experienced alignment researchers are unable to tell them apart](https://www.alignmentforum.org/posts/JKwrDwsaRiSxTv9ur/categorizing-failures-as-outer-or-inner-misalignment-is). Ideally, we don't think of a dichotomy of inner and outer alignment that can be tackled individually but of a more holistic alignment picture that includes the interplay between both inner and outer alignment approaches.
As an analogy: natural selection is an optimization force that 'designed' optimizers (e.g. humans) to achieve its goals. However, humans no longer primarily maximize reproductive success; they instead use birth control while still attaining the pleasure that evolution ‘meant’ as a reward for attempts at reproduction. This is a failure of inner alignment.
To solve the inner alignment problem, some sub-problems that we would have to make progress on include [deceptive alignment](/?state=8EL6&question=What%20is%20deceptive%20alignment%3F), [distribution shifts](https://www.alignmentforum.org/tag/distributional-shifts), and [gradient hacking](https://www.lesswrong.com/tag/gradient-hacking).
[^kix.uzi4pwwm8bjh]: Because most current AIs/AI models are implemented as optimizers, i.e. using the stochastic gradient descent (SGD) optimization/search algorithm, the terms model/AI/optimizer are often used interchangeably.
|
542471c8-3510-48eb-beba-a4337b06449a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Critical Thinking in Medicine
This is a book review of Cognitive Errors and Diagnostic Mistakes.
Though I haven't read the book, it seems very interesting and I think many others here will find it interesting.
Here's the description of the article:
> Cognitive Errors and Diagnostic Mistakes is a superb new guide to critical thinking in medicine written by Jonathan Howard. It explains how our psychological foibles regularly bias and betray us, leading to diagnostic mistakes. Learning critical thinking skills is essential but difficult. Every known cognitive error is illustrated with memorable patient stories.
|
20093512-70b2-44d7-8647-2413bb8d8c99
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Cultural accumulation
Crossposted from world spirit sock puppet.
When I think of humans being so smart due to ‘cultural accumulation’, I think of lots of tiny innovations in thought and technology being made by different people, and added to the interpersonal currents of culture that wash into each person’s brain, leaving a twenty year old in 2020 much better intellectually equipped than a 90 year old who spent their whole life thinking in 1200 AD.
This morning I was chatting to my boyfriend about whether a person who went back in time (let’s say a thousand years) would be able to gather more social power than they can now in their own time. Some folk we know were discussing the claim that some humans would have a shot at literally taking over the world if sent back in time, and we found this implausible.
The most obvious differences between a 2020 person and a 1200 AD person, in 1200 AD, is that they have experience with incredible technological advances that the 1200 AD native doesn’t even know are possible. But a notable thing about a modern person is that they famously don’t know what a bicycle looks like, so the level of technology they might be able to actually rebuild on short notice in 1200 AD is probably not at the level of a nutcracker, and they probably already had those in 1200 AD.
How does 2020 have complicated technology, if most people don’t know how it works? One big part is specialization: across the world, quite a few people do know what bicycles look like. And more to the point, presumably some of them know in great detail what bicycle chains look like, and what they are made of, and what happens if you make them out of slightly different materials or in slightly different shapes, and how such things interact with the functioning of the bicycle.
But suppose the 2020 person who is sent back is a bicycle expert, and regularly builds their own at home. Can they introduce bikes to the world 600 years early? My tentative guess is yes, but not very ridable ones, because t
[SEQ RERUN] Where Recursive Justification Hits Bottom
Today's post, Where Recursive Justification Hits Bottom was originally published on 08 July 2008. A summary (taken from the LW wiki):
> Ultimately, when you reflect on how your mind operates, and consider questions like "why does occam's razor work?" and "why do I expect the future to be like the past?", you have no other option but to use your own mind. There is no way to jump to an ideal state of pure emptiness and evaluate these claims without using your existing mind.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Is Morality Given?, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
Discord Server Emoji: A Community Dialect
I — Background
Discord is a popular online chat platform that started as a tool for video game players to communicate, acting as a replacement for preexisting tools like Teamspeak. However, in a relatively short time, the combination of powerful moderation tools, short setup time, and simplicity of use established it as a popular way to provide a live environment for many kinds of communities. As of 2017 the platform was receiving roughly 1.5 million new users each week. An in-depth analysis of why Discord has seen such success as a home for online communities would be interesting, but it is outside the scope of this essay. Here we are specifically interested in the role of a later addition to Discord’s feature set: custom emojis.
Emojis have been a part of the digital landscape for several years, but rarely have they become part of the way people communicate to the degree found in Discord communities. In a quick test, I looked at 100 of the most recent messages on 3 different servers that I personally use, and an average of 23 either contained emojis or had reactions (a feature where users can “react” to a message with an emote). Part of this is likely due to the way Discord handles emoji through shortcodes, making it easy to place them while typing a message. However, I think the rapid text format of an online chat is particularly well suited to emoji: they allow users to convey emotions or tones without a large body of text, a necessary tool in a relatively fast-flowing back-and-forth conversation.
However, all of this is background to the key point of interest, server emoji. Discord allows the moderators of a server to add custom emoji to the server that users can take advantage of. Additionally, users who pay for the premium version of the service can use emoji from one server on any other server. The combination of these two features leads us to some of the interesting dynamics of Discord communication.
II — The custom emoji and its ha
understanding bureaucracy
Successful organizations maximize the formula shown below:
Meaningful Output / Resources Spent
* Meaningful output: meaningful output is any output aligned with the organization’s purpose. For example, Feed My Starving Children’s purpose is to deliver food to impoverished areas. A meaningful output for them is a successful food delivery.
* Resources spent: this is mainly just time and money.
As an organization, you have two levers to improve: increase meaningful output while using the same amount of resources, or maintain the same meaningful output while decreasing the amount of resources spent. When companies hire people, they're hoping that their meaningful output will increase far more than the increase in cost, and when companies conduct layoffs, they're hoping that their meaningful output will fall much less than their resources do.
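As a toy illustration of that ratio and the two levers (my own sketch with made-up numbers, not from the original post):

```python
def efficiency(meaningful_output: float, resources_spent: float) -> float:
    """Organizational efficiency: meaningful output per unit of resources spent."""
    return meaningful_output / resources_spent

baseline        = efficiency(meaningful_output=1000, resources_spent=500)  # 2.0
more_output     = efficiency(meaningful_output=1400, resources_spent=500)  # 2.8 (lever 1)
fewer_resources = efficiency(meaningful_output=1000, resources_spent=400)  # 2.5 (lever 2)
```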
Few things frustrate me more than bureaucratic organizations. They completely butcher the formula above. Within the mess of systems, committees, and processes, the production of meaningful output becomes impossible, and sustaining this complexity requires an immense amount of resources.
The worst part is that they’re extremely static and difficult to change. This is because:
1. Inertia - the size and complexity of bureaucracy create tremendous inertia behind the status quo. It takes an overwhelming amount of energy to both map it and change it in a positive way. Few people are willing to take the plunge, and fewer power brokers within the bureaucracy will support any change because it would threaten their position.
2. Illusory work - the effort required to survive within the bureaucracy leaves little room and energy to change it. Here’s an example from Google:
> Google has 175,000+ capable and well-compensated employees who get very little done quarter over quarter, year over year. Like mice, they are trapped in a maze of approvals, launch processes, legal reviews, performance reviews, exec reviews, documents, meetings, bug r
Best of Don’t Worry About the Vase
Epistemic Status: Welcome everyone!
In honor of being linked to by Marginal Revolution, here is what the rest of this blog has to offer.
This blog is part of the rationalist community. The general interest links below are fully general interest, and require no knowledge of or interest in rationality.
What is rationality? This post is one good answer. It is believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory, and using that map to achieve your values.
To me, a rationalist is someone who highly values, and invests in, this process and the art thereof, both in themselves and others.
This blog strives to embody that way of thinking. If you are interested in the way of thinking you saw in the guide, and want to see or explore more of it, this blog might be for you.
If you’re wondering why anyone would think this way, my best responses to that are Responses to Tyler Cowen on Rationality and Why Rationality?
If you’re really interested, you should try reading the sequences. You can get the Kindle version here.
The rest of this post organizes what this blog has produced over the years, starting with highlighting the best posts of general or economic interest.
If you’re interested in getting involved in the community, especially in New York City, leave a comment with information on how to reach you, preferably email.
Top 5 General Interest / For Marginal Revolution Readers:
Something Was Wrong
Against Facebook
The Thing and the Symbolic Representation of The Thing
On the Seattle Minimum Wage Study (part 1) [Part 2] [Part 3]
Play in Hard Mode
Next 5 General Interest:
On Cutting Wages
Play in Hard Mode [Play in Easy Mode]
In a world… of venture capital
Book Review: How Asia Works by Joe Studwell
Book Review: Weapons of Math Destruction
Against Facebook Sequence:
Against Facebook
Against Facebook: Comparison to Alternatives and Call to Action
Help Us Find Your Blog (and othe
LINK: Quora brainstorms strategies for containing AI risk
In case you haven't seen it yet, Quora hosted an interesting discussion of different strategies for containing / mitigating AI risk, boosted by a $500 prize for the best answer. It attracted sci-fi author David Brin, U. Michigan professor Igor Markov, and several people with PhDs in machine learning, neuroscience, or artificial intelligence. Most people from LessWrong will disagree with most of the answers, but I think the article is useful as a quick overview of the variety of opinions that ordinary smart people have about AI risk.
https://www.quora.com/What-constraints-to-AI-and-machine-learning-algorithms-are-needed-to-prevent-AI-from-becoming-a-dystopian-threat-to-humanity
Transforming Democracy: A Unique Funding Opportunity for US Federal Approval Voting
I'm excited to share a special opportunity to create systemic impact: a statewide approval voting ballot initiative in Missouri. This would affect all elections throughout the state, including federal and presidential elections. Approval voting favors consensus candidates and produces a more accurate representation of the public's support. This is critical if we want government to behave in our interests on policies that concern our well-being.
The organization leading this charge is Show Me Integrity, where I'm currently doing a fellowship and assisting with fundraising efforts. Show Me Integrity has successfully passed a ballot initiative before, showing their ability to succeed on this kind of scale. They also successfully ran the ballot initiative for approval voting in St. Louis.
Why is this important?
Approval voting is a method that allows voters to select as many candidates as they want; the candidate with the most votes still wins. Approval voting, an easy-to-implement system, can greatly improve on our current plurality-based approach to electing federal and state-level positions. If you've read my writing on this before, you've seen me make that case. And this is much more effective and lasting than putting money behind individual candidates. This opportunity may not come around again.
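To make the counting rule concrete, here is a minimal sketch of tallying approval ballots (my own illustration, not part of the original pitch):

```python
from collections import Counter

def approval_winner(ballots: list[set[str]]) -> str:
    """Each ballot is the set of candidates a voter approves of;
    the candidate approved on the most ballots wins."""
    tallies = Counter(candidate for ballot in ballots for candidate in ballot)
    return tallies.most_common(1)[0][0]

ballots = [
    {"Alice", "Bob"},    # this voter approves of both Alice and Bob
    {"Bob"},
    {"Alice", "Carol"},
    {"Bob", "Carol"},
]
print(approval_winner(ballots))  # Bob, approved on 3 of the 4 ballots
```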
The Impact
This initiative is not just about changing the voting method; it's about transforming how we elect individuals to government office, from local to federal positions, in the 19th largest state of over 6 million people. This includes influencing presidential electoral votes. This is the first statewide ballot initiative for approval voting, making it a pioneering effort with potentially far-reaching implications.
The Ask
We are currently in the signature-gathering phase, a crucial step that requires initial funding. The cost for signature gathering is around $4M, with an additional $9M needed later for campaign execution. Yes, it's expensive, but the potential impact justifies the investment.
Abou
Hashmarks: Privacy-Preserving Benchmarks for High-Stakes AI Evaluation
TL;DR: Quick "idea paper" describing a protocol for benchmarking AI capabilities in the open without disclosing sensitive information. It's similar to how passwords are used for user registration and authentication in a web app: experts first hash their answers, then developers hash the model's answers to check whether the hashes match, but at no point are the cleartext answers freely floating around. The paper explores the protocol's resilience against half a dozen failure modes, and speculates on future infrastructure for high-stakes evaluation.
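A minimal sketch of that flow (my own illustration, assuming a simple salted SHA-256 scheme; the paper's actual construction may differ, e.g. in how answers are normalized or how the hashing is hardened):

```python
import hashlib

def hash_answer(answer: str, salt: str) -> str:
    """Hash a normalized answer with a salt, much like storing a password."""
    normalized = answer.strip().lower()
    return hashlib.sha256((salt + normalized).encode("utf-8")).hexdigest()

SALT = "hashmark-demo"  # hypothetical per-benchmark salt

# Experts publish only the hashes of the reference answers...
published_hashes = {"q1": hash_answer("47", SALT)}

# ...and developers later hash the model's answer to check for a match,
# so the cleartext reference answers are never distributed.
def grade(question_id: str, model_answer: str) -> bool:
    return hash_answer(model_answer, SALT) == published_hashes[question_id]

print(grade("q1", " 47 "))  # True
```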
Extended summary as Twitter thread.
Context: Below you can find the raw contents of the paper. While all feedback is welcome, identifying unseen failure modes would be particularly helpful ahead of potentially circulating a couple proof-of-concept instances of hashmarks in the coming weeks.
----------------------------------------
Introduction
Background & Motivation
Traditional question-answering (QA) benchmarks have played a crucial role in shaping the trajectory of AI development. They provide standardized metrics that facilitate fair comparisons across research groups and measure progress within the field in a reproducible way. For instance, capabilities related to mathematics can generally be gauged by assessing model performance in producing or selecting correct answers for exam questions related to mathematics. Similar "AI exams" have been used to assess model performance on topics ranging from STEM to humanities. In fact, prior work has highlighted the possibility of framing the vast majority of established natural language processing tasks as question-answering tasks.
Traditional QA benchmarks are typically sourced from crowd-workers, members of the group developing the benchmark, or a mix of the two. They typically contain a large number of data points representing individual exercises. Each data point, in turn, contains one question and one or several correct answers. Occasionally, a data point may also