Understanding AI alignment research: A Systematic Analysis
Jan H. Kirchner* (kirchner.jan@icloud.com), Logan Smith* (logansmith5@gmail.com), Jacques Thibodeau* (thibo.jacques@gmail.com), Kyle McDonell (kyle@conjecture.dev), Laria Reynolds (laria@conjecture.dev)

*These authors contributed equally.
Abstract
AI alignment research is the field of study dedicated to ensuring that artificial intelligence
(AI) benefits humans. As machine intelligence gets more advanced, this research is
becoming increasingly important. Researchers in the field share ideas across different
media to speed up the exchange of information. However, this focus on speed means that
the research landscape is opaque, making it difficult for young researchers to enter the
field. In this project, we collected and analyzed existing AI alignment research. We found
that the field is growing quickly, with several subfields emerging in parallel. We looked
at the subfields and identified the prominent researchers, recurring topics, and different
modes of communication in each. Furthermore, we found that a classifier trained on
AI alignment research articles can detect relevant articles that we did not originally
include in the dataset. We are sharing the dataset with the research community and
hope to develop tools in the future that will help both established and early-career
researchers get more involved in the field.
Introduction
AI alignment research is a nascent field of
research concerned with developing machine intelligence in ways that achieve desirable outcomes and avoid adverse outcomes [1,2]. While the term alignment problem was originally proposed to denote the
problem of "pointing an AI in a direction" [3],
the term AI alignment research is now used
as an overarching term referring to the entire research field associated with this problem [2,4–9]. Associated lines of research include how to infer human values as revealed by preferences [10], how to prevent risks from learned optimization [11], and how to set up an appropriate structure of governance to facilitate coordination [12].
As machine intelligence becomes increasingly capable [13,14], AI alignment research
becomes increasingly important. There is a
risk that if machine intelligence is not carefully designed, it could have catastrophic
consequences for humanity [15–17] . For example, if machine intelligence is not designed to take human values into account,
it could make decisions that are harmful
to humans [15] . Alternatively, if machine intelligence is not designed to be transparent and understandable to humans, it could
make decisions that are opaque to humans
and difficult to understand or reverse [18] . As
machine intelligence rapidly becomes more
powerful [14], the stakes associated with the
AI alignment problem only grow. Consequently, the field receives considerable attention from philanthropic organizations seeking to increase the speed and scope of research [19,20].
One interesting feature of AI alignment research is how researchers communicate: to increase the speed and bandwidth of information exchange, novel insights and ideas are shared across various media. Beyond the traditional research article published as a preprint or
conference article, a substantial portion of
AI alignment research is communicated on
a curated community forum: the Alignment Forum [21] . Other channels of communication include formal and informal
talks [22], semi-publicly shared manuscripts
and notes [17,23], and informal exchanges via
instant messaging [24] .
The strong focus on increased speed and
bandwidth of communication comes at the
cost of a diffuse research landscape, making it difficult for newcomers to orient
themselves [25,26] . These difficulties are exacerbated by the short time the field has
existed and the resulting lack of unifying paradigms [27,28] . Previous attempts to
catalog and classify existing AI alignment
research [29–32] do not include all relevant
sources, are not kept up-to-date, and do
not provide easy access to the data in a
machine-readable format. Given the potential importance of AI alignment research and the attempts to increase the size
of the field [19,20], the lack of a coherent
overview of the research landscape represents a major bottleneck.
In this project, we collected and cataloged
AI alignment research literature and analyzed the resulting dataset in an unbiased
way to identify major research directions.
We found that the field is growing rapidly,
with several subfields emerging naturally
over time. By analyzing the emerging subfields, we can identify the prominent researchers working in the subfield, recurring topics and questions specific to each
subfield, and different modes of communication dominating each subfield. Finally,
training a classifier to distinguish AI alignment research from more general AI research can automatically detect relevant articles published too recently to be included
in our dataset. We make our dataset and the
analysis publicly available to interested researchers to enable further analysis and facilitate orientation to the field.
Results
To capture the current state of AI alignment research, we collected research articles from various sources (Tab. 1). Beyond the full-length manuscripts published on arXiv (N = 707), we also included shorter communications published on the Alignment Forum (N = 2,138), blogs, and personal websites (N = 1,326), publicly available, full-length books (N = 23), a popular AI alignment research newsletter with summaries of articles (N = 420), full-length manuscripts not published on arXiv (N = 372), transcripts of lectures and interviews (N = 494), and entries from public wikis (N = 582). To establish a baseline for our analysis, we also collected research articles from adjacent (N = 1,679) and unrelated (N = 1,000) areas of research, as well as shorter communications published on the LessWrong Forum (N = 28,259).
For details about our collection procedure, see the Methods section.
Rapid growth of AI alignment research from 2012 to 2022 across two platforms.
There was substantial heterogeneity in the
form and quality of articles in the dataset.
We decided to focus on articles published
on the Alignment Forum and as preprints
on the arXiv server (see Methods for arXiv
inclusion criteria). These sources contain
a large portion of the entire published AI
alignment research (Tab. 1) and are structured in a consistent form that allows automated analysis.
To quantify the field's growth, we visualized the number of articles published on either platform as a function of time. We found a rapid increase from 2017 (we note that the Alignment Forum itself was created in 2018 [34]) to 2022 (present), from fewer than 20 articles per year to over 400 (Fig. 1a). When calculating the number of articles published per researcher, we observed a long-tailed distribution, with most researchers publishing fewer than five articles and some publishing more than 60 (Fig. 1b). Finally, when comparing the number of researchers per article on the Alignment Forum and the arXiv, we noticed that articles on the Alignment Forum tend to be written by either a single author or a small team of fewer than five researchers (Fig. 1c; purple). In contrast, the distribution of authors on arXiv articles is long-tailed and includes articles with more than 60 authors [35–37] (Fig. 1c; green). This asymmetry partially results from the late introduction of the multiple-authors feature to the Alignment Forum (the feature did not become available to all users until 2019, and many users may still be unaware of it), but might also reflect the Alignment Forum's focus on speed of communication, which disincentivizes large collaborations [38]. Alternatively, the larger number of authors on arXiv articles might also reflect inflation of (unjustified) authorship on research articles [39,40].
Thus, AI alignment research is a rapidly
growing field, driven by many researchers
contributing individual articles and a few
publishing prolifically.
Unsupervised decomposition of AI alignment research into distinct clusters.
Given the collected AI alignment research
articles from the Alignment Forum and
arXiv, we were curious whether we could
use the text to understand the current state
of research. To this end, we used the Allen
SPECTER model [41] to compute a sentence
embedding, followed by a UMAP projection [42] to obtain a low-dimensional representation (Fig. 2a). While there is a tendency for articles from different sources to occupy different regions of the embedding, the transition between Alignment Forum and arXiv articles is smooth (Fig. 2b). Interestingly,
when visualizing the publication date, we
noticed that the embedding captures part of
the temporal evolution of the field (Fig. 2c).
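This pipeline can be sketched in a few lines of Python. The snippet below is a minimal illustration (not our exact analysis code), assuming the sentence-transformers and umap-learn packages; the `articles` list is a hypothetical stand-in for the collected dataset:

```python
# Minimal sketch of the embedding pipeline (not our exact analysis code).
# Assumes the sentence-transformers and umap-learn packages; the `articles`
# list is a hypothetical stand-in for the collected dataset.
from sentence_transformers import SentenceTransformer
import umap

articles = [
    {"title": "Risks from learned optimization", "abstract": "We study ..."},
    {"title": "Learning from human preferences", "abstract": "We explore ..."},
]

model = SentenceTransformer("allenai-specter")
# SPECTER expects title and abstract joined by the [SEP] token (see Methods).
texts = [a["title"] + "[SEP]" + a["abstract"] for a in articles]
embeddings = model.encode(texts)  # array of shape (n_articles, 768)

# Two-dimensional projection; we used n_neighbors=250 on the full dataset
# (n_neighbors must stay below the number of articles for a toy input).
projection = umap.UMAP(n_neighbors=250).fit_transform(embeddings)
```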
Due to the relative youth of the field, there is no universally-accepted decomposition of AI alignment research into subfields [22,28,43].
| Source | Domain | # of articles |
|---|---|---|
| Alignment Forum | alignmentforum.org | 2,138 |
| | lesswrong.com | 28,252 |
| arXiv | AI alignment research (level-0) | 707 |
| | AI research (level-1) | 1,679 |
| | arXiv.org/search/?query=quantum | 1,000 |
| | arXiv.org/list/cs.AI (filtered) | 4,621 |
| Books | (available upon request) | 23 |
| Blogs | aiimpacts.org | 227 |
| | aipulse.org | 23 |
| | aisafety.camp | 8 |
| | carado.moe | 59 |
| | cold-takes.com | 111 |
| | deepmindsafetyresearch.medium.com | 10 |
| | generative.ink | 17 |
| | gwern.net | 7 |
| | intelligence.org | 479 |
| | jsteinhardt.wordpress.com | 39 |
| | qualiacomputing.com | 278 |
| | vkrakovna.wordpress.com | 43 |
| | waitbutwhy.com | 2 |
| | yudkowsky.net | 23 |
| Newsletter | rohinshah.com/alignment-newsletter/ summaries | 420 |
| Reports | pdf-only articles | 323 |
| | distill.pub | 49 |
| Audio transcripts | youtube.com playlist 1 & 2 | 457 |
| | Assorted transcripts | 25 |
| | Interviews with AI researchers [33] | 12 |
| Wikis | arbital.com | 223 |
| | lesswrong.com (Concepts Portal) | 227 |
| | stampy.ai | 132 |
| Total | token count: 89,240,129; word count: 53,550,146; character count: 351,757,163 | |
Table 1: Different sources of text included in the dataset alongside the number
of articles per source. Color of row indicates that data was analyzed as AI alignment
research articles (green) or baseline (gray), or that the articles were added to the dataset
as a result of the analysis in Fig. 4 (purple). Definition of level-0 and level-1 articles in
Fig. 4c. For details about our collection procedure see the Methods section.
[Figure 1; inset in panel b lists the six researchers with more than 60 articles: S. Armstrong, S. Garrabrant, A. Demski, J. Wentworth, P. Christiano, S. Levine]
Figure 1: Alignment research across a community forum and a preprint server. ( a )
Number of articles published as a function of time on the Alignment Forum (AF; purple)
and the arXiv preprint server (arXiv; green). ( b ) Histogram of the number of articles
per researcher published on either AF or arXiv. Inset shows names of six researchers
with more than 60 articles. Note the logarithmic y-axis. ( c ) Histogram of the number
of researchers per article on AF (purple) and arXiv (green). Note the logarithmic y-axis.
[Figure 2; panel a schematic: title + abstract → SPECTER embedding (768 dim.) → UMAP embedding (2 dim.)]
Figure 2: Dimensionality reduction and unsupervised clustering of alignment
research. ( a ) Schematic of the embedding and dimensionality reduction. After
concatenating title and abstract of articles, we embed the resulting string with the
Allen SPECTER model [41], and then perform UMAP dimensionality reduction with
n_neighbors=250. ( b ) UMAP embedding of articles with color indicating the source
(AF, purple; arXiv, green). ( c ) UMAP embedding of articles with color indicating date
of publication. Arrows superimposed to indicate direction of temporal evolution. ( d )
UMAP embedding of articles with color indicating cluster membership as determined
with k-means (k=5). Inset shows sum of residuals as a function of clusters k, with an
arrow highlighting the chosen number of clusters.
To see if we can produce a useful, unbiased decomposition of the research landscape, we applied k-means clustering to the SPECTER embedding to obtain five distinct clusters (see Methods for details).
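The clustering step amounts to a few calls to scikit-learn. The sketch below (assuming the `embeddings` array from the pipeline sketched above) also shows how the sum of residuals in the Fig. 2d inset can be computed:

```python
# Sketch of the cluster decomposition: k-means on the SPECTER embeddings.
# `embeddings` is the (n_articles, 768) array from the embedding pipeline;
# the fixed random_state is an illustrative choice, not from the paper.
from sklearn.cluster import KMeans

# Sum of squared residuals (inertia) as a function of k, as in the Fig. 2d inset.
inertias = {k: KMeans(n_clusters=k, random_state=0).fit(embeddings).inertia_
            for k in range(2, 11)}

# We settled on k=5; `labels` assigns each article to one of five clusters.
labels = KMeans(n_clusters=5, random_state=0).fit_predict(embeddings)
```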
In summary, combined semantic embedding and dimensionality reduction produce
a compact visualization of AI alignment research.
Research dynamics vary across the identified clusters.
Having identified five distinct research
clusters, we asked ourselves if we could
find natural descriptions of research topics
and prominent researchers. Therefore, we
inspected which researchers tend to publish the highest number of articles in each
cluster (Tab. 2). Even though the names
of researchers did not enter into the Allen
SPECTER sentence embedding (Fig. 2a),
we observed that different researchers tend
to dominate different research clusters. The
distribution of researchers across clusters
led us to assign putative labels to the clusters (Fig. 3a):
Cluster one: Agent alignment is concerned with the problem of aligning agentic systems, i.e. those where an AI performs actions in an environment and is typically trained via reinforcement learning.

Cluster two: Alignment foundations is concerned with deconfusion research, i.e. the task of establishing formal and robust conceptual foundations for current and future AI alignment research.

Cluster three: Tool alignment is concerned with the problem of aligning non-agentic (tool) systems, i.e. those where an AI transforms a given input into an output. The current, prototypical example of tool AIs is the "large language model" [35,44].

Cluster four: AI governance is concerned with how humanity can best navigate the transition to advanced AI systems. This includes focusing on the political, economic, military, governance, and ethical dimensions [12].

Cluster five: Value alignment is concerned with understanding and extracting human preferences and designing methods that stop AI systems from acting against these preferences.
To corroborate these putative labels, we
computed a word cloud representation of
the articles (Sup. Fig. 1). We found the recurring words specific to each cluster to be
in good agreement with the labels. We also
note that our labels are consistent with our
observation that alignment foundations research is the historical origin of AI alignment research (Fig. 2c, Fig. 3b,c). Furthermore, we observe that theoretical research
(alignment foundations, value alignment,
AI governance) tends to be published on
the Alignment Forum. In contrast, applied
research (agent alignment, tool alignment)
tends to be published on arXiv (Fig. 2b,
Fig. 3d). Finally, we note that in the
alignment foundations cluster, a few individual researchers tend to produce a disproportionate number of research articles
(Fig. 3e).
In combination, these arguments make us hopeful that our unsupervised decomposition of AI alignment research mirrors relevant structures existing in the field. We hope to leverage the decomposition to provide researchers structured access to the existing literature in future work.
| cluster 1; N=567 (agent alignment) | cluster 2; N=988 (alignment foundations) | cluster 3; N=593 (tool alignment) | cluster 4; N=383 (AI governance) | cluster 5; N=670 (value alignment) |
|---|---|---|---|---|
| S. Levine (55) | S. Armstrong (154) | J. Steinhardt (20) | D. Kokotajlo (21) | S. Armstrong (54) |
| P. Abbeel (34) | S. Garrabrant (95) | D. Hendrycks (17) | A. Dafoe (19) | S. Byrnes (32) |
| A. Dragan (29) | A. Demski (94) | E. Hubinger (14) | G. Worley III (11) | P. Christiano (29) |
| S. Russell (23) | J. Wentworth (57) | P. Christiano (13) | J. Clark (10) | R. Ngo (25) |
| S. Armstrong (22) | "Diffractor" (44) | P. Kohli (11) | S. Armstrong (9) | R. Shah (25) |

Table 2: Researchers with the highest number of articles per cluster. Clusters as determined in Fig. 2, with the number of articles N per cluster. The number in brackets behind a researcher's name indicates the number of articles published by that researcher. Note: "Diffractor" is an undisclosed pseudonym.
[Figure 3]
Figure 3: Characteristics of research clusters corroborate potential usefulness of
decomposition. ( a ) UMAP embedding of articles with color indicating cluster membership as in Fig. 2d. Labels assigned to each cluster are putative descriptions of a common
research focus across articles in the cluster. ( b ) Number of articles published per year, colored by cluster membership. ( c ) Fraction of articles published by cluster membership as a
function of time. ( d ) Fraction of articles from AF or arXiv as a function of cluster membership. ( e ) GINI inequality coefficient of articles per researcher as a function of article
cluster membership.
Leveraging the dataset to train an AI alignment research classifier.
When quantifying the number of articles across different sources, we noticed a dramatic drop-off in articles published on the arXiv after 2019 (Fig. 1a). Especially in contrast with the continued strong increase in articles published on the Alignment Forum, we suspected that our data collection might have missed some recent, relevant work. (In particular, we manually extended an existing collection of arXiv articles from 2020 [31]; see the Methods section for details.)
To automatically detect articles published
more recently, we decided to train a logistic regression classifier on the semantic embeddings of arXiv articles. Besides
the AI alignment research articles already
included in our dataset ("arXiv level-0";
Fig. 4a green), we also collected all arXiv
articles cited by level-0 articles, which
were not level-0 articles themselves ("arXiv
level-1"; Fig. 4a blue). We trained the classifier on a training set (80%) to distinguish
level-0 from level-1 articles and evaluated
performance on a separate test set (20%).
The classifier achieved good performance (AUC = 0.75; Fig. 4b inset), reliably rejecting level-1 articles and correctly identifying a large portion of level-0 articles
(Fig. 4b). To test whether the classifier generalizes robustly beyond AI research, we tested it on 1000 recently published articles on quantum physics and on articles from the Alignment Forum. We found that the classifier reliably rejects quantum physics articles and accepts Alignment Forum articles (Fig. 4b,d).
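A minimal sketch of this training and evaluation loop, assuming hypothetical precomputed embedding arrays `X0` (level-0) and `X1` (level-1), could look as follows:

```python
# Sketch of the classifier: logistic regression on semantic embeddings,
# distinguishing level-0 (alignment) from level-1 (cited-by) arXiv articles.
# X0 and X1 are hypothetical precomputed embedding arrays for the two sets.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X = np.vstack([X0, X1])
y = np.concatenate([np.ones(len(X0)), np.zeros(len(X1))])  # 1 = level-0

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)  # 80%/20% train-test split
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

scores = clf.predict_proba(X_test)[:, 1]  # score between 0 and 1 per article
print("AUC:", roc_auc_score(y_test, scores))  # we obtained AUC = 0.75
```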
Most AI alignment research articles on the
arXiv are published in the cs.AI section.
Therefore we used the arXiv API [45] to collect all articles from that section (Fig. 4c).
When applying our classifier to the semantic embeddings of the cs.AI articles,
we observed a slightly bimodal distribution
with most articles receiving a score close
to 0%, and some articles receiving a score
close to 100% (Fig. 4d). Motivated by the distribution of scores of Alignment Forum articles and by individual inspection, we chose a threshold of 75% and considered articles above that threshold AI alignment research-relevant, adding them to our dataset. As anticipated, we found that the
number of AI alignment-relevant arXiv articles increases as rapidly over time as the
articles published on the Alignment Forum
(Fig. 4e). Finally, to verify that the addition of AI alignment-relevant arXiv articles
ality reduction on the updated dataset. We found that the cluster structure is not disrupted (Fig. 4f).
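The filtering procedure itself is straightforward; the sketch below queries the public arXiv API for cs.AI articles and applies the classifier and the 75% cutoff. It uses the feedparser package (an assumption; any Atom parser works) and omits pagination and rate limiting:

```python
# Sketch of the filtering step: fetch cs.AI articles via the arXiv API [45]
# and keep those scoring above the 75% cutoff. `model` and `clf` are the
# embedding model and classifier from the sketches above; feedparser is an
# assumption, and pagination is omitted.
import feedparser

url = ("http://export.arxiv.org/api/query?"
       "search_query=cat:cs.AI&start=0&max_results=100")
feed = feedparser.parse(url)

texts = [e.title + "[SEP]" + e.summary for e in feed.entries]
scores = clf.predict_proba(model.encode(texts))[:, 1]

relevant = [e for e, s in zip(feed.entries, scores) if s > 0.75]
```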
In conclusion, our analysis demonstrates
that semantic embedding can capture relevant characteristics of AI alignment research and that automatic filtering of new
publications might be feasible.
Discussion
The field of AI alignment research is growing quickly, with many researchers publishing articles on diverse topics. We found that semantic embedding and dimensionality reduction can produce a compact visualization of AI alignment research.
[Figure 4]
Figure 4: An AI alignment research classifier for filtering new publications. ( a )
Top: Illustration of arXiv level-0 articles (alignment research; green) and level-1 articles (cited by alignment research articles; blue). Bottom: Schematic of the train-test split (80%–20%) for training of a logistic regression classifier. ( b ) Fraction of articles as a function
of classifier score for arXiv level-0 (green), level-1 (blue), and arXiv articles on quantum
physics (grey). ( c ) Illustration of procedure for filtering arXiv articles. After querying
articles from the cs.AI section of arXiv, the logistic regression classifier assigns a score
between 0 and 1. ( d ) Fraction of articles as a function of classifier score for articles from
the cs.AI section of arXiv (grey) and AF (purple). Dashed line indicates cutoff for classifying articles as arXiv level-0 (75%). ( e ) Number of articles published as a function of
time on AF (purple) and arXiv (green), according to the cutoff in panel d . ( f ) Left inset:
Original UMAP embedding from Fig. 2d. Right: UMAP embedding of all original articles and updated arXiv articles with color indicating cluster membership as in Fig. 2d
or that the article is filtered from the arXiv (gray).
This decomposition of AI alignment research mirrors known structures in the
field, demonstrating that semantic embedding can capture relevant characteristics of
AI alignment research. Furthermore, we demonstrate that automatically detecting new publications relevant to AI alignment research may be feasible. In the future, we hope that our decomposition can
provide researchers with structured access
to the existing literature.
Tools for alignment researchers. Our research suggests several promising applications for improving the research landscape in AI alignment. We have begun to explore
this potential by developing several prototypes that use the collected dataset to
interactively explore semantic embeddings
(Sup. Fig. 2), to provide summaries of long
articles (Sup. Fig. 3), or to search and compare articles (Sup. Fig. 4). Thanks to the
focus on speed and the openness to innovation of the AI alignment research community, we believe that tools tailored to
this community might reach broad adoption and help accelerate research efforts.
Paradigmatic AI alignment research. In
the language of Thomas Kuhn [27], the successive transition from one paradigm to another via revolution is the usual developmental pattern of mature science. Some
researchers argue that AI alignment research is pre-paradigmatic, meaning that it
has not yet converged on a single, dominant paradigm or approach. While our
research demonstrates that decomposition
of AI alignment research into meaningful
subfields is possible, we note that the choice
of the number of subfields has a subjective
component (Fig. 2d). Furthermore, the semantic similarity between articles in a cluster does not imply similarity in methodology or underlying research agenda. However, we do not believe that this implies the
impossibility of progress. In fact, the current exploratory nature of AI alignment research might be a strength, as exploration
helps to avoid ossification.
Limitations. Especially due to the rapid expansion of the field (Fig. 1), classifications and descriptions of the state of the art might become inaccurate soon after
publication. While the observation that
our clustering remains stable after including many articles not used for the original
clustering (Fig. 4) makes us hopeful, we still
plan to carefully monitor the field and publish regular updates to our analysis.
The decision to focus on the two largest,
non-redundant sources of articles (Alignment Forum and arXiv) might systematically exclude certain lines of research and
thus bias our analysis. However, as a substantial fraction of blog posts, reports, and the alignment newsletter tend to be cross-posted or announced on the Alignment Forum, we think a strong bias is unlikely.
In summary, by collecting a comprehensive dataset of published AI alignment research literature, we demonstrate rapid
growth of the field over the last five years
and identify emerging directions of research through unbiased clustering.
Methods
Data collection and inclusion criteria.
Alignment Forum & LessWrong: We
extracted all posts on the forum viewer
website GreaterWrong.com on March 21st, 2022 (dataset used for the analysis in this article) and June 4th, 2022 (dataset published). We excluded articles with the tag
"event", which are published for coordinating meetups.
arXiv: We extended an existing collection of AI alignment research arXiv articles [31] from 2020 with relevant publications published since then ("arXiv level-0"). We started with an existing bibliography of alignment literature [31] and augmented that collection with two other bibliographies [46,47], articles mentioned in the alignment newsletter, and articles we identified ourselves. We excluded articles that were not about AI alignment research.
Books: We converted ebooks into plain
text files with pandoc. No text was excluded.
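As an illustration, the conversion can be driven from Python; the file names below are placeholders, and pandoc itself must be installed separately:

```python
# Sketch of the ebook conversion: call pandoc to turn an epub into plain text.
# File names are illustrative.
import subprocess

subprocess.run(["pandoc", "book.epub", "-t", "plain", "-o", "book.txt"],
               check=True)
```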
Blogs: We extracted individual articles
from AI alignment research-relevant (as
determined by the authors) blogs with the
requests and the BeautifulSoup packages. No text was excluded.
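A minimal sketch of this extraction follows; the URL is one of the blogs in Tab. 1, and the selectors are illustrative, since each blog's HTML layout required its own extraction rules:

```python
# Sketch of the blog extraction with requests and BeautifulSoup.
# The URL is one of the blogs in Tab. 1; selectors are illustrative.
import requests
from bs4 import BeautifulSoup

html = requests.get("https://vkrakovna.wordpress.com/ai-safety-resources/").text
soup = BeautifulSoup(html, "html.parser")

title = soup.find("h1").get_text(strip=True)
paragraphs = [p.get_text(" ", strip=True) for p in soup.find_all("p")]
article_text = "\n\n".join(paragraphs)
```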
Newsletter: We extracted summaries
from the publicly available list of summaries and matched them with the respective original articles.
Reports: We extracted additional published articles that were only available as pdf files by converting these files with grobid and cleaning the resulting output.
No text was excluded.
Audio transcripts: We were able to locate some transcripts of interviews available online. For the rest, we used a voice-to-text service (otter.ai) to extract transcripts from AI alignment research-relevant (as determined by the authors) recordings. We hired contractors to clean the resulting transcripts, correcting formatting problems and spelling mistakes. After cleaning, no text was excluded.
Wikis: We extracted articles from three open wikis on AI alignment research (arbital.com, lesswrong.com's Concepts Portal, and stampy.ai) through the export option on the website.
Data analysis. We performed the dataset
collection with Python 3.7 on commodity
hardware and Google Colab and all data
analysis with Python 3.7 in Google Colab.
We created plots with the seaborn package [48] and post-processed them in Adobe Illustrator.
Semantic embedding. We used the Allen SPECTER model [41] through the huggingface sentence-transformers library [49] to embed articles into a 768-dimensional vector space. The SPECTER model requires each article formatted as <Title> + <SEP> + <Abstract>, where <SEP> is the separator token of the tokenizer. For articles from the arXiv, we used the author-submitted abstract as the <Abstract>. As articles from the Alignment Forum do not always have an author-submitted abstract, we instead used the first 2-5 paragraphs of the article as the <Abstract>.
Dimensionality reduction. To compute a two-dimensional representation of the semantic embedding, we used the Python UMAP package [50] with a neighborhood parameter of n_neighbors=250. Using a smaller or larger neighborhood did not affect the results, but at very small neighborhood values (n_neighbors < 40) the embedding became unstable.
Unsupervised clustering. While we explored different clustering algorithms, we eventually converged on the k-means implementation of the scikit-learn package [51], which is straightforward to interpret while producing robust clustering across multiple instantiations.
Statistics. All statistics were computed with the seaborn package [48] in Python, with the exception of the GINI coefficient in Fig. 3, which we computed as half of the relative mean absolute difference [52],

$$G = \frac{\sum_{i=1}^{n} \sum_{j=1}^{n} |x_i - x_j|}{2 n^2 \bar{x}},$$

where $x_i$ is the number of articles of researcher $i$ and $\bar{x}$ is the average number of articles across all researchers.
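For concreteness, a direct implementation of this formula (a sketch; our analysis code may differ in detail):

```python
# Sketch of the GINI computation for Fig. 3e: half of the relative mean
# absolute difference of the articles-per-researcher counts.
import numpy as np

def gini(x):
    x = np.asarray(x, dtype=float)  # x[i]: number of articles of researcher i
    n = len(x)
    mad = np.abs(x[:, None] - x[None, :]).sum()  # sum over i,j of |x_i - x_j|
    return mad / (2 * n**2 * x.mean())

print(gini([1, 1, 2, 5, 60]))  # more unequal counts push the value toward 1
```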
Logistic regression classifier. To train the AI alignment research classifier, we used the LogisticRegression model of the scikit-learn package in Python [51] with an increased number of maximum iterations, max_iter=1000. For training, we used 80% of level-0 and level-1 arXiv papers. For evaluation in Fig. 4b, we used the remaining 20% of level-0 and level-1 arXiv papers as well as 1000 arbitrarily chosen articles on quantum physics. For the analysis in Fig. 4c-f, we used the arXiv API [45] to collect all articles published in the cs.AI section since its inception.
<strong>Code</strong> <strong>and</strong> <strong>data</strong> <strong>availability.</strong> The
dataset and all code for collecting
the dataset is available on Github,
<a href="https://github.com/moirage/alignment-research-dataset.git" rel="nofollow">https://github.com/moirage/alignment-</a>
<a href="https://github.com/moirage/alignment-research-dataset.git" rel="nofollow">research-dataset.git.</a> Code for the data
analysis is available upon request.</p>
Acknowledgments

JK and LR were supported by funding from the Longterm Future Fund. We thank Daniel Clothiaux for help with writing the code and extracting articles. We thank Remmelt Ellen, Adam Shimi, and Arush Tagade for feedback on the research. We thank Chu Chen, Ömer Faruk Şen, Hey, Nihal Mohan Moodbidri, and Trinity Smith for cleaning the audio transcripts.
References

1. Yudkowsky, E. The AI alignment problem: why it is hard, and where to start. Symbolic Systems Distinguished Speaker (2016).
2. Christian, B. The alignment problem: Machine learning and human values (WW Norton & Company, 2020).
3. Yudkowsky, E. The Rocket Alignment Problem (2018).
4. Russell, S. in Human-Like Machine Intelligence 3–23 (Oxford University Press, 2021).
5. Gabriel, I. Artificial intelligence, values, and alignment. Minds and Machines 30, 411–437 (2020).
6. Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., et al. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155 (2022).
7. Kenton, Z., Everitt, T., Weidinger, L., Gabriel, I., Mikulik, V. & Irving, G. Alignment of language agents. arXiv preprint arXiv:2103.14659 (2021).
8. Dafoe, A., Bachrach, Y., Hadfield, G., Horvitz, E., Larson, K. & Graepel, T. Cooperative AI: machines must learn to find common ground (2021).
9. Askell, A., Bai, Y., Chen, A., Drain, D., Ganguli, D., Henighan, T., Jones, A., Joseph, N., Mann, B., DasSarma, N., et al. A General Language Assistant as a Laboratory for Alignment. arXiv preprint arXiv:2112.00861 (2021).
10. Christiano, P. F., Leike, J., Brown, T., Martic, M., Legg, S. & Amodei, D. Deep reinforcement learning from human preferences. Advances in Neural Information Processing Systems 30 (2017).
11. Hubinger, E., van Merwijk, C., Mikulik, V., Skalse, J. & Garrabrant, S. Risks from learned optimization in advanced machine learning systems. arXiv preprint arXiv:1906.01820 (2019).
12. Dafoe, A. AI governance: a research agenda. Governance of AI Program, Future of Humanity Institute, University of Oxford: Oxford, UK 1442, 1443 (2018).
13. Grace, K., Salvatier, J., Dafoe, A., Zhang, B. & Evans, O. When will AI exceed human performance? Evidence from AI experts. Journal of Artificial Intelligence Research 62, 729–754 (2018).
14. Sevilla, J., Heim, L., Ho, A., Besiroglu, T., Hobbhahn, M. & Villalobos, P. Compute trends across three eras of machine learning. arXiv preprint arXiv:2202.05924 (2022).
15. Bostrom, N. Superintelligence (Dunod, 2017).
16. Ord, T. The precipice: Existential risk and the future of humanity (Hachette Books, 2020).
17. Carlsmith, J. Is Power-Seeking AI an Existential Risk? (2021).
18. Christiano, P. What failure looks like. Alignment Forum (2019).
19. Beckstead, N. & Muehlhauser, L. Potential Risks from Advanced Artificial Intelligence. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence (2022).
20. Foundation, F. Potential Risks from Advanced Artificial Intelligence. https://ftxfuturefund.org/ (2022).
21. Infrastructure, L. Alignment Forum. https://www.alignmentforum.org/ (2022).
22. Christiano, P. Current work in AI alignment. https://forum.effectivealtruism.org/posts/63stBTw3WAW6k45dY/paul-christiano-current-work-in-ai-alignment (2022).
23. Cotra, A. Draft report on AI timelines. https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines (2022).
24. Institute, M. I. R. Late 2021 MIRI Conversations. https://intelligence.org/late-2021-miri-conversations/ (2022).
25. Hyvärinen, A.-M. How I failed to form views on AI safety. Effective Altruism Forum (2022).
26. Wentworth, J. S. How To Get Into Independent Research On Alignment/Agency. Alignment Forum (2021).
27. Kuhn, T. S. The structure of scientific revolutions (University of Chicago Press, 1970).
28. Shimi, A. Epistemological Framing for AI Alignment Research. Alignment Forum (2021).
29. Miles, R. Stampy's Wiki. https://stampy.ai/wiki/Stampy%27s_Wiki (2022).
30. Shah, R. Alignment Newsletter. https://rohinshah.com/alignment-newsletter/ (2022).
31. Riedel, J. & Deibel, A. AI Safety Papers. https://ai-safety-papers.quantifieduncertainty.org/ (2022).
32. Ought. Elicit: The AI research assistant. https://elicit.org (2022).
33. Gates, V. Transcripts of interviews with AI researchers. https://www.lesswrong.com/posts/LfHWhcfK92qh2nwku/transcripts-of-interviews-with-ai-researchers (2022).
34. Arnold, R. Announcing AlignmentForum.org Beta. https://www.lesswrong.com/posts/JiMAMNAb55Qq24nES/announcing-alignmentforum-org-beta (2022).
35. Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., Bernstein, M. S., Bohg, J., Bosselut, A., Brunskill, E., et al. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258 (2021).
36. Brundage, M., Avin, S., Wang, J., Belfield, H., Krueger, G., Hadfield, G., Khlaaf, H., Yang, J., Toner, H., Fong, R., et al. Toward trustworthy AI development: mechanisms for supporting verifiable claims. arXiv preprint arXiv:2004.07213 (2020).
37. Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H. P. d. O., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman, G., et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374 (2021).
38. Moshontz, H., Ebersole, C. R., Weston, S. J. & Klein, R. A. A guide for many authors: Writing manuscripts in large collaborations. Social and Personality Psychology Compass 15, e12590 (2021).
39. Põder, E. Let's correct that small mistake. Journal of the American Society for Information Science and Technology 61, 2593–2594 (2010).
40. Lozano, G. A. The elephant in the room: multi-authorship and the assessment of individual researchers. Current Science 105, 443–445 (2013).
41. Cohan, A., Feldman, S., Beltagy, I., Downey, D. & Weld, D. S. SPECTER: Document-level representation learning using citation-informed transformers. arXiv preprint arXiv:2004.07180 (2020).
42. McInnes, L., Healy, J. & Melville, J. UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018).
43. Critch, A. Some AI research areas and their relevance to existential safety. https://www.alignmentforum.org/posts/hvGoYXi2kgnS3vxqb/some-ai-research-areas-and-their-relevance-to-existential-1 (2022).
44. Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P.-S., Cheng, M., Glaese, M., Balle, B., Kasirzadeh, A., et al. Ethical and social risks of harm from Language Models. arXiv preprint arXiv:2112.04359 (2021).
45. University, C. arXiv API. https://arxiv.org/help/api/ (2022).
46. Krakovna, V. AI safety resources. https://vkrakovna.wordpress.com/ai-safety-resources/ (2022).
47. Larks. 2021 AI Alignment Literature Review and Charity Comparison. https://www.alignmentforum.org/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison (2022).
48. Waskom, M. L. seaborn: statistical data visualization. Journal of Open Source Software 6, 3021 (2021).
49. Reimers, N. & Gurevych, I. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (Association for Computational Linguistics, 2019).
50. Sainburg, T., McInnes, L. & Gentner, T. Q. Parametric UMAP Embeddings for Representation and Semi-supervised Learning. Neural Computation 33, 2881–2907 (2021).
51. Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M. & Duchesnay, E. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research 12, 2825–2830 (2011).
52. Wikipedia contributors. Gini coefficient. Wikipedia, The Free Encyclopedia (online; accessed 27-May-2022).
53. Wang, B. Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX. https://github.com/kingoflolz/mesh-transformer-jax (2021).
54. Wang, B. & Komatsuzaki, A. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. https://github.com/kingoflolz/mesh-transformer-jax (2021).
Appendix
[Supplementary Figure 1 word clouds; panel titles: cluster 1 - reinforcement learning, cluster 2 - agent foundations, cluster 3 - language modeling, cluster 4 - AI governance, cluster 5 - value learning]
<p>Supplementary Figure 1: <strong>Word frequency visualization for different clusters.</strong> Wordcloud representation (word_cloud package in Python) of the most commonly used words
in articles of the five identified clusters in Fig. 2. The following words occurred in all
clusters very often and were thus removed from the wordcloud: "will", "post", "problem",
"example", "one", "SEP", "AI", "agent", "human", "model", and "models".</p>
Supplementary Figure 2: Interactive embedding of AI alignment literature. An interactive plot (plotly.com) of a UMAP projection of AI alignment research that displays the title of a selected article.
Supplementary Figure 3: Summarization tool. An early prototype of a summarization service for AI alignment research articles. We finetuned a 6B GPT-J language model [53,54] on the collected dataset and designed a prompt that produces a short summary of a provided AI alignment research article.
Supplementary Figure 4: Prototype of semantic search engine. After entering the URL of an Alignment Forum post (top left), the article is extracted (bottom left) and embedded with the Allen SPECTER model [41]. The resulting embedding is compared with all stored embeddings via a vector database search service (Pinecone.io) to retrieve similar articles (middle column). By clicking the "Explain" button on a search result, a query containing the abstract of the original article and the search result is sent to the OpenAI API to generate an analysis of similarities and differences (right column).
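The retrieval step reduces to a nearest-neighbor search in embedding space; the sketch below uses brute-force cosine similarity with numpy as a stand-in for the vector database service:

```python
# Sketch of the similarity search behind the prototype: brute-force cosine
# similarity over stored SPECTER embeddings, standing in for the vector
# database service (Pinecone.io) used by the actual tool.
import numpy as np

def most_similar(query_embedding, embeddings, top_k=5):
    q = query_embedding / np.linalg.norm(query_embedding)
    m = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = m @ q  # cosine similarity of the query against every article
    return np.argsort(sims)[::-1][:top_k]  # indices of the top_k matches
```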