**Understanding AI alignment research: A Systematic Analysis**

Jan H. Kirchner† (kirchner.jan@icloud.com), Logan Smith† (logansmith5@gmail.com), Jacques Thibodeau† (thibo.jacques@gmail.com), Kyle McDonell (kyle@conjecture.dev), Laria Reynolds (laria@conjecture.dev)
**Abstract**

AI alignment research is the field of study dedicated to ensuring that artificial intelligence (AI) benefits humans. As machine intelligence gets more advanced, this research is becoming increasingly important. Researchers in the field share ideas across different media to speed up the exchange of information. However, this focus on speed means that the research landscape is opaque, making it difficult for young researchers to enter the field. In this project, we collected and analyzed existing AI alignment research. We found that the field is growing quickly, with several subfields emerging in parallel. We looked at the subfields and identified the prominent researchers, recurring topics, and different modes of communication in each. Furthermore, we found that a classifier trained on AI alignment research articles can detect relevant articles that we did not originally include in the dataset. We are sharing the dataset with the research community and hope to develop tools in the future that will help both established researchers and young researchers get more involved in the field.
**Introduction**

_AI alignment research_ is a nascent field of research concerned with developing machine intelligence in ways that achieve desirable outcomes and avoid adverse outcomes [1,2]. While the term _alignment problem_ was originally proposed to denote the problem of "pointing an AI in a direction" [3], the term _AI alignment research_ is now used as an overarching term referring to the entire research field associated with this problem [2,4-9]. Associated lines of research include how to infer human values as revealed by preferences [10], how to prevent risks from learned optimization [11], and how to set up an appropriate structure of governance to facilitate coordination [12].

†These authors contributed equally.
As machine intelligence becomes increasingly capable [13,14], AI alignment research becomes increasingly important. There is a risk that if machine intelligence is not carefully designed, it could have catastrophic consequences for humanity [15-17]. For example, if machine intelligence is not designed to take human values into account, it could make decisions that are harmful to humans [15]. Alternatively, if machine intelligence is not designed to be transparent and understandable to humans, it could make decisions that are opaque to humans and difficult to understand or reverse [18]. As machine intelligence rapidly becomes more powerful [14], the stakes associated with the AI alignment problem only grow. Consequently, the field receives considerable attention from philanthropic organizations seeking to increase the speed and scope of research [19,20].
One interesting feature of AI alignment research is how the researchers communicate: to increase the speed and bandwidth of information exchange, novel insights and ideas are exchanged across various media. Beyond the traditional research article published as a preprint or conference article, a substantial portion of AI alignment research is communicated on a curated community forum: the Alignment Forum [21]. Other channels of communication include formal and informal talks [22], semi-publicly shared manuscripts and notes [17,23], and informal exchanges via instant messaging [24].
The strong focus on increased speed and bandwidth of communication comes at the cost of a diffuse research landscape, making it difficult for newcomers to orient themselves [25,26]. These difficulties are exacerbated by the short time the field has existed and the resulting lack of unifying paradigms [27,28]. Previous attempts to catalog and classify existing AI alignment research [29-32] do not include all relevant sources, are not kept up-to-date, and do not provide easy access to the data in a machine-readable format. Given the potential importance of AI alignment research and the attempts to increase the size of the field [19,20], the lack of a coherent overview of the research landscape represents a major bottleneck.
In this project, we collected and cataloged AI alignment research literature and analyzed the resulting dataset in an unbiased way to identify major research directions. We found that the field is growing rapidly, with several subfields emerging naturally over time. By analyzing the emerging subfields, we can identify the prominent researchers working in each subfield, recurring topics and questions specific to each subfield, and the modes of communication dominating each subfield. Finally, training a classifier to distinguish AI alignment research from more general AI research allows us to automatically detect relevant articles published too recently to be included in our dataset. We make our dataset and analysis publicly available to interested researchers to enable further analysis and facilitate orientation to the field.
**Results**

To capture the current state of AI alignment research, we collected research articles from various sources (Tab. 1). Beyond the full-length manuscripts published on arXiv ( _n_ = 707), we also included shorter communications published on the Alignment Forum ( _n_ = 2,138), blogs and personal websites ( _n_ = 1,326), publicly available, full-length books ( _n_ = 23), a popular AI alignment research newsletter with summaries of articles ( _n_ = 420), full-length manuscripts not published on arXiv ( _n_ = 372), transcripts of lectures and interviews ( _n_ = 494), and entries from public wikis ( _n_ = 582). To establish a baseline for our analysis, we also collected research articles from adjacent ( _n_ = 1,679) and unrelated ( _n_ = 1,000) areas of research, as well as shorter communications published on the LessWrong Forum ( _n_ = 28,259). For details about our collection procedure, see the Methods section.
**Rapid growth of AI alignment research from 2012 to 2022 across two platforms.**

There was substantial heterogeneity in the form and quality of articles in the dataset. We decided to focus on articles published on the Alignment Forum and as preprints on the arXiv server (see Methods for arXiv inclusion criteria). These sources contain a large portion of the entire published AI alignment research (Tab. 1) and are structured in a consistent form that allows automated analysis.
To quantify the field's growth, we visualized the number of articles published on either platform as a function of time. We found a rapid increase from 2017³ to 2022 (present), from fewer than 20 articles per year to over 400 (Fig. 1a). When calculating the number of articles published per researcher, we observed a long-tailed distribution, with most researchers publishing fewer than five articles and some publishing more than 60 (Fig. 1b). Finally, when comparing the number of researchers per article on the Alignment Forum and the arXiv, we noticed that articles on the Alignment Forum tend to be written by either a single author or a small team of fewer than five researchers (Fig. 1c; purple). In contrast, the distribution of authors on arXiv articles is long-tailed and includes articles with more than 60 authors [35-37] (Fig. 1c; green). This asymmetry partially results from the late introduction of the multiple-authors feature to the Alignment Forum¹, but might also reflect the Alignment Forum's focus on speed of communication, which disincentivizes large collaborations [38]. Alternatively, the larger number of authors on arXiv articles might also reflect inflation of (unjustified) authorship on research articles [39,40].

³We note that the Alignment Forum was created in 2018 [34].

Thus, AI alignment research is a rapidly growing field, driven by many researchers contributing individual articles and a few publishing prolifically.
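The per-researcher and per-article counts described above come down to simple frequency tabulation. The sketch below illustrates the idea on a small hypothetical author list (the names and numbers are illustrative, not taken from the dataset):

```python
from collections import Counter

# Hypothetical articles: each entry is the author list of one article.
articles = [
    ["A. Researcher"],
    ["A. Researcher", "B. Coauthor"],
    ["B. Coauthor", "C. Writer", "D. Helper"],
    ["A. Researcher"],
]

# Articles per researcher (input for a Fig. 1b-style histogram).
per_researcher = Counter(
    author for authors in articles for author in authors
)

# Researchers per article (input for a Fig. 1c-style histogram).
per_article = [len(authors) for authors in articles]

print(per_researcher["A. Researcher"])  # 3
print(per_article)  # [1, 2, 3, 1]
```

The long-tailed shapes reported in Fig. 1b,c correspond to histograms over `per_researcher.values()` and `per_article`.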
**Unsupervised decomposition of AI alignment research into distinct clusters.**
Given the collected AI alignment research articles from the Alignment Forum and arXiv, we were curious whether we could use the text to understand the current state of research. To this end, we used the Allen SPECTER model [41] to compute a sentence embedding, followed by a UMAP projection [42] to obtain a low-dimensional representation (Fig. 2a). While there is a tendency for articles from different sources to occupy different regions of the embedding, the transition between the Alignment Forum and arXiv is fluid (Fig. 2b). Interestingly, when visualizing the publication date, we noticed that the embedding captures part of the temporal evolution of the field (Fig. 2c).

Due to the relative youth of the field, there is no universally accepted decomposition of AI alignment research into subfields [22,28,43]. To see if we can produce a useful, unbiased decomposition of the research landscape, we applied k-means clustering to the SPECTER embedding to obtain five distinct clusters (see Methods for details).

In summary, combined semantic embedding and dimensionality reduction produce a compact visualization of AI alignment research.

¹The feature to add multiple authors didn't become available to all users until 2019, and many people may still not be aware of how to do it.

| source | domain | # of articles |
|---|---|---|
| **Alignment Forum** | alignmentforum.org<br>lesswrong.com | 2,138<br>28,252 |
| **arXiv** | AI alignment research (level-0)<br>AI research (level-1)<br>arXiv.org/search/?query=quantum<br>arXiv.org/list/cs.AI (filtered) | 707<br>1,679<br>1,000<br>4,621 |
| **Books** | (available upon request) | 23 |
| **Blogs** | aiimpacts.org<br>aipulse.org<br>aisafety.camp<br>carado.moe<br>cold-takes.com<br>deepmindsafetyresearch.medium.com<br>generative.ink<br>gwern.net<br>intelligence.org<br>jsteinhardt.wordpress.com<br>qualiacomputing.com<br>vkrakovna.wordpress.com<br>waitbutwhy.com<br>yudkowsky.net | 227<br>23<br>8<br>59<br>111<br>10<br>17<br>7<br>479<br>39<br>278<br>43<br>2<br>23 |
| **Newsletter** | rohinshah.com/alignment-newsletter/ summaries | 420 |
| **Reports** | pdf-only articles<br>distill.pub | 323<br>49 |
| **Audio transcripts** | youtube.com playlists 1 & 2<br>Assorted transcripts<br>Interviews with AI researchers [33] | 457<br>25<br>12 |
| **Wikis** | arbital.com<br>lesswrong.com (Concepts Portal)<br>stampy.ai | 223<br>227<br>132 |
| **Total:** | Total token count: 89,240,129<br>Total word count: 53,550,146<br>Total character count: 351,757,163 | |

Table 1: **Different sources of text included in the dataset alongside the number of articles per source.** Color of row indicates that data was analyzed as AI alignment research articles (green) or baseline (gray), or that the articles were added to the dataset as a result of the analysis in Fig. 4 (purple). Level-0 and level-1 articles are defined in Fig. 4c. For details about our collection procedure, see the Methods section.

[Figure 1, panels a-c; the panel b inset lists six researchers: S. Armstrong, S. Garrabrant, A. Demski, J. Wentworth, P. Christiano, S. Levine.]

Figure 1: **Alignment research across a community forum and a preprint server.** (**a**) Number of articles published as a function of time on the Alignment Forum (AF; purple) and the arXiv preprint server (arXiv; green). (**b**) Histogram of the number of articles per researcher published on either AF or arXiv. Inset shows names of six researchers with more than 60 articles. Note the logarithmic y-axis. (**c**) Histogram of the number of researchers per article on AF (purple) and arXiv (green). Note the logarithmic y-axis.

Figure 2: **Dimensionality reduction and unsupervised clustering of alignment research.** (**a**) Schematic of the embedding and dimensionality reduction. After concatenating title and abstract of articles, we embed the resulting string with the Allen SPECTER model [41], and then perform UMAP dimensionality reduction with n_neighbors=250. (**b**) UMAP embedding of articles with color indicating the source (AF, purple; arXiv, green). (**c**) UMAP embedding of articles with color indicating date of publication. Arrows superimposed to indicate direction of temporal evolution. (**d**) UMAP embedding of articles with color indicating cluster membership as determined with k-means (k=5). Inset shows sum of residuals as a function of clusters k, with an arrow highlighting the chosen number of clusters.
**Research dynamics vary across the identified clusters.**
Having identified five distinct research clusters, we asked ourselves whether we could find natural descriptions of research topics and prominent researchers. Therefore, we inspected which researchers tend to publish the highest number of articles in each cluster (Tab. 2). Even though the names of researchers did not enter into the Allen SPECTER sentence embedding (Fig. 2a), we observed that different researchers tend to dominate different research clusters. The distribution of researchers across clusters led us to assign putative labels to the clusters (Fig. 3a):

- **cluster one**: _Agent alignment_ is concerned with the problem of aligning agentic systems, i.e. those where an AI performs actions in an environment and is typically trained via reinforcement learning.
- **cluster two**: _Alignment foundations_ is concerned with _deconfusion_ research, i.e. the task of establishing formal and robust conceptual foundations for current and future AI alignment research.
- **cluster three**: _Tool alignment_ is concerned with the problem of aligning non-agentic (tool) systems, i.e. those where an AI transforms a given input into an output. The current, prototypical example of tool AIs is the "large language model" [35,44].
- **cluster four**: _AI governance_ is concerned with how humanity can best navigate the transition to advanced AI systems. This includes focusing on the political, economic, military, governance, and ethical dimensions [12].
- **cluster five**: _Value alignment_ is concerned with understanding and extracting human preferences and designing methods that stop AI systems from acting against these preferences.
To corroborate these putative labels, we computed a word cloud representation of the articles (Sup. Fig. 1). We found the recurring words specific to each cluster to be in good agreement with the labels. We also note that our labels are consistent with our observation that alignment foundations research is the historical origin of AI alignment research (Fig. 2c, Fig. 3b,c). Furthermore, we observe that theoretical research (alignment foundations, value alignment, AI governance) tends to be published on the Alignment Forum. In contrast, applied research (agent alignment, tool alignment) tends to be published on arXiv (Fig. 2b, Fig. 3d). Finally, we note that in the alignment foundations cluster, a few individual researchers tend to produce a disproportionate number of research articles (Fig. 3e).

In combination, these arguments make us hopeful that our unsupervised decomposition of AI alignment research mirrors relevant structures existing in the field. We hope to leverage the decomposition to provide researchers structured access to the existing literature in future work.

| cluster 1; n = 567 (agent alignment) | cluster 2; n = 988 (alignment foundations) | cluster 3; n = 593 (tool alignment) | cluster 4; n = 383 (AI governance) | cluster 5; n = 670 (value alignment) |
|---|---|---|---|---|
| S. Levine (55) | S. Armstrong (154) | J. Steinhardt (20) | D. Kokotajlo (21) | S. Armstrong (54) |
| P. Abbeel (34) | S. Garrabrant (95) | D. Hendrycks (17) | A. Dafoe (19) | S. Byrnes (32) |
| A. Dragan (29) | A. Demski (94) | E. Hubinger (14) | G. Worley III (11) | P. Christiano (29) |
| S. Russell (23) | J. Wentworth (57) | P. Christiano (13) | J. Clark (10) | R. Ngo (25) |
| S. Armstrong (22) | "Diffractor" (44) | P. Kohli (11) | S. Armstrong (9) | R. Shah (25) |

Table 2: **Researchers with the highest number of articles per cluster.** Clusters as determined in Fig. 2, with the number of articles n per cluster. The number in brackets after a researcher's name indicates the number of articles published by that researcher. Note: "Diffractor" is an undisclosed pseudonym.

Figure 3: **Characteristics of research clusters corroborate potential usefulness of decomposition.** (**a**) UMAP embedding of articles with color indicating cluster membership as in Fig. 2d. Labels assigned to each cluster are putative descriptions of a common research focus across articles in the cluster. (**b**) Number of articles published per year, colored by cluster membership. (**c**) Fraction of articles published by cluster membership as a function of time. (**d**) Fraction of articles from AF or arXiv as a function of cluster membership. (**e**) GINI inequality coefficient of articles per researcher as a function of article cluster membership.
**Leveraging the dataset to train an AI alignment research classifier.**
When quantifying the number of articles across different sources, we noticed a dramatic drop-off in articles published on the arXiv after 2019 (Fig. 1a). Especially in contrast with the continued strong increase in articles published on the Alignment Forum, we suspected that our data collection might have missed some more recent, relevant work¹.

To automatically detect articles published more recently, we decided to train a logistic regression classifier on the semantic embeddings of arXiv articles. Besides the AI alignment research articles already included in our dataset ("arXiv level-0"; Fig. 4a, green), we also collected all arXiv articles cited by level-0 articles that were not level-0 articles themselves ("arXiv level-1"; Fig. 4a, blue). We trained the classifier on a training set (80%) to distinguish level-0 from level-1 articles and evaluated performance on a separate test set (20%). The classifier achieved good performance (AUC = 0.75; Fig. 4b, inset), reliably rejecting level-1 articles and correctly identifying a large portion of level-0 articles (Fig. 4b). To test whether the classifier robustly generalizes beyond AI research, we tested it on 1,000 recently published articles on quantum physics and on Alignment Forum articles. We found that the classifier reliably rejects quantum physics articles and accepts Alignment Forum articles (Fig. 4b,d).

¹In particular, for our dataset, we manually extended an existing collection of arXiv articles from 2020 [31]; see the Methods section for details.
Most AI alignment research articles on the arXiv are published in the cs.AI section. Therefore, we used the arXiv API [45] to collect all articles from that section (Fig. 4c). When applying our classifier to the semantic embeddings of the cs.AI articles, we observed a slightly bimodal distribution, with most articles receiving a score close to 0% and some articles receiving a score close to 100% (Fig. 4d). Motivated by the distribution of scores of Alignment Forum articles and by individual inspection, we chose a threshold at 75%, considered articles above that threshold as relevant to AI alignment research, and added them to our dataset. As anticipated, we found that the number of AI alignment-relevant arXiv articles increases as rapidly over time as the articles published on the Alignment Forum (Fig. 4e). Finally, to verify that the addition of AI alignment-relevant arXiv articles does not affect our unsupervised decomposition, we repeated the UMAP dimensionality reduction on the updated dataset. We found that the cluster structure is not disrupted (Fig. 4f).

In conclusion, our analysis demonstrates that semantic embedding can capture relevant characteristics of AI alignment research and that automatic filtering of new publications might be feasible.
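The score-and-threshold step described above can be sketched as follows, with random toy vectors standing in for the 768-dimensional SPECTER embeddings (the data and dimensions below are illustrative only):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-ins for embeddings: level-0 (alignment) vs. level-1 (cited) articles.
level0 = rng.normal(loc=1.0, size=(100, 8))
level1 = rng.normal(loc=-1.0, size=(100, 8))
X = np.vstack([level0, level1])
y = np.array([1] * 100 + [0] * 100)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Score unseen articles and keep those above the 75% cutoff used in the paper.
new_articles = rng.normal(loc=1.0, size=(10, 8))
scores = clf.predict_proba(new_articles)[:, 1]
relevant = new_articles[scores > 0.75]
```

The `predict_proba` column for the positive class plays the role of the classifier score plotted in Fig. 4b,d.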
Figure 4: **An AI alignment research classifier for filtering new publications.** (**a**) Top: Illustration of arXiv level-0 articles (alignment research; green) and level-1 articles (cited by alignment research articles; blue). Bottom: Schematic of the test-train split (20%-80%) for training of a logistic regression classifier. (**b**) Fraction of articles as a function of classifier score for arXiv level-0 (green), level-1 (blue), and arXiv articles on quantum physics (grey). (**c**) Illustration of the procedure for filtering arXiv articles. After querying articles from the cs.AI section of arXiv, the logistic regression classifier assigns a score between 0 and 1. (**d**) Fraction of articles as a function of classifier score for articles from the cs.AI section of arXiv (grey) and AF (purple). Dashed line indicates the cutoff for classifying articles as arXiv level-0 (75%). (**e**) Number of articles published as a function of time on AF (purple) and arXiv (green), according to the cutoff in panel **d**. (**f**) Left inset: Original UMAP embedding from Fig. 2d. Right: UMAP embedding of all original articles and updated arXiv articles, with color indicating cluster membership as in Fig. 2d or that the article was filtered from the arXiv (gray).

**Discussion**

The field of AI alignment research is growing quickly, with many researchers publishing articles on diverse topics. We found that semantic embedding and dimensionality reduction can produce a compact visualization of AI alignment research. This decomposition of AI alignment research mirrors known structures in the field, demonstrating that semantic embedding can capture relevant characteristics of AI alignment research. Furthermore, we demonstrate the possible feasibility of automatically detecting new publications relevant to AI alignment research. In the future, we hope that our decomposition can provide researchers with structured access to the existing literature.
**Tools for alignment researchers.** Our research suggests several exciting possible applications for improving the research landscape in AI alignment research. We have begun to explore this potential by developing several prototypes that use the collected dataset to interactively explore semantic embeddings (Sup. Fig. 2), to provide summaries of long articles (Sup. Fig. 3), or to search and compare articles (Sup. Fig. 4). Thanks to the focus on speed and the openness to innovation of the AI alignment research community, we believe that tools tailored to this community might reach broad adoption and help accelerate research efforts.
**Paradigmatic AI alignment research.** In the language of Thomas Kuhn [27], the successive transition from one paradigm to another via revolution is the usual developmental pattern of mature science. Some researchers argue that AI alignment research is pre-paradigmatic, meaning that it has not yet converged on a single, dominant paradigm or approach. While our research demonstrates that decomposition of AI alignment research into meaningful subfields is possible, we note that the choice of the number of subfields has a subjective component (Fig. 2d). Furthermore, the semantic similarity between articles in a cluster does not imply similarity in methodology or underlying research agenda. However, we do not believe that this implies the impossibility of progress. In fact, the current exploratory nature of AI alignment research might be a strength, as exploration helps to avoid ossification.
**Limitations.** Especially due to the rapid expansion of the field (Fig. 1), classifications and descriptions of the state-of-the-art might become inaccurate soon after publication. While the observation that our clustering remains stable after including many articles not used for the original clustering (Fig. 4) makes us hopeful, we still plan to carefully monitor the field and publish regular updates to our analysis.

The decision to focus on the two largest, non-redundant sources of articles (Alignment Forum and arXiv) might systematically exclude certain lines of research and thus bias our analysis. However, as a substantial fraction of blog posts, reports, and the alignment newsletter tend to be cross-posted or announced on the Alignment Forum, we think a strong bias is unlikely.

In summary, by collecting a comprehensive dataset of published AI alignment research literature, we demonstrate rapid growth of the field over the last five years and identify emerging directions of research through unbiased clustering.
**Methods**

**Data collection and inclusion criteria.**
- **Alignment Forum & LessWrong:** We extracted all posts on the forum viewer website [GreaterWrong.com](https://www.greaterwrong.com/) on March 21st, 2022 (dataset used for the analysis in this article) and June 4th (dataset published). We excluded articles with the tag "event", which are published for coordinating meetups.
- **arXiv:** We extended an existing collection of AI alignment research arXiv articles [31] from 2020 with relevant publications published since then ("arXiv level-0"). We started with an existing bibliography of alignment literature [31] and augmented that collection with two other bibliographies [46,47], articles mentioned in the alignment newsletter, and articles we identified ourselves. We excluded articles that were not about AI alignment research.
- **Books:** We converted ebooks into plain text files with pandoc. No text was excluded.
- **Blogs:** We extracted individual articles from AI alignment research-relevant (as determined by the authors) blogs with the requests and BeautifulSoup packages. No text was excluded.
- **Newsletter:** We extracted summaries from the publicly available list of summaries and matched them with the respective original articles.
- **Reports:** We extracted additional published articles that were only available as pdf files by converting these files with grobid and cleaning the resulting output. No text was excluded.
- **Audio transcripts:** We were able to locate some transcripts of interviews available online. For the rest, we used a voice-to-text service (otter.ai) to extract transcripts from AI alignment research-relevant (as determined by the authors) recordings. We hired contractors to clean the resulting transcripts to correct formatting problems and spelling mistakes. After cleaning, no text was excluded.
- **Wikis:** We extracted articles from three open wikis on AI alignment research (arbital.com, lesswrong.com's Concepts Portal, and stampy.ai) through the export option on the website.
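The blog-extraction step above can be sketched with requests and BeautifulSoup. This is a minimal illustration, assuming a blog whose posts sit in `<article>` tags; the URL and tag layout are hypothetical, not taken from the paper:

```python
from bs4 import BeautifulSoup

def extract_articles(html: str) -> list[str]:
    """Return the plain text of each <article> element in an HTML page."""
    soup = BeautifulSoup(html, "html.parser")
    return [
        article.get_text(separator=" ", strip=True)
        for article in soup.find_all("article")
    ]

# Fetching a live page would use requests (URL is hypothetical):
#   import requests
#   html = requests.get("https://example-alignment-blog.org/archive").text
# Here we demonstrate on a static snippet instead:
html = (
    "<article><h1>Post</h1><p>Hello world.</p></article>"
    "<article><p>Second.</p></article>"
)
texts = extract_articles(html)
print(texts)
```

In practice, each blog needs its own selector for where post bodies live; `<article>` is only one common convention.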
**Data analysis.** We performed the dataset collection with Python 3.7 on commodity hardware and Google Colab, and all data analysis with Python 3.7 in Google Colab. We created plots with the seaborn package [48] and post-processed them in Adobe Illustrator.
| **Semantic embedding.** We used the Allen | |
| SPECTER model [41] through the huggingface sentence transformer library [49] for embedding articles into a 768 dimensional | |
| vector space. The SPECTER model requires each article as <Title> + <SEP> + | |
| <Abstract>, where <SEP> is the separator | |
| token of the tokenizer. For articles from | |
| the arXiv, we used the author-submitted | |
| abstract as the <Abstract>. As articles from | |
| the Alignment Forum do not always have | |
| an author-submitted abstract, we instead | |
| used the first 2-5 paragraphs of the article | |
| as the <Abstract>. | |
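A minimal sketch of this embedding step. The helper names are ours, and `allenai-specter` is the sentence-transformers checkpoint name we assume for the model; the fallback to the first paragraphs mirrors the rule described above:

```python
def specter_input(title: str, abstract: str,
                  body_paragraphs=None, sep: str = "[SEP]") -> str:
    """Build the <Title> + <SEP> + <Abstract> string SPECTER expects.

    If no author-written abstract exists (e.g. for Alignment Forum
    posts), fall back to the first few paragraphs of the body.
    """
    if not abstract and body_paragraphs:
        abstract = " ".join(body_paragraphs[:5])
    return f"{title} {sep} {abstract}"


def embed_articles(articles, model_name: str = "allenai-specter"):
    """Embed each article into a 768-dimensional vector."""
    # Deferred import: sentence-transformers is a heavy dependency
    # and downloads the model checkpoint on first use.
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer(model_name)
    texts = [
        specter_input(a["title"], a.get("abstract", ""), a.get("paragraphs"))
        for a in articles
    ]
    return model.encode(texts)  # shape: (len(articles), 768)
```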
| **Dimensionality reduction.** To compute | |
| a two-dimensional representation of the semantic embedding, we used the Python | |
| UMAP package [50] with a neighborhood | |
| parameter of n_neighbors=250. Using | |
| a smaller or larger neighborhood did not | |
| affect the results, but at very small neighborhood values (n_neighbors<40) the | |
| embedding became unstable. | |
| **Unsupervised clustering.** While we explored different clustering algorithms, we | |
| eventually converged on the k-means implementation of the scikit-learn package [51], | |
| which is straightforward to interpret while | |
| producing robust clustering across multiple | |
| instantiations. | |
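A sketch of this clustering step with scikit-learn (the function name is ours; n_clusters=5 reflects the five clusters discussed in the paper, and a fixed random_state makes runs reproducible):

```python
import numpy as np
from sklearn.cluster import KMeans


def cluster_embeddings(embeddings: np.ndarray, n_clusters: int = 5,
                       seed: int = 0):
    """Cluster article embeddings with k-means.

    Returns a label per article and the cluster centroids, which can
    be inspected directly to interpret each cluster.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = km.fit_predict(embeddings)
    return labels, km.cluster_centers_
```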
| **Statistics.** All statistics were computed | |
| with the seaborn package [48] in Python, with | |
| the exception of the Gini coefficient in | |
| Fig. 3, which we computed as half of the | |
| relative mean absolute difference [52], | |
| $$G = \frac{\sum_{i=1}^{n} \sum_{j=1}^{n} \lvert x_i - x_j \rvert}{2 n^2 \bar{x}},$$ | |
| where $x_i$ is the number of articles of | |
| each researcher and $\bar{x}$ is the average number of | |
| articles across all researchers. | |
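The Gini coefficient as half of the relative mean absolute difference translates directly into code (the function name is ours):

```python
import numpy as np


def gini(x) -> float:
    """Gini coefficient as half of the relative mean absolute difference:
    G = sum_i sum_j |x_i - x_j| / (2 * n**2 * mean(x)).
    """
    x = np.asarray(x, dtype=float)
    # Pairwise absolute differences via broadcasting: an (n, n) matrix.
    diffs = np.abs(x[:, None] - x[None, :]).sum()
    return diffs / (2 * len(x) ** 2 * x.mean())
```

Perfect equality gives 0; with n researchers and all articles written by one of them, the coefficient approaches (n-1)/n.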
| **Logistic regression classifier.** To train | |
| the AI alignment research classifier, we | |
| used the LogisticRegression model of the | |
| scikit-learn package in Python [51] with an | |
| increased number of maximum iterations, | |
| max_iter=1000. For training, we used 80% | |
| of level-0 and level-1 arXiv papers. For | |
| evaluation in Fig. 4b, we used the remaining 20% of level-0 and level-1 arXiv papers as well as 1000 arbitrarily chosen articles on quantum physics. For the analysis in Fig. 4c-f, we used the arXiv API [45] | |
| to collect all articles published in the cs.AI | |
| section since its inception. | |
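A sketch of the classifier training described above, with the stated 80/20 split and max_iter=1000 (the function and variable names are ours; labels mark level-0/level-1 alignment articles as 1 and other articles as 0):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def train_alignment_classifier(embeddings, labels, seed: int = 0):
    """Train a logistic-regression classifier on article embeddings.

    Uses an 80/20 train/test split; stratification keeps the class
    balance identical in both splits.
    """
    X_train, X_test, y_train, y_test = train_test_split(
        embeddings, labels, test_size=0.2, random_state=seed,
        stratify=labels)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, y_train)
    return clf, clf.score(X_test, y_test)
```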
| **Code and data availability.** The | |
| dataset and all code for collecting it | |
| are available on GitHub at | |
| [https://github.com/moirage/alignment-research-dataset.git](https://github.com/moirage/alignment-research-dataset.git). Code for the data | |
| analysis is available upon request. | |
| **Acknowledgments** | |
| JK and LR were supported by funding | |
| from the Longterm Future Fund. We | |
| thank Daniel Clothiaux for help with writing the code and extracting articles. We | |
| thank Remmelt Ellen, Adam Shimi, and | |
| Arush Tagade for feedback on the research. | |
| We thank Chu Chen, Ömer Faruk Şen, | |
| Hey, Nihal Mohan Moodbidri, and Trinity | |
| Smith for cleaning the audio transcripts. | |
| **References** | |
| 1. Yudkowsky, E. The AI alignment | |
| problem: why it is hard, and where | |
| to start. _Symbolic Systems Distinguished_ | |
| _Speaker_ (2016). | |
| 2. Christian, B. _The alignment problem:_ | |
| _Machine learning and human values_ | |
| (WW Norton & Company, 2020). | |
| 3. Yudkowsky, E. _The Rocket Alignment_ | |
| _Problem_ en-US. 2018. | |
| 4. Russell, S. in _Human-Like Machine_ | |
| _Intelligence_ 3–23 (Oxford University | |
| Press Oxford, 2021). | |
| 5. Gabriel, I. Artificial intelligence, values, and alignment. _Minds and ma-_ | |
| _chines_ **30,** 411–437 (2020). | |
| 6. Ouyang, L., Wu, J., Jiang, X., | |
| Almeida, D., Wainwright, C. L., | |
| Mishkin, P., Zhang, C., Agarwal, S., | |
| Slama, K., Ray, A., _et al._ Training language models to follow instructions | |
| with human feedback. _arXiv preprint_ | |
| _arXiv:2203.02155_ (2022). | |
| 7. Kenton, Z., Everitt, T., Weidinger, L., | |
| Gabriel, I., Mikulik, V. & Irving, G. | |
| Alignment of language agents. _arXiv_ | |
| _preprint arXiv:2103.14659_ (2021). | |
| 8. Dafoe, A., Bachrach, Y., Hadfield, G., | |
| Horvitz, E., Larson, K. & Graepel, T. | |
| _Cooperative AI: machines must learn to_ | |
| _find common ground_ 2021. | |
| 9. Askell, A., Bai, Y., Chen, A., Drain, | |
| D., Ganguli, D., Henighan, T., Jones, | |
| A., Joseph, N., Mann, B., DasSarma, | |
| N., _et al._ A General Language Assistant as a Laboratory for Alignment. _arXiv preprint arXiv:2112.00861_ | |
| (2021). | |
| 10. Christiano, P. F., Leike, J., Brown, | |
| T., Martic, M., Legg, S. & Amodei, | |
| D. Deep reinforcement learning from | |
| human preferences. _Advances in neural_ | |
| _information processing systems_ **30** (2017). | |
| 11. Hubinger, E., van Merwijk, C., | |
| Mikulik, V., Skalse, J. & Garrabrant, | |
| S. Risks from learned optimization | |
| in advanced machine learning systems. _arXiv preprint arXiv:1906.01820_ | |
| (2019). | |
| 12. Dafoe, A. AI governance: a research | |
| agenda. _Governance of AI Program, Fu-_ | |
| _ture of Humanity Institute, University_ | |
| _of Oxford: Oxford, UK_ **1442,** 1443 | |
| (2018). | |
| 13. Grace, K., Salvatier, J., Dafoe, A., | |
| Zhang, B. & Evans, O. When will | |
| AI exceed human performance? Evidence from AI experts. _Journal of Arti-_ | |
| _ficial Intelligence Research_ **62,** 729–754 | |
| (2018). | |
| 14. Sevilla, J., Heim, L., Ho, A., Besiroglu, T., Hobbhahn, M. & Villalobos, P. Compute trends across | |
| three eras of machine learning. _arXiv_ | |
| _preprint arXiv:2202.05924_ (2022). | |
| 15. Bostrom, N. _Superintelligence_ (Dunod, | |
| 2017). | |
| 16. Ord, T. _The precipice: Existential risk_ | |
| _and the future of humanity_ (Hachette | |
| Books, 2020). | |
| 17. Carlsmith, J. Is Power-Seeking AI an | |
| Existential Risk? (2021). | |
| 18. Christiano, P. What failure looks like. | |
| _Alignment Forum_ (2019). | |
| 19. Beckstead, N. & Muehlhauser, L. _Potential Risks from Advanced Artificial Intelligence_ [https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence) (2022). | |
| 20. Foundation, F. _Potential Risks from Advanced Artificial Intelligence_ [https://ftxfuturefund.org/](https://ftxfuturefund.org/) (2022). | |
| 21. Infrastructure, L. _Alignment Forum_ [https://www.alignmentforum.org/](https://www.alignmentforum.org/) (2022). | |
| 22. Christiano, P. _Current work in AI alignment_ [https://forum.effectivealtruism.org/posts/63stBTw3WAW6k45dY/paul-christiano-current-work-in-ai-alignment](https://forum.effectivealtruism.org/posts/63stBTw3WAW6k45dY/paul-christiano-current-work-in-ai-alignment) (2022). | |
| 23. Cotra, A. _Draft report on AI timelines_ [https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines) (2022). | |
| 24. Institute, M. I. R. _Late 2021 MIRI Conversations_ [https://intelligence.org/late-2021-miri-conversations/](https://intelligence.org/late-2021-miri-conversations/) (2022). | |
| 25. Hyvärinen, A.-M. How I failed to | |
| form views on AI safety. _Effective Al-_ | |
| _truism Forum_ (2022). | |
| 26. Wentworth, J. S. How To Get Into | |
| Independent Research On Alignment/Agency. _Alignment_ _Forum_ | |
| (2021). | |
| 27. Kuhn, T. S. _The structure of scien-_ | |
| _tific revolutions_ (Chicago University of | |
| Chicago Press, 1970). | |
| 28. Shimi, A. Epistemological Framing | |
| for AI Alignment Research. _Alignment_ | |
| _Forum_ (2021). | |
| 29. Miles, R. _Stampy's Wiki_ [https://stampy.ai/wiki/Stampy%5C%27s_Wiki](https://stampy.ai/wiki/Stampy%5C%27s_Wiki) (2022). | |
| 30. Shah, R. _Alignment Newsletter_ [https://rohinshah.com/alignment-newsletter/](https://rohinshah.com/alignment-newsletter/) (2022). | |
| 31. Riedel, J. & Deibel, A. _AI Safety Papers_ [https://ai-safety-papers.quantifieduncertainty.org/](https://ai-safety-papers.quantifieduncertainty.org/) (2022). | |
| 32. Ought. _Elicit: The AI research assistant_ [https://elicit.org](https://elicit.org) (2022). | |
| 33. Gates, V. _Transcripts of interviews with AI researchers_ [https://www.lesswrong.com/posts/LfHWhcfK92qh2nwku/transcripts-of-interviews-with-ai-researchers](https://www.lesswrong.com/posts/LfHWhcfK92qh2nwku/transcripts-of-interviews-with-ai-researchers) (2022). | |
| 34. Arnold, R. _Announcing AlignmentForum.org Beta_ [https://www.lesswrong.com/posts/JiMAMNAb55Qq24nES/announcing-alignmentforum-org-beta](https://www.lesswrong.com/posts/JiMAMNAb55Qq24nES/announcing-alignmentforum-org-beta) (2022). | |
| 35. Bommasani, R., Hudson, D. A., Adeli, | |
| E., Altman, R., Arora, S., von Arx, S., | |
| Bernstein, M. S., Bohg, J., Bosselut, | |
| A., Brunskill, E., _et al._ On the opportunities and risks of foundation models. _arXiv preprint arXiv:2108.07258_ | |
| (2021). | |
| 36. Brundage, M., Avin, S., Wang, J., | |
| Belfield, H., Krueger, G., Hadfield, | |
| G., Khlaaf, H., Yang, J., Toner, H., | |
| Fong, R., _et al._ Toward trustworthy | |
| AI development: mechanisms for supporting verifiable claims. _arXiv preprint_ | |
| _arXiv:2004.07213_ (2020). | |
| 37. Chen, M., Tworek, J., Jun, H., Yuan, | |
| Q., Pinto, H. P. d. O., Kaplan, | |
| J., Edwards, H., Burda, Y., Joseph, | |
| N., Brockman, G., _et al._ Evaluating large language models trained on | |
| code. _arXiv preprint arXiv:2107.03374_ | |
| (2021). | |
| 38. Moshontz, H., Ebersole, C. R., Weston, S. J. & Klein, R. A. A guide for | |
| many authors: Writing manuscripts in | |
| large collaborations. _Social and Person-_ | |
| _ality Psychology Compass_ **15,** e12590 | |
| (2021). | |
| 39. Põder, E. Let's correct that small mistake. _Journal of the American Society for_ | |
| _Information Science and Technology_ **61,** | |
| 2593–2594 (2010). | |
| 40. Lozano, G. A. The elephant in the | |
| room: multi-authorship and the assessment of individual researchers. | |
| _Current Science_ **105,** 443–445 (2013). | |
| 41. Cohan, A., Feldman, S., Beltagy, | |
| I., Downey, D. & Weld, D. S. | |
| SPECTER: Document-level representation learning using citation-informed transformers. _arXiv preprint_ | |
| _arXiv:2004.07180_ (2020). | |
| 42. McInnes, L., Healy, J. & Melville, | |
| J. Umap: Uniform manifold approximation and projection for | |
| dimension reduction. _arXiv preprint_ | |
| _arXiv:1802.03426_ (2018). | |
| 43. Critch, A. _Some AI research areas and their relevance to existential safety_ [https://www.alignmentforum.org/posts/hvGoYXi2kgnS3vxqb/some-ai-research-areas-and-their-relevance-to-existential-1](https://www.alignmentforum.org/posts/hvGoYXi2kgnS3vxqb/some-ai-research-areas-and-their-relevance-to-existential-1) (2022). | |
| 44. Weidinger, L., Mellor, J., Rauh, M., | |
| Griffin, C., Uesato, J., Huang, P.-S., | |
| Cheng, M., Glaese, M., Balle, B., | |
| Kasirzadeh, A., _et al._ Ethical and social | |
| risks of harm from Language Models. _arXiv preprint arXiv:2112.04359_ | |
| (2021). | |
| 45. University, C. _arXiv API_ [https://arxiv.org/help/api/](https://arxiv.org/help/api/) (2022). | |
| 46. Krakovna, V. _AI safety resources_ [https://vkrakovna.wordpress.com/ai-safety-resources/](https://vkrakovna.wordpress.com/ai-safety-resources/) (2022). | |
| 47. Larks. _2021 AI Alignment Literature Review and Charity Comparison_ [https://www.alignmentforum.org/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison](https://www.alignmentforum.org/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison) (2022). | |
| 48. Waskom, M. L. seaborn: statistical | |
| data visualization. _Journal of Open_ | |
| _Source Software_ **6,** 3021 (2021). | |
| 49. Reimers, N. & Gurevych, I. _Sentence-_ | |
| _BERT:_ _Sentence_ _Embeddings_ _using_ | |
| _Siamese BERT-Networks_ in _Proceedings_ | |
| _of the 2019 Conference on Empirical_ | |
| _Methods in Natural Language Process-_ | |
| _ing_ (Association for Computational | |
| Linguistics, 2019). | |
| 50. Sainburg, T., McInnes, L. & Gentner, | |
| T. Q. Parametric UMAP Embeddings for Representation and Semi-supervised Learning. _Neural Computation_ **33,** 2881–2907 (2021). | |
| 51. Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., | |
| Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., | |
| Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M. & | |
| Duchesnay, E. Scikit-learn: Machine | |
| Learning in Python. _Journal of Ma-_ | |
| _chine Learning Research_ **12,** 2825–2830 | |
| (2011). | |
| 52. Wikipedia contributors. _Gini coeffi-_ | |
| _cient β Wikipedia, The Free Encyclope-_ | |
| _dia_ [Online; accessed 27-May-2022]. | |
| 2022. | |
| 53. Wang, B. _Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX_ [https://github.com/kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax) (2021). | |
| 54. Wang, B. & Komatsuzaki, A. _GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model_ [https://github.com/kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax) (2021). | |
| **Appendix** | |
| _[Wordcloud panels: cluster 1 - reinforcement learning; cluster 2 - agent foundations; cluster 3 - language modeling; cluster 4 - AI Governance; cluster 5 - value learning]_ | |
| Supplementary Figure 1: **Word frequency visualization for different clusters.** Wordcloud representation (word_cloud package in Python) of the most commonly used words | |
| in articles of the five identified clusters in Fig. 2. The following words occurred in all | |
| clusters very often and were thus removed from the wordcloud: "will", "post", "problem", | |
| "example", "one", "SEP", "AI", "agent", "human", "model", and "models". | |
| Supplementary Figure 2: **Interactive embedding of AI alignment literature.** An interactive plot (plotly.com) of a UMAP projection of AI alignment research that displays | |
| the title of a selected article. | |
| Supplementary Figure 3: **Summarization tool.** An early prototype of a summarization | |
| service for AI alignment research articles. We fine-tuned a 6-billion-parameter GPT-J language model [53,54] | |
| on the collected dataset and designed a prompt that produces a short summary of a provided AI alignment research article. | |
| Supplementary Figure 4: **Prototype of semantic search engine.** After entering the | |
| URL of an Alignment Forum post (top left), the article is extracted (bottom left) and embedded with the Allen SPECTER model [41]. The resulting embedding is compared against | |
| all stored embeddings using a vector-database search service ([Pinecone.io](https://www.pinecone.io/)) to retrieve similar | |
| articles (middle column). By clicking the "Explain" button on a search result, a query | |
| with the abstract of the original article and the search result is sent to the OpenAI API | |
| to generate an analysis of similarities and differences (right column). | |