{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:11:37.210980Z" }, "title": "Emergent Structures and Training Dynamics in Large Language Models", "authors": [ { "first": "Ryan", "middle": [], "last": "Teehan", "suffix": "", "affiliation": {}, "email": "rsteehan@gmail.com" }, { "first": "Miruna", "middle": [], "last": "Clinciu", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Oleg", "middle": [], "last": "Serikov", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Eliza", "middle": [], "last": "Szczechla", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Natasha", "middle": [], "last": "Seelam", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Shachar", "middle": [], "last": "Mirkin", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Aaron", "middle": [], "last": "Gokaslan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Cornell University", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Large language models have achieved success on a number of downstream tasks, particularly in a few-shot and zero-shot manner. As a consequence, researchers have been investigating both the kind of information these networks learn and how such information can be encoded in the parameters of the model. We survey the literature on changes in the network during training, drawing from work outside of NLP when necessary, and on learned representations of linguistic features in large language models. 
We note in particular the lack of sufficient research on the emergence of functional units (subsections of the network where related functions are grouped or organized) within large language models, and motivate future work that grounds the study of language models in an analysis of their changing internal structure during training time.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Large language models have achieved success on a number of downstream tasks, particularly in a few-shot and zero-shot manner. As a consequence, researchers have been investigating both the kind of information these networks learn and how such information can be encoded in the parameters of the model. We survey the literature on changes in the network during training, drawing from work outside of NLP when necessary, and on learned representations of linguistic features in large language models. We note in particular the lack of sufficient research on the emergence of functional units (subsections of the network where related functions are grouped or organized) within large language models, and motivate future work that grounds the study of language models in an analysis of their changing internal structure during training time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Recent advances in self-supervised learning, distributed training, and architecture improvements have enabled training massive language models (Devlin et al., 2019; Brown et al., 2020; Radford et al., 2019; Ma et al., 2020; Liu et al., 2019) . As these models have grown larger, so have their performance and generalization to new tasks. Furthermore, these techniques have also shown substantial improvements in learning multilingual (Chen et al., 2020) and multimodal representations (Radford et al., 2021) . 
These large language models (LLMs) have advanced the state of the art in few- and zero-shot tasks (Radford et al., 2019; Brown et al., 2020; Radford et al., 2021) . However, the size of these models makes them difficult to evaluate, examine, and audit. What structures emerge from training these neural networks? What internal representations do these networks learn?", "cite_spans": [ { "start": 143, "end": 164, "text": "(Devlin et al., 2019;", "ref_id": "BIBREF37" }, { "start": 165, "end": 184, "text": "Brown et al., 2020;", "ref_id": null }, { "start": 185, "end": 206, "text": "Radford et al., 2019;", "ref_id": "BIBREF82" }, { "start": 207, "end": 223, "text": "Ma et al., 2020;", "ref_id": "BIBREF67" }, { "start": 224, "end": 241, "text": "Liu et al., 2019)", "ref_id": "BIBREF63" }, { "start": 433, "end": 452, "text": "(Chen et al., 2020)", "ref_id": "BIBREF21" }, { "start": 484, "end": 506, "text": "(Radford et al., 2021)", "ref_id": null }, { "start": 605, "end": 627, "text": "(Radford et al., 2019;", "ref_id": "BIBREF82" }, { "start": 628, "end": 647, "text": "Brown et al., 2020;", "ref_id": null }, { "start": 648, "end": 669, "text": "Radford et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In part, this opacity is implicit in the models themselves. Many of the fascinating capabilities of LLMs are \"implicitly induced, not explicitly constructed\" emergent properties (Bommasani et al., 2021) . Emergent properties are those that result from the structural relations and interactions between a system's components (Ablowitz, 1939; Callebaut and Rasskin-Gutman, 2005) . One way of characterizing the emergence of useful properties from complexity is through self-organization, wherein complex systems come to develop ordered patterns from the interactions of their components (Gershenson et al., 2020) . 
Interactions between the parts of a system can produce complex global behavior, for example in the collective behavior of ants, flocking in birds (Cucker and Smale, 2007) , or in the brain and central nervous system (Dresp-Langley, 2020; Brown, 2013) . In the context of deep learning models, qualitatively different behavior has been observed during phase transitions in model size or training steps (Steinhardt, 2022) . Current research on understanding the generalization abilities of LLMs has largely focused on the degree to which they learn various linguistic features (e.g. syntax) that would support performance on diverse downstream tasks. Our goal instead is to motivate research that grounds the learning of these higher-level representations, and from there, LLMs' generalization abilities, in the emergent structures that result from self-organization within the networks.", "cite_spans": [ { "start": 178, "end": 202, "text": "(Bommasani et al., 2021)", "ref_id": null }, { "start": 324, "end": 340, "text": "(Ablowitz, 1939;", "ref_id": "BIBREF0" }, { "start": 341, "end": 376, "text": "Callebaut and Rasskin-Gutman, 2005)", "ref_id": "BIBREF20" }, { "start": 585, "end": 610, "text": "(Gershenson et al., 2020)", "ref_id": "BIBREF43" }, { "start": 759, "end": 783, "text": "(Cucker and Smale, 2007)", "ref_id": "BIBREF31" }, { "start": 829, "end": 850, "text": "(Dresp-Langley, 2020;", "ref_id": "BIBREF38" }, { "start": 851, "end": 863, "text": "Brown, 2013)", "ref_id": "BIBREF16" }, { "start": 1014, "end": 1032, "text": "(Steinhardt, 2022)", "ref_id": "BIBREF100" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To analyze LLMs themselves, we survey current research on the following topics and identify gaps in the literature. First, we turn to the development of internal representations of important features of language (e.g. syntax). 
Second, we look at the structure of the network (neurons, weights, etc.), how it evolves over time, and the emergence of functional units therein. In each case, we include not only research on trained models, but also on the changes that occur over training time (termed training dynamics). Most research has focused on the aforementioned internal representations and their connection to the downstream performance and generalization ability of LLMs, with only limited work on how the network structure changes over time and how those changes relate to the representations. We aim to motivate research that not only applies work on the emergent structures within networks from outside of NLP to LLMs, but also develops a language-specific account of useful functional units that emerge in LLMs. Moreover, we identify methods for studying emergence and self-organization in complex systems with potential applications to analyzing LLM training dynamics and behavior. We conclude with a survey of explainability methods that allow researchers to connect structure with function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Linguistic Structure Representations A significant current area of research is dedicated to interpreting language models from a linguistic point of view. The motivation is to know to what extent models \"understand\" language, and more specifically, to what extent their generalizations over language agree with the generalizations about language described by linguistics. Following the hierarchy of language levels (morphology, syntax, discourse) (Dalrymple, 2001) , experiments in probing studies typically address models' proficiency on a certain level of language. This line of research typically comes down to analyzing how linguistic structures are represented in a model's knowledge. 
Such structures represent syntagmatic/paradigmatic mechanisms of language (how language units combine and alternate, respectively). It is believed (McCoy et al., 2020) that rediscovering these structures would help models get closer to human performance on a variety of tasks.", "cite_spans": [ { "start": 446, "end": 463, "text": "(Dalrymple, 2001)", "ref_id": "BIBREF33" }, { "start": 836, "end": 856, "text": "(McCoy et al., 2020)", "ref_id": "BIBREF68" } ], "ref_spans": [], "eq_spans": [], "section": "Internal Representations", "sec_num": "2" }, { "text": "Probing Methods to Test for Linguistic Structure Probing tasks measure the linguistic awareness of a model's components, such as layers (Tenney et al., 2019) or groups of neurons (Durrani et al., 2020) , by training an auxiliary model, the probe, on annotated data. Datasets providing such linguistically annotated data are called probing datasets, and cover a wide variety of properties (parts of speech, parse trees, etc.). High performance of a probe on a linguistic task implies that the tested representation encodes the property of interest. 
Several studies using probing methods have reported high accuracy in identifying the underlying linguistic structure (Belinkov et al., 2017a,b; Peters et al., 2018; Tenney et al., 2019; Conneau et al., 2018; Zhang and Bowman, 2018; Alain and Bengio, 2017; Hewitt and Manning, 2019; Hewitt and Liang, 2019) .", "cite_spans": [ { "start": 686, "end": 712, "text": "(Belinkov et al., 2017a,b;", "ref_id": null }, { "start": 713, "end": 733, "text": "Peters et al., 2018;", "ref_id": "BIBREF74" }, { "start": 734, "end": 754, "text": "Tenney et al., 2019;", "ref_id": "BIBREF103" }, { "start": 755, "end": 776, "text": "Conneau et al., 2018;", "ref_id": "BIBREF26" }, { "start": 777, "end": 800, "text": "Zhang and Bowman, 2018;", "ref_id": "BIBREF115" }, { "start": 801, "end": 824, "text": "Alain and Bengio, 2017;", "ref_id": "BIBREF4" }, { "start": 825, "end": 850, "text": "Hewitt and Manning, 2019;", "ref_id": "BIBREF49" }, { "start": 851, "end": 874, "text": "Hewitt and Liang, 2019)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "Internal Representations", "sec_num": "2" }, { "text": "However, high performance may have confounding factors; there is uncertainty about whether probing tasks properly test if representations actually encode linguistic structure, and about how to interpret the results of probes (Hewitt and Liang, 2019; Zhang and Bowman, 2018; Voita and Titov, 2020; Pimentel et al., 2020b) . 
Toward that end, the following section reviews several probing approaches in the context of language models, and the evaluation criteria used to determine the proficiency of a probe.", "cite_spans": [ { "start": 222, "end": 246, "text": "(Hewitt and Liang, 2019;", "ref_id": "BIBREF48" }, { "start": 247, "end": 270, "text": "Zhang and Bowman, 2018;", "ref_id": "BIBREF115" }, { "start": 271, "end": 293, "text": "Voita and Titov, 2020;", "ref_id": "BIBREF105" }, { "start": 294, "end": 317, "text": "Pimentel et al., 2020b)", "ref_id": "BIBREF79" } ], "ref_spans": [], "eq_spans": [], "section": "Internal Representations", "sec_num": "2" }, { "text": "Grammatical and Semantic Probing Given the excellent performance of pre-trained representations on numerous linguistic tasks (Kitaev and Klein, 2018; Strubell et al., 2018) , several studies have explored how semantic and grammatical knowledge are encoded within language models. Syntactic and morphological probing encompasses tasks that identify grammatical structure underlying the vector representations within pre-trained models, whereas semantic probing tasks investigate what meaning is conveyed within the representation.", "cite_spans": [ { "start": 125, "end": 149, "text": "(Kitaev and Klein, 2018;", "ref_id": "BIBREF57" }, { "start": 150, "end": 172, "text": "Strubell et al., 2018;", "ref_id": "BIBREF101" } ], "ref_spans": [], "eq_spans": [], "section": "Internal Representations", "sec_num": "2" }, { "text": "Earlier work using part of speech (POS) and morphological tagging (Belinkov et al., 2017a) indicated that syntactic information may be encoded in layers of neural models. More recently, investigations have considered whether models learn to embed entire parse trees in their representations. In Hewitt and Manning (2019) , the authors outline structural probing as a method to identify hierarchical, tree-like structures from vector representations of language via the syntactic distance between embeddings. 
Their results across several large language models suggested that Transformer model encodings possess some hierarchical linguistic structure.", "cite_spans": [ { "start": 66, "end": 90, "text": "(Belinkov et al., 2017a)", "ref_id": "BIBREF10" }, { "start": 295, "end": 320, "text": "Hewitt and Manning (2019)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "Internal Representations", "sec_num": "2" }, { "text": "Several studies have conducted probing experiments in multilingual settings, highlighting syntactic generalizations in multilingual language models via structured probing. \u015eahin et al. (2020) proposed a framework for multilingual morpho-syntactic probing, with 15 probing tasks for multiple languages, showing that, while crosslingual typological regularities can be found with probing, probing dataset properties strongly impact the results (see Section 2.2 for more details about multilingual models).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Internal Representations", "sec_num": "2" }, { "text": "Probes have also been used to measure semantic information within language model representations. The authors of Belinkov et al. (2017b) posed a semantic-class labeling task and found that higher layers of a model tend to perform better at semantic tagging. Similarly, semantic labeling tasks have been used to indicate that contextualized representations may encode multiple meanings within a single vector (Yaghoobzadeh et al., 2019) . In contrast, edge probing, developed by Tenney et al. (2019) , implied that contextualized embeddings show larger gains on syntactic tasks as opposed to semantic ones (with only modest performance gains against non-contextualized baselines). There is no consensus on how exactly language levels are distributed across model layers (Rogers et al., 2020) .", "cite_spans": [ { "start": 113, "end": 136, "text": "Belinkov et al. 
(2017b)", "ref_id": "BIBREF12" }, { "start": 408, "end": 435, "text": "(Yaghoobzadeh et al., 2019)", "ref_id": "BIBREF112" }, { "start": 477, "end": 497, "text": "Tenney et al. (2019)", "ref_id": "BIBREF103" }, { "start": 775, "end": 796, "text": "(Rogers et al., 2020)", "ref_id": "BIBREF84" } ], "ref_spans": [], "eq_spans": [], "section": "Internal Representations", "sec_num": "2" }, { "text": "Information Theoretic Probing Information-theoretic probing characterizes probing tasks as a way of estimating the mutual information between an internal representation and the linguistic property of interest (Pimentel et al., 2020b; Voita and Titov, 2020; Pimentel et al., 2020a) . Many of these approaches highlight the need to formalize the \"effort\" required in encoding a linguistic property, often via some form of a control function (Pimentel et al., 2020b) . Counter-intuitively, work from Pimentel et al. (2020b) suggests that the \"best\" probes are ones that always perform highest on the task; their argument is that \"learning\" the task is equivalent to encoding the linguistic property in the initial representations. They provide approximations to calculate information gain, finding that BERT models contain only 12% more information than non-contextualized baselines.", "cite_spans": [ { "start": 199, "end": 223, "text": "(Pimentel et al., 2020b;", "ref_id": "BIBREF79" }, { "start": 224, "end": 246, "text": "Voita and Titov, 2020;", "ref_id": "BIBREF105" }, { "start": 247, "end": 270, "text": "Pimentel et al., 2020a)", "ref_id": "BIBREF78" }, { "start": 429, "end": 453, "text": "(Pimentel et al., 2020b)", "ref_id": "BIBREF79" }, { "start": 487, "end": 510, "text": "Pimentel et al. 
(2020b)", "ref_id": "BIBREF79" } ], "ref_spans": [], "eq_spans": [], "section": "Internal Representations", "sec_num": "2" }, { "text": "Critics of accuracy-based performance metrics have argued that these methods are sensitive to structure, randomization, and hyperparameter selection (Voita and Titov, 2020; Hewitt and Liang, 2019; Zhang and Bowman, 2018; Pimentel et al., 2020b) . As an alternative, the minimum description length (MDL) offers an information theoretic view on probe quality (Voita and Titov, 2020) . Formally, it describes the \"minimum number of bits required to transmit labels, knowing the representations\", where better probes are those with smaller codelengths, as they suggest the information available in the representation is sufficiently accessible to solve the task. Prior studies have shown the MDL metric is robust and resilient to randomness (Voita and Titov, 2020) . In comparison to the original POS tagging setup of Hewitt and Liang (2019) , the MDL metric consistently distinguishes between linguistic and control tasks across differences in hyperparameters and random seeds. 
Similarly, following Zhang and Bowman (2018) , evaluation using MDL revealed longer codelengths for randomly initialized models as opposed to pre-trained ones.", "cite_spans": [ { "start": 152, "end": 175, "text": "(Voita and Titov, 2020;", "ref_id": "BIBREF105" }, { "start": 176, "end": 199, "text": "Hewitt and Liang, 2019;", "ref_id": "BIBREF48" }, { "start": 200, "end": 223, "text": "Zhang and Bowman, 2018;", "ref_id": "BIBREF115" }, { "start": 224, "end": 247, "text": "Pimentel et al., 2020b)", "ref_id": "BIBREF79" }, { "start": 360, "end": 383, "text": "(Voita and Titov, 2020)", "ref_id": "BIBREF105" }, { "start": 740, "end": 763, "text": "(Voita and Titov, 2020)", "ref_id": "BIBREF105" }, { "start": 811, "end": 834, "text": "Hewitt and Liang (2019)", "ref_id": "BIBREF48" }, { "start": 1004, "end": 1027, "text": "Zhang and Bowman (2018)", "ref_id": "BIBREF115" } ], "ref_spans": [], "eq_spans": [], "section": "Internal Representations", "sec_num": "2" }, { "text": "Several studies have highlighted the need for interpretable performance scores on probes (Belinkov et al., 2017b; Peters et al., 2018; Tenney et al., 2019; Conneau et al., 2018; Zhang and Bowman, 2018; Alain and Bengio, 2017; Hall Maudslay and Cotterell, 2021) . Two common themes have emerged for evaluating the proficiency of a probe: selectivity through control tasks and high informational overlap via control functions (Hewitt and Liang, 2019; Pimentel et al., 2020b; Zhu and Rudzicz, 2020) . 
Recent work suggests that both approaches yield comparable results empirically and have similar error terms theoretically (Zhu and Rudzicz, 2020) .", "cite_spans": [ { "start": 89, "end": 113, "text": "(Belinkov et al., 2017b;", "ref_id": "BIBREF12" }, { "start": 114, "end": 134, "text": "Peters et al., 2018;", "ref_id": "BIBREF74" }, { "start": 135, "end": 155, "text": "Tenney et al., 2019;", "ref_id": "BIBREF103" }, { "start": 156, "end": 177, "text": "Conneau et al., 2018;", "ref_id": "BIBREF26" }, { "start": 178, "end": 201, "text": "Zhang and Bowman, 2018;", "ref_id": "BIBREF115" }, { "start": 202, "end": 225, "text": "Alain and Bengio, 2017;", "ref_id": "BIBREF4" }, { "start": 226, "end": 260, "text": "Hall Maudslay and Cotterell, 2021)", "ref_id": "BIBREF45" }, { "start": 421, "end": 445, "text": "(Hewitt and Liang, 2019;", "ref_id": "BIBREF48" }, { "start": 446, "end": 469, "text": "Pimentel et al., 2020b;", "ref_id": "BIBREF79" }, { "start": 470, "end": 492, "text": "Zhu and Rudzicz, 2020)", "ref_id": "BIBREF116" }, { "start": 613, "end": 636, "text": "(Zhu and Rudzicz, 2020)", "ref_id": "BIBREF116" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluating Probing Performance", "sec_num": "2.1" }, { "text": "Control Tasks Selectivity captures the trade-off between the complexity of a probe and its performance on the linguistic task. A \"good\" probe is one that performs well on linguistic tasks but poorly on control tasks, which limits the probe's ability to \"memorize\" the task (Hewitt and Liang, 2019) .", "cite_spans": [ { "start": 263, "end": 287, "text": "(Hewitt and Liang, 2019)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluating Probing Performance", "sec_num": "2.1" }, { "text": "Arguments preferring \"simpler\" probes claim that these models should find \"accessible\" information within the representations (Shi et al., 2016) . 
The simplest probes employ linear functions, yet more complex probes, including multi-layer perceptrons (MLPs) and kernel methods, have also been commonly used (Belinkov et al., 2017a; Conneau et al., 2018; White et al., 2021; Adi et al., 2017) , suggesting that some linguistic properties may be encoded non-linearly. Linear functions and MLPs remain in common use (Tenney et al., 2019) .", "cite_spans": [ { "start": 126, "end": 144, "text": "(Shi et al., 2016)", "ref_id": "BIBREF94" }, { "start": 299, "end": 323, "text": "(Belinkov et al., 2017a;", "ref_id": "BIBREF10" }, { "start": 324, "end": 345, "text": "Conneau et al., 2018;", "ref_id": "BIBREF26" }, { "start": 346, "end": 365, "text": "White et al., 2021;", "ref_id": "BIBREF108" }, { "start": 366, "end": 383, "text": "Adi et al., 2017)", "ref_id": "BIBREF1" }, { "start": 510, "end": 531, "text": "(Tenney et al., 2019)", "ref_id": "BIBREF103" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluating Probing Performance", "sec_num": "2.1" }, { "text": "Prior works within the probing literature have also explored how the size of training data can influence the performance of the probe (Zhang and Bowman, 2018; Hewitt and Liang, 2019) . In an investigation considering probes of pre-trained language models and an untrained baseline on two syntactic tasks, POS tagging and Combinatory Categorial Grammar (CCG) super-tagging (Hockenmaier and Steedman, 2007) , probes of the untrained baseline could surprisingly attain high performance compared to pre-trained models (Zhang and Bowman, 2018) . However, probe performance on the untrained baseline decreased dramatically relative to the pre-trained models when the amount of available training data was reduced. 
This suggested that trained encoders capture syntactic information, beyond simple word identities, which enables these representations to achieve high performance on the linguistic tasks.", "cite_spans": [ { "start": 134, "end": 158, "text": "(Zhang and Bowman, 2018;", "ref_id": "BIBREF115" }, { "start": 159, "end": 182, "text": "Hewitt and Liang, 2019)", "ref_id": "BIBREF48" }, { "start": 375, "end": 407, "text": "(Hockenmaier and Steedman, 2007)", "ref_id": "BIBREF51" }, { "start": 525, "end": 549, "text": "(Zhang and Bowman, 2018)", "ref_id": "BIBREF115" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluating Probing Performance", "sec_num": "2.1" }, { "text": "An extensive study on selectivity proposed several control tasks for POS tagging and dependency edge prediction (Hewitt and Liang, 2019) . Across an array of probe architectures (linear, MLP-1, MLP-2) and hyperparameters, this investigation considered the effect of the hidden state dimensionality (size), number of training examples, regularization, and early stopping. Probes with constrained hidden dimensions proved the most selective, and thus the most effective.", "cite_spans": [ { "start": 112, "end": 136, "text": "(Hewitt and Liang, 2019)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluating Probing Performance", "sec_num": "2.1" }, { "text": "Control Functions Control functions compare the mutual information between a property of interest and the representation, before and after the function is applied. The objective is used to measure the information gain of the representation. In Pimentel et al. (2020b) , control functions were used to compare contextualized BERT models against FastText (Bojanowski et al., 2017 ) and a one-hot encoding on POS tagging. Curiously, their results suggested that BERT models only marginally improved information gain against these simpler baselines.", "cite_spans": [ { "start": 243, "end": 266, "text": "Pimentel et al. 
(2020b)", "ref_id": "BIBREF79" }, { "start": 352, "end": 376, "text": "(Bojanowski et al., 2017", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluating Probing Performance", "sec_num": "2.1" }, { "text": "Multilingual large language models, such as multilingual BERT (mBERT) (Devlin et al., 2019; Devlin, 2018) , XLM (Conneau and Lample, 2019) , or XLM-R (Conneau et al., 2020a) have shown impressive results when used for (zero-shot) crosslingual transfer; that is, when the pre-trained multilingual language model is used as the basis for a task-specific model that is applied to a language on which it was not trained. Their effectiveness has been demonstrated in a wide variety of tasks, such as sentiment analysis, natural language inference, and question answering, to name a few.", "cite_spans": [ { "start": 70, "end": 91, "text": "(Devlin et al., 2019;", "ref_id": "BIBREF37" }, { "start": 92, "end": 105, "text": "Devlin, 2018)", "ref_id": "BIBREF36" }, { "start": 110, "end": 136, "text": "(Conneau and Lample, 2019)", "ref_id": "BIBREF27" }, { "start": 146, "end": 169, "text": "(Conneau et al., 2020a)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Emerging Multilingual Structures", "sec_num": "2.2" }, { "text": "Prior to the immense popularity of Transformer-based models, two approaches to using word embeddings for cross-lingual tasks had shown promising results. In the first, representations are learned separately for individual languages and then aligned to a shared space, thus producing cross-lingual word embeddings (Ruder et al., 2019) , which, in turn, are used on the target language. In the second, multilingual representations are learned by jointly training over multiple languages. 
Artetxe and Schwenk (2019) , for example, trained a BiLSTM over 93 languages using parallel corpora, producing \"universal\" embeddings that were successfully used in various tasks.", "cite_spans": [ { "start": 314, "end": 334, "text": "(Ruder et al., 2019)", "ref_id": "BIBREF86" }, { "start": 485, "end": 511, "text": "Artetxe and Schwenk (2019)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Emerging Multilingual Structures", "sec_num": "2.2" }, { "text": "The same two approaches are being explored with large language models. In Conneau et al. (2020b) , monolingual BERT models that were trained separately for different languages produced similar (easily-aligned) representations. Pires et al. (2019) and Vuli\u0107 et al. (2020) further showed, as expected, that the similarity depends on the typological distance between the languages. Universal language-agnostic embeddings also emerge when training multilingual models, even when no explicit connection (such as parallel corpora or bilingual dictionaries) between the languages is used during training, such as in the case of mBERT.", "cite_spans": [ { "start": 74, "end": 96, "text": "Conneau et al. (2020b)", "ref_id": "BIBREF29" }, { "start": 227, "end": 246, "text": "Pires et al. (2019)", "ref_id": "BIBREF80" }, { "start": 251, "end": 270, "text": "Vuli\u0107 et al. (2020)", "ref_id": "BIBREF107" } ], "ref_spans": [], "eq_spans": [], "section": "Emerging Multilingual Structures", "sec_num": "2.2" }, { "text": "Multiple works have looked into the factors that contribute to successful transfer. These include domain and language similarity, shared parameters, and perhaps the most straightforward factor: common (sub-) words between the languages (Wu and Dredze, 2019; Conneau et al., 2020b; Pires et al., 2019) . Interestingly, Conneau et al. (2020b) and K et al. 
(2020) showed that the universal representations do not heavily depend on shared vocabulary; instead, multilinguality emerges from the sharing of parameters during training and from the structure of the network, and is affected by common characteristics of the languages, such as word order (Dufter and Sch\u00fctze, 2020) . Pires et al. (2019) discovered that mBERT can also successfully transfer between languages with different scripts, that its generalization goes beyond the lexical level, and that its syntactic feature representations overlap between languages. Still, Ahmad et al. (2021) have shown that augmenting mBERT with syntactic information can improve cross-lingual transfer performance.", "cite_spans": [ { "start": 235, "end": 256, "text": "(Wu and Dredze, 2019;", "ref_id": "BIBREF110" }, { "start": 257, "end": 279, "text": "Conneau et al., 2020b;", "ref_id": "BIBREF29" }, { "start": 280, "end": 299, "text": "Pires et al., 2019)", "ref_id": "BIBREF80" }, { "start": 317, "end": 339, "text": "Conneau et al. (2020b)", "ref_id": "BIBREF29" }, { "start": 344, "end": 359, "text": "K et al. (2020)", "ref_id": null }, { "start": 656, "end": 682, "text": "(Dufter and Sch\u00fctze, 2020)", "ref_id": "BIBREF39" }, { "start": 685, "end": 704, "text": "Pires et al. (2019)", "ref_id": "BIBREF80" }, { "start": 948, "end": 967, "text": "Ahmad et al. (2021)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Emerging Multilingual Structures", "sec_num": "2.2" }, { "text": "The size of each language's corpus in the language model's training set has been shown to be decisive for transfer to that language. 
Thus, low-resource languages often benefit more from joint training (Wu and Dredze, 2020) , while languages with abundant resources often achieve better performance when trained on their own (Nozza et al., 2020; Lewis et al., 2020) .", "cite_spans": [ { "start": 204, "end": 225, "text": "(Wu and Dredze, 2020)", "ref_id": "BIBREF111" }, { "start": 327, "end": 347, "text": "(Nozza et al., 2020;", "ref_id": "BIBREF71" }, { "start": 348, "end": 367, "text": "Lewis et al., 2020)", "ref_id": "BIBREF60" } ], "ref_spans": [], "eq_spans": [], "section": "Emerging Multilingual Structures", "sec_num": "2.2" }, { "text": "Training dynamics is an emerging field of research, promising to improve our understanding of knowledge acquisition in neural networks and offering insights into the utility of pre-trained models and embedded representations for downstream tasks. Most studies of Transformers (e.g. RoBERTa (Zhuang et al., 2021) ) and LSTMs (Hochreiter and Schmidhuber, 1997) agree that models acquire linguistic knowledge early in the learning process. Local syntactic information, such as parts of speech, is learned earlier than information encoding long-distance dependencies (e.g. topic) (Liu et al., 2021; Saphra, 2021) . 
Exploration of ALBERT (Lan et al., 2019) and LSTM-based networks reveals different learning patterns for function and content words, with more fine-grained distinctions within these categories, including part of speech and verb form (Saphra, 2021; Chiang et al., 2020) .", "cite_spans": [ { "start": 290, "end": 311, "text": "(Zhuang et al., 2021)", "ref_id": "BIBREF117" }, { "start": 324, "end": 358, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF50" }, { "start": 576, "end": 594, "text": "(Liu et al., 2021;", "ref_id": "BIBREF117" }, { "start": 595, "end": 608, "text": "Saphra, 2021)", "ref_id": "BIBREF89" }, { "start": 634, "end": 652, "text": "(Lan et al., 2019)", "ref_id": "BIBREF58" }, { "start": 843, "end": 857, "text": "(Saphra, 2021;", "ref_id": "BIBREF89" }, { "start": 858, "end": 878, "text": "Chiang et al., 2020)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Training Dynamics of Internal Representation Development", "sec_num": "2.3" }, { "text": "Differences in learning trajectory were also observed between layers. In LSTMs, recurrent layers become more task-independent over the course of training, while embeddings become more task-specific (Saphra, 2021) . In Transformer-based architectures, i.e., ALBERT and ELECTRA, Chiang et al. (2020) observe differences in performance patterns between the top and last layers. As in other areas of NLP research, most of the literature on training dynamics concentrates on English-language models. Another possible direction for future work is extending studies conducted on LSTMs to more widely used Transformers.", "cite_spans": [ { "start": 197, "end": 211, "text": "(Saphra, 2021)", "ref_id": "BIBREF89" } ], "ref_spans": [], "eq_spans": [], "section": "Training Dynamics of Internal Representation Development", "sec_num": "2.3" }, { "text": "Recent research has complicated the picture of grammar learning presented in Sections 2, 2.2, and 2.3. 
Specifically, there have been two separate but related types of critique leveled at probing and grammar learning. First, specific to probing, researchers question whether probes really identify linguistic representations at all. Second, and more fundamentally, it is unclear to what degree language models even learn grammar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Critique of Testing Methods", "sec_num": "2.4" }, { "text": "Hall Maudslay and Cotterell (2021) suggest that semantic \"cues\" may contaminate syntax probes, making it difficult to evaluate their scores. By employing \"Jabberwocky probing\", where pseudowords with no lexical meaning replace the original components of the sentence in a way that preserves grammar, the authors discovered that the performance of syntactic probes dropped considerably for large language models, calling into question whether syntactic probes actually isolate syntactic knowledge within language models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Critique of Testing Methods", "sec_num": "2.4" }, { "text": "A more fundamental issue for syntax learning in language models has been their performance when trained on perturbed or permuted data. Sinha et al. (2021) use a variety of word order permutations that preserve distributional information to isolate whether what language models learn is actually syntax. Word order has been assumed to be important not only for natural language understanding by humans but also by language models, particularly for learning syntax. Surprisingly then, word order appears to have less influence than one would expect on the downstream performance of language models and their performance on probing tasks. In part, the authors note, some syntactic information can be acquired during fine-tuning, sufficient to answer tasks that require it. 
Moreover, in the context of syntax probes, the authors note that \"while natural word order is useful for at least some probing tasks, the distributional prior of randomized models alone is enough to achieve a reasonably high accuracy on syntax sensitive probing\". Furthermore, the results distinguish between parametric and non-parametric probes, where performance on the latter using randomized models degrades significantly. This degradation provides evidence that non-parametric probes are able to test for syntax learning in ways that parametric probes cannot. Similarly, O'Connor and Andreas (2021) use syntax-level perturbations and ablations to conclude that the information in context windows most useful to language models consists of local ordering statistics and content words, e.g. nouns, verbs, adverbs, and adjectives. In other words, it does not appear that language models make use of syntactic or other structural information in the context window.", "cite_spans": [ { "start": 135, "end": 154, "text": "Sinha et al. (2021)", "ref_id": "BIBREF97" } ], "ref_spans": [], "eq_spans": [], "section": "Critique of Testing Methods", "sec_num": "2.4" }, { "text": "Despite recent probing studies providing a closer look at how linguistic structures are distributed in language models, it is an open question to what extent this knowledge acquisition differs from that of humans. 
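The perturbations of Sinha et al. (2021) are carefully controlled; the basic operation behind them — permuting word order within a sentence while leaving the bag of words (and thus unigram distributional statistics) intact — can be sketched as follows (a simplified stand-in; their actual scheme additionally controls for local n-gram statistics):

```python
import random

def permute_sentence(sentence: str, seed: int = 0) -> str:
    """Shuffle word order within a sentence, preserving the bag of words.

    A simplified illustration of word-order perturbation: unigram
    statistics are unchanged, while syntactic structure is destroyed.
    """
    words = sentence.split()
    rng = random.Random(seed)
    rng.shuffle(words)
    return " ".join(words)

original = "the cat sat on the mat"
shuffled = permute_sentence(original)
# Same multiset of words, different order.
assert sorted(shuffled.split()) == sorted(original.split())
```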
While grammatical structures tend to be learned much faster than downstream knowledge (Conneau et al., 2018) , there is still room for the study of more specific questions, such as whether models require more time to acquire the grammar of polysynthetic languages, as has been reported for humans (Kelly et al., 2014) .", "cite_spans": [ { "start": 300, "end": 322, "text": "(Conneau et al., 2018)", "ref_id": "BIBREF26" }, { "start": 511, "end": 531, "text": "(Kelly et al., 2014)", "ref_id": "BIBREF56" } ], "ref_spans": [], "eq_spans": [], "section": "Further Research", "sec_num": "2.5" }, { "text": "Another remaining open question is whether linguistic structure knowledge can be transferred between models via a neuron initialization mechanism (Durrani et al., 2021) . Since rough re-use of neurons has already proven helpful in model initialization (Sanh et al., 2019) , such neuron \"surgery\" could potentially lead to even quicker acquisition of grammatical knowledge.", "cite_spans": [ { "start": 150, "end": 172, "text": "(Durrani et al., 2021)", "ref_id": "BIBREF40" }, { "start": 253, "end": 272, "text": "(Sanh et al., 2019)", "ref_id": "BIBREF88" } ], "ref_spans": [], "eq_spans": [], "section": "Further Research", "sec_num": "2.5" }, { "text": "Generally speaking, the performance of multilingual models is inferior to that of monolingual ones, especially when enough resources are available. Yet, high-quality multilingual models remain a desired objective that can particularly benefit low-resource languages. Further understanding the factors that enable learning language-independent representations is key for developing better multilingual training or cross-lingual fine-tuning strategies, especially for transfer between less similar language pairs. 
A particularly interesting question is whether some tasks require more language-specific adaptation because, for instance, they depend on linguistic information that is currently not generalized well enough in multilingual LLMs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Further Research", "sec_num": "2.5" }, { "text": "3 Self-Organization and the Emergent Structure of Networks", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Further Research", "sec_num": "2.5" }, { "text": "Inspired by the architecture of biological neural networks (BNNs) and their adaptability to various tasks, where neurons and circuits are capable of self-organization, many researchers have investigated how Artificial Neural Networks (ANNs) can be seen as emergent structures, where interpretability of an ANN's parameters can help us to inspect their functional modularity. Broadly, researchers have approached this by identifying patterns in the weights or neurons, especially through subgraphs of the network. Branch specialization is the organization of branches -or \"sequences of layers which temporarily don't have access to 'parallel' information which is still passed to later layers\" (Voss et al., 2021) -of the network into functional units, across different architectures and tasks (Bunel et al., 2020; Voss et al., 2021; R\u00f6ssig and Petkovic, 2021) . It is somewhat similar to how neurons are connected by synapses, forming small functional units called neural circuits that can be specialized for specific tasks, such as to \"mediate reflexes, process sensory information, generate locomotion and mediate learning and memory\" (Byrne et al., 2012; Luo, 2021) . In their work on AlexNet, Voss et al. (2021) provided initial evidence of self-organization of neurons and circuits (subgraphs) into functional units in a neural network. This self-organized emergent structure is consistent \"across different architectures and tasks\". 
A look at evolving neural structures gives another perspective. Inspired by neural architecture search (NAS), So et al. (2019) presented \"a first neural architecture search conducted to find improved feedforward sequence models\", where the search space contains five branch-level search fields. Recently, So et al. (2021) introduced Primer (PRIMitives searched TransformER), which improves performance in both the pre-training and one-shot downstream transfer regimes. However, branches are used only for the initial multi-head attention.", "cite_spans": [ { "start": 692, "end": 711, "text": "(Voss et al., 2021)", "ref_id": null }, { "start": 792, "end": 811, "text": "Bunel et al., 2020;", "ref_id": null }, { "start": 812, "end": 830, "text": "Voss et al., 2021;", "ref_id": null }, { "start": 831, "end": 857, "text": "R\u00f6ssig and Petkovic, 2021)", "ref_id": "BIBREF85" }, { "start": 1136, "end": 1156, "text": "(Byrne et al., 2012;", "ref_id": "BIBREF19" }, { "start": 1157, "end": 1167, "text": "Luo, 2021)", "ref_id": "BIBREF66" }, { "start": 1196, "end": 1214, "text": "Voss et al. (2021)", "ref_id": null }, { "start": 1548, "end": 1564, "text": "So et al. (2019)", "ref_id": "BIBREF98" }, { "start": 1743, "end": 1759, "text": "So et al. (2021)", "ref_id": "BIBREF99" } ], "ref_spans": [], "eq_spans": [], "section": "Network Structure", "sec_num": "3.1" }, { "text": "Weight banding is the uniformity in the organization of the weights in a final layer. In neural networks, weights are parameters that can transform the input data between the network's hidden layers. Weight banding resembles another biological phenomenon, in which a neuron multiplies each input by a synaptic weight, a number reflecting the importance assigned to that input; the weighted inputs are then summed to form the neuron's output (Iyer et al., 2013) . Petrov et al. (2021) note that many vision models display a uniform pattern in their final layer. 
They investigate the nature of this structural phenomenon, connecting it ultimately to architectural choices in the network and noting that weight banding can serve as a method of preserving spatial information.", "cite_spans": [ { "start": 476, "end": 495, "text": "(Iyer et al., 2013)", "ref_id": "BIBREF53" }, { "start": 498, "end": 518, "text": "Petrov et al. (2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Network Structure", "sec_num": "3.1" }, { "text": "Clustering is the grouping of neurons or subnetworks into units that can be used for specific tasks . Because modular systems are easier to understand when the function of individual modules can be inspected, different clustering methods for neural networks have been proposed. designed a modular neural network based on feature clustering to decompose features into clusters, with each module processing different features. These modules work in parallel for a single task. proposed a spectral clustering algorithm for decomposition of trained networks into clusters, finding that networks exhibit some degree of modularity, and suggested further work on clusterability in various domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Network Structure", "sec_num": "3.1" }, { "text": "Modularity focuses on the reusability of subnetworks for multiple tasks (Happel and Murre, 1994; Shukla et al., 2010; Csord\u00e1s et al., 2021) . In Csord\u00e1s et al. (2021) , neural networks trained on algorithmic tasks appear to fail to learn general, modular, compositional algorithms, and instead require specific subsets of weights to handle particular combinations of the input tokens. With these findings, Csord\u00e1s et al. (2021) suggest further research about \"function dependent weight sharing in the neural networks\". Reusable multi-task subnetworks may also be discovered via Neural Architecture Search (NAS) methods (Pham et al., 2018) . 
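The graph-based intuition behind such clustering work can be made concrete with a toy spectral decomposition (our own sketch under simplifying assumptions, not the method of any particular paper): treating weight magnitudes as edge weights, a spectral bipartition recovers planted modules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": 8 neurons whose connection strengths form two modules
# (neurons 0-3 and 4-7), plus weak cross-module noise.
n = 8
W = 0.05 * rng.random((n, n))
W[:4, :4] += 1.0
W[4:, 4:] += 1.0

# Symmetric affinity graph built from weight magnitudes.
A = np.abs(W) + np.abs(W).T
np.fill_diagonal(A, 0.0)

# Spectral bipartition: the sign of the Fiedler vector (eigenvector of
# the second-smallest eigenvalue of the graph Laplacian) assigns each
# neuron to one of two clusters.
D = np.diag(A.sum(axis=1))
L = D - A
eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
fiedler = eigvecs[:, 1]
clusters = (fiedler > 0).astype(int)
print(clusters)  # the two planted modules should separate
```

With strongly modular weights, the Fiedler vector is approximately constant within each module with opposite signs across modules, so the sign split recovers the planted structure.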
Pasunuru and Bansal (2019) leverage a technique called multitask architecture search (MAS) to find multi-task cell structures in RNNs, capable of generalization to unseen tasks.", "cite_spans": [ { "start": 72, "end": 96, "text": "(Happel and Murre, 1994;", "ref_id": "BIBREF46" }, { "start": 97, "end": 117, "text": "Shukla et al., 2010;", "ref_id": null }, { "start": 118, "end": 139, "text": "Csord\u00e1s et al., 2021)", "ref_id": "BIBREF30" }, { "start": 145, "end": 166, "text": "Csord\u00e1s et al. (2021)", "ref_id": "BIBREF30" }, { "start": 393, "end": 414, "text": "Csord\u00e1s et al. (2021)", "ref_id": "BIBREF30" }, { "start": 606, "end": 625, "text": "(Pham et al., 2018)", "ref_id": "BIBREF76" }, { "start": 628, "end": 654, "text": "Pasunuru and Bansal (2019)", "ref_id": "BIBREF73" } ], "ref_spans": [], "eq_spans": [], "section": "Network Structure", "sec_num": "3.1" }, { "text": "Understanding the change in network structure over time is as important as identifying structure in trained models. Here, the focus is on how the parameters of the model change over the course of training, which can give insight into the types of inductive biases that develop and shed light on the nature of LLMs' abilities to generalize. The most recent work covering this in the context of LLMs focuses on parameter norm growth, which refers to the growth of the \u2113 2 norm during training time. According to Merrill et al. (2021) , neural networks learn successfully due to inductive biases introduced during training. Norm growth induces saturation in Transformer models, which reduces the attention heads to \"generalized hard attention\". 
The authors find that argmax and mean computations are reducible to saturated attention, which partially explains why saturated Transformer models can learn counter languages (a class of formal languages); saturation may also play a broader role in explaining their generalization abilities.", "cite_spans": [ { "start": 518, "end": 539, "text": "Merrill et al. (2021)", "ref_id": "BIBREF70" } ], "ref_spans": [], "eq_spans": [], "section": "Training Dynamics of Network Changes", "sec_num": "3.2" }, { "text": "As we have noted, most of the work on network structure is currently outside of NLP, either dealing with general ANNs or specific to Computer Vision with AlexNet and general convolutional networks trained on ImageNet (Voss et al., 2021; Petrov et al., 2021) . This work should be replicated in the context of LLMs to test for the existence of language-specific functional units and, more generally, determine whether there are internal network structures that support the learned representations we discuss in Section 2. Likewise, since this research is still in its infancy, it is focused on simple emergent structures. 
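The saturation effect described by Merrill et al. (2021) can be illustrated directly on the softmax: scaling the attention logits by a growing factor (standing in for a growing parameter norm) drives the distribution toward hard, argmax-like attention, and tied maximal logits yield a uniform average over the tie.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.5])

# As the scale c grows, attention saturates toward a one-hot argmax.
for c in [1, 10, 100]:
    print(c, np.round(softmax(c * logits), 3))

# With tied maximal logits, saturated attention spreads uniformly over
# the tie -- which is how a mean over selected positions can be computed.
tied = np.array([3.0, 3.0, 0.0])
print(np.round(softmax(100 * tied), 3))  # ~[0.5, 0.5, 0.0]
```

This is only a toy illustration of the limiting behavior, not the formal reduction given in the paper.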
Future research can incorporate higher-order emergent structures (Baas, 2000) , new methods of structure detection in networks (Aktas et al., 2019) , and even detection of structures whose form is not explicitly specified (Shalizi et al., 2006) .", "cite_spans": [ { "start": 217, "end": 236, "text": "(Voss et al., 2021;", "ref_id": null }, { "start": 237, "end": 257, "text": "Petrov et al., 2021)", "ref_id": null }, { "start": 685, "end": 697, "text": "(Baas, 2000)", "ref_id": "BIBREF6" }, { "start": 747, "end": 767, "text": "(Aktas et al., 2019)", "ref_id": "BIBREF3" }, { "start": 842, "end": 864, "text": "(Shalizi et al., 2006)", "ref_id": "BIBREF92" } ], "ref_spans": [], "eq_spans": [], "section": "Further Research", "sec_num": "3.3" }, { "text": "Additionally, by viewing the neural networks in question as time-evolving complex systems, we can leverage older research on self-organization that has yet to be applied to understanding LLMs. In particular, Ball et al. (2010) provide a method for quantifying self-organization based on persistent mutual information. Likewise, Shalizi et al. (2004) ground self-organization in information theory and Shalizi (2003) extends this method to a general class of undirected graphs. Methods such as these can be used to identify and quantify self-organization in LLMs and better understand their emergent behavior.", "cite_spans": [ { "start": 207, "end": 225, "text": "Ball et al. (2010)", "ref_id": null }, { "start": 317, "end": 348, "text": "Likewise, Shalizi et al. (2004)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Further Research", "sec_num": "3.3" }, { "text": "Explainable AI (XAI)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Connecting Structure to Function:", "sec_num": "4" }, { "text": "The rapid increase in the adoption of AI models in recent years and their growing impact on human lives has created a need for techniques that offer insight into the models' internal operations. 
Since attention-based models (Vaswani et al., 2017) have become state-of-the-art tools in NLP, there have been numerous attempts to provide some understanding of their predictions by visualizing the attention layer. However, these approaches have been criticized for their inability to produce meaningful and coherent interpretations (Wiegreffe and Pinter, 2019; Bastings and Filippova, 2020; Serrano and Smith, 2019) . To address these limitations, Ghaeini et al. (2018) examine the saliency of attention and LSTM gating signals in the intermediate layers of ESIM models, an architecture designed for natural language inference tasks (Chen et al., 2017) . Their results show that visualizing attention saliency makes it possible to identify which parts of the premise and hypothesis contribute most to the final score. Moreover, attention saliency maps compared across different ESIM models reveal differences in focus that reflect the differences in their predictions. According to this study, using saliency is much more effective than using attention alone.", "cite_spans": [ { "start": 219, "end": 241, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF104" }, { "start": 524, "end": 552, "text": "(Wiegreffe and Pinter, 2019;", "ref_id": "BIBREF109" }, { "start": 553, "end": 582, "text": "Bastings and Filippova, 2020;", "ref_id": "BIBREF9" }, { "start": 583, "end": 607, "text": "Serrano and Smith, 2019)", "ref_id": "BIBREF90" }, { "start": 640, "end": 661, "text": "Ghaeini et al. (2018)", "ref_id": "BIBREF44" }, { "start": 824, "end": 843, "text": "(Chen et al., 2017)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Connecting Structure to Function:", "sec_num": "4" }, { "text": "Another approach to revealing how decisions are formed across network layers is erasure, where features are deemed irrelevant if their removal has a minor effect on the prediction. De Cao et al. 
(2020) extend this method to learned masking and adapt it to measure the importance of intermediate states rather than the inputs. They run the proposed DIFFMASK method on BERT (Devlin et al., 2019) and find that separator tokens play an important role in the input layer for question answering but not for sentiment classification, a task where adjectives and nouns are kept for much longer. Given that separators serve as delimiters between the question and the context, these differences shed light on the connection between the internal latent structure and the task, marking a step toward gaining some understanding of the information flow in the model.", "cite_spans": [ { "start": 374, "end": 395, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Connecting Structure to Function:", "sec_num": "4" }, { "text": "Applying neural models to the NLP domain poses specific challenges. This opens the way for research on the extent to which language-specific characteristics, such as compositionality of meaning, are reflected in the internal representations of neural networks. The work by Li et al. (2016) leverages several methods, including variance-based and first-derivative saliency (a technique inspired by back-propagation), to study how models deal with compositionality of meaning, e.g., negation, intensification and combining meaning from different parts of the sentence. The study of recurrent, LSTM and bi-LSTM networks across time steps finds that, as decoding proceeds, the task (language modelling) gradually prevails over building word representations.", "cite_spans": [ { "start": 273, "end": 289, "text": "Li et al. (2016)", "ref_id": "BIBREF61" } ], "ref_spans": [], "eq_spans": [], "section": "Connecting Structure to Function:", "sec_num": "4" }, { "text": "An integrated gradients (Sundararajan et al., 2017) based method of finding neurons that encode individual facts has been proposed by . 
This approach builds on the observation that large pre-trained language models can remember factual knowledge from the training corpus. The authors find that knowledge neurons are located in the feed-forward network of BERT and view these two-layer perceptron modules as knowledge memories in the Transformer architecture. The method allows for explicit editing of specific factual knowledge by manipulating the corresponding knowledge neurons, with only a moderate influence on unrelated knowledge. These findings are in line with work by Meng et al. (2022) that localizes factual knowledge to the feed-forward layer. Further, this approach makes a distinction between the notions of knowing and saying a fact and concludes that, while the feed-forward layers encode the former, the latter is attended to by late self-attention.", "cite_spans": [ { "start": 24, "end": 51, "text": "(Sundararajan et al., 2017)", "ref_id": "BIBREF102" }, { "start": 677, "end": 695, "text": "Meng et al. (2022)", "ref_id": "BIBREF69" } ], "ref_spans": [], "eq_spans": [], "section": "Connecting Structure to Function:", "sec_num": "4" }, { "text": "Other approaches, e.g. SHAP, DeepLift and LIME (Lundberg and Lee, 2017; Shrikumar et al., 2019; Ribeiro et al., 2016) can reveal dependencies missed by the methods discussed here. In NLP, the key challenges include performance and, where applicable, choosing an adequate baseline for word embeddings. The dynamic progress of research in natural language processing has led researchers to review and analyze existing methods of interpreting neural models (Belinkov and Glass, 2019; Danilevsky et al., 2020) . 
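Integrated gradients attributes a prediction to input features by averaging gradients along a straight-line path from a baseline to the input. A self-contained sketch on a toy differentiable function (the function and its analytic gradient here are illustrative stand-ins; in a real model both would come from automatic differentiation):

```python
import numpy as np

def f(x):
    # Toy "model": a smooth scalar function of two input features.
    return np.tanh(2.0 * x[0]) + x[1] ** 2

def grad_f(x):
    # Analytic gradient of f.
    return np.array([2.0 / np.cosh(2.0 * x[0]) ** 2, 2.0 * x[1]])

def integrated_gradients(x, baseline, steps=100):
    # Average the gradient at points interpolated between baseline and x
    # (midpoint rule), then scale by (x - baseline). By the completeness
    # property, attributions approximately sum to f(x) - f(baseline).
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.mean(
        [grad_f(baseline + a * (x - baseline)) for a in alphas], axis=0
    )
    return (x - baseline) * grads

x = np.array([1.0, 0.5])
baseline = np.zeros(2)
attr = integrated_gradients(x, baseline)
print(attr, attr.sum(), f(x) - f(baseline))  # sums should nearly match
```

The completeness property (attributions summing to the difference in outputs) is what makes the resulting per-feature, or per-neuron, scores interpretable as shares of the prediction.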
As the emerging field of explainable AI (XAI) grows rapidly, a path is opening up for research and discussion on the desired evaluation criteria for interpretation methods (Jacovi and Goldberg, 2020) .", "cite_spans": [ { "start": 61, "end": 71, "text": "Lee, 2017;", "ref_id": "BIBREF65" }, { "start": 72, "end": 95, "text": "Shrikumar et al., 2019;", "ref_id": "BIBREF95" }, { "start": 96, "end": 117, "text": "Ribeiro et al., 2016)", "ref_id": "BIBREF83" }, { "start": 454, "end": 480, "text": "(Belinkov and Glass, 2019;", "ref_id": "BIBREF11" }, { "start": 481, "end": 505, "text": "Danilevsky et al., 2020)", "ref_id": "BIBREF34" }, { "start": 692, "end": 719, "text": "(Jacovi and Goldberg, 2020)", "ref_id": "BIBREF54" } ], "ref_spans": [], "eq_spans": [], "section": "Connecting Structure to Function:", "sec_num": "4" }, { "text": "In this paper, we provide an overview of research on network structure, linguistic feature learning, their training dynamics, and explainability research that aims to connect network structure and function. In doing so, we highlight gaps in the literature and opportunities for future research, both in each individual research area and as a broad proposal for grounding research in understanding large language models. We highlight a few areas of future research as particularly important given the gaps in the current literature. For the study of how, and whether, linguistic structures are learned by language models, more work is needed to understand the training dynamics of this learning across a variety of model scales and architectures. 
More fundamentally, there is disagreement about what it means for a model to \"encode\" linguistic structures such as syntax, particularly in a multilingual setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Directions", "sec_num": "5" }, { "text": "More broadly, nascent work on the self-organization of neurons and subnetwork structures that emerge during training time has largely not been applied to LLMs, or neural networks in NLP more generally. Research in Computer Vision has shown the existence of emergent functional units whose functions are semantically meaningful to humans. In the context of LLMs, such structures may provide a basis for understanding the nature of linguistic features that LLMs purportedly learn, especially when comparing the development of each during training time. Additional research is needed not only to determine whether such structures emerge in LLMs, but also to apply and extend the literature on self-organization in complex systems. This research can also be used for explainability. Currently, assessment of the quality of interpretations of the information flow in neural models is not straightforward. Identification of modular and emergent structures within networks may be viewed as a way of moving away from the binary definition of faithfulness as postulated by Jacovi and Goldberg (2020) . Evidence for the existence of structures aligning with human perception of language, if found, can help enable separate consideration of plausibility from a human perspective, as proposed in the same study. 
More broadly, we propose grounding the study of LLMs' properties in the analysis of the self-organization of weights and neurons into emergent structures.", "cite_spans": [ { "start": 1068, "end": 1094, "text": "Jacovi and Goldberg (2020)", "ref_id": "BIBREF54" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Directions", "sec_num": "5" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The theory of emergence. Philosophy of Science", "authors": [ { "first": "Reuben", "middle": [], "last": "Ablowitz", "suffix": "" } ], "year": 1939, "venue": "", "volume": "6", "issue": "", "pages": "1--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reuben Ablowitz. 1939. The theory of emergence. Philosophy of Science, 6(1):1-16.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Fine-grained analysis of sentence embeddings using auxiliary prediction tasks", "authors": [ { "first": "Yossi", "middle": [], "last": "Adi", "suffix": "" }, { "first": "Einat", "middle": [], "last": "Kermany", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Ofer", "middle": [], "last": "Lavi", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. 
OpenReview.net.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Syntax-augmented multilingual BERT for cross-lingual transfer", "authors": [ { "first": "Wasi", "middle": [], "last": "Ahmad", "suffix": "" }, { "first": "Haoran", "middle": [], "last": "Li", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Yashar", "middle": [], "last": "Mehdad", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "4538--4554", "other_ids": { "DOI": [ "10.18653/v1/2021.acl-long.350" ] }, "num": null, "urls": [], "raw_text": "Wasi Ahmad, Haoran Li, Kai-Wei Chang, and Yashar Mehdad. 2021. Syntax-augmented multilingual BERT for cross-lingual transfer. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4538-4554, Online. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Persistence homology of networks: methods and applications. Applied Network Science", "authors": [ { "first": "Mehmet", "middle": [], "last": "Aktas", "suffix": "" }, { "first": "Esra", "middle": [], "last": "Akbas", "suffix": "" }, { "first": "Ahmed", "middle": [ "El" ], "last": "Fatmaoui", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1007/s41109-019-0179-3" ] }, "num": null, "urls": [], "raw_text": "Mehmet Aktas, Esra Akbas, and Ahmed El Fatmaoui. 2019. Persistence homology of networks: methods and applications. 
Applied Network Science, 4.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Understanding intermediate layers using linear classifier probes", "authors": [ { "first": "Guillaume", "middle": [], "last": "Alain", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Alain and Yoshua Bengio. 2017. Understanding intermediate layers using linear classifier probes. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceedings. OpenReview.net.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" } ], "year": 2019, "venue": "Transactions of the Association for Computational Linguistics", "volume": "7", "issue": "", "pages": "597--610", "other_ids": { "DOI": [ "10.1162/tacl_a_00288" ] }, "num": null, "urls": [], "raw_text": "Mikel Artetxe and Holger Schwenk. 2019. Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond. Transactions of the Association for Computational Linguistics, 7:597-610.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Emergence, hierarchies, and hyperstructures", "authors": [ { "first": "A", "middle": [], "last": "Nils", "suffix": "" }, { "first": "", "middle": [], "last": "Baas", "suffix": "" } ], "year": 2000, "venue": "Artificial Life III", "volume": "", "issue": "", "pages": "515--537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nils A. Baas. 2000. Emergence, hierarchies, and hyperstructures. In Christopher G. 
Langton, editor, Artificial Life III, pages 515-537. Addison-Wesley Longman Publishing Co., Inc., USA.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Quantifying emergence in terms of persistent mutual information", "authors": [ { "first": "", "middle": [], "last": "Mackay", "suffix": "" } ], "year": 2010, "venue": "Adv. Complex Syst", "volume": "13", "issue": "", "pages": "327--338", "other_ids": {}, "num": null, "urls": [], "raw_text": "MacKay. 2010. Quantifying emergence in terms of persistent mutual information. Adv. Complex Syst., 13:327-338.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The elephant in the interpretability room: Why use attention as explanation when we have saliency methods?", "authors": [ { "first": "Jasmijn", "middle": [], "last": "Bastings", "suffix": "" }, { "first": "Katja", "middle": [], "last": "Filippova", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "149--155", "other_ids": { "DOI": [ "10.18653/v1/2020.blackboxnlp-1.14" ] }, "num": null, "urls": [], "raw_text": "Jasmijn Bastings and Katja Filippova. 2020. The elephant in the interpretability room: Why use attention as explanation when we have saliency methods? In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 149-155, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "What do neural machine translation models learn about morphology?", "authors": [ { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Nadir", "middle": [], "last": "Durrani", "suffix": "" }, { "first": "Fahim", "middle": [], "last": "Dalvi", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "861--872", "other_ids": { "DOI": [ "10.18653/v1/P17-1080" ] }, "num": null, "urls": [], "raw_text": "Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017a. What do neural machine translation models learn about morphology? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 861-872, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Analysis methods in neural language processing: A survey", "authors": [ { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" } ], "year": 2019, "venue": "Transactions of the Association for Computational Linguistics", "volume": "7", "issue": "", "pages": "49--72", "other_ids": { "DOI": [ "10.1162/tacl_a_00254" ] }, "num": null, "urls": [], "raw_text": "Yonatan Belinkov and James Glass. 2019. Analysis methods in neural language processing: A survey. 
Transactions of the Association for Computational Linguistics, 7:49-72.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Evaluating layers of representation in neural machine translation on part-of-speech and semantic tagging tasks", "authors": [ { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Llu\u00eds", "middle": [], "last": "M\u00e0rquez", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "" }, { "first": "Nadir", "middle": [], "last": "Durrani", "suffix": "" }, { "first": "Fahim", "middle": [], "last": "Dalvi", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yonatan Belinkov, Llu\u00eds M\u00e0rquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James Glass. 2017b. Evaluating layers of representation in neural machine translation on part-of-speech and semantic tagging tasks. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1-10, Taipei, Taiwan. 
Asian Federation of Natural Language Processing.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": { "DOI": [ "10.1162/tacl_a_00051" ] }, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Emergence in the central nervous system", "authors": [ { "first": "Steven", "middle": [ "Ravett" ], "last": "Brown", "suffix": "" } ], "year": 2013, "venue": "Cognitive Neurodynamics", "volume": "7", "issue": "3", "pages": "", "other_ids": { "DOI": [ "10.1007/s11571-012-9229-6" ] }, "num": null, "urls": [], "raw_text": "Steven Ravett Brown. 2013. Emergence in the central nervous system. Cognitive Neurodynamics, 7(3).", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "
Language models are few-shot learners", "authors": [ { "first": "Tom", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Mann", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Ryder", "suffix": "" }, { "first": "Melanie", "middle": [], "last": "Subbiah", "suffix": "" }, { "first": "Jared", "middle": [ "D" ], "last": "Kaplan", "suffix": "" }, { "first": "Prafulla", "middle": [], "last": "Dhariwal", "suffix": "" }, { "first": "Arvind", "middle": [], "last": "Neelakantan", "suffix": "" }, { "first": "Pranav", "middle": [], "last": "Shyam", "suffix": "" }, { "first": "Girish", "middle": [], "last": "Sastry", "suffix": "" }, { "first": "Amanda", "middle": [], "last": "Askell", "suffix": "" }, { "first": "Sandhini", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Ariel", "middle": [], "last": "Herbert-Voss", "suffix": "" }, { "first": "Gretchen", "middle": [], "last": "Krueger", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Henighan", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Ramesh", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Ziegler", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Clemens", "middle": [], "last": "Winter", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Hesse", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Sigler", "suffix": "" }, { "first": "Mateusz", "middle": [], "last": "Litwin", "suffix": "" } ], "year": null, "venue": "Advances in Neural Information Processing Systems", "volume": "33", "issue": "", "pages": "1877--1901", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini 
Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Branch and bound for piecewise linear neural network verification", "authors": [ { "first": "Rudy", "middle": [], "last": "Bunel", "suffix": "" }, { "first": "Ilker", "middle": [], "last": "Turkaslan", "suffix": "" }, { "first": "Philip", "middle": [ "H", "S" ], "last": "Torr", "suffix": "" }, { "first": "M", "middle": [ "Pawan" ], "last": "Kumar", "suffix": "" }, { "first": "Jingyue", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Pushmeet", "middle": [], "last": "Kohli", "suffix": "" } ], "year": 2020, "venue": "Journal of Machine Learning Research", "volume": "21", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rudy Bunel, Ilker Turkaslan, Philip H.S. Torr, M. Pawan Kumar, Jingyue Lu, and Pushmeet Kohli. 2020. Branch and bound for piecewise linear neural network verification. 
Journal of Machine Learning Research, 21.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Introduction to Neurons and Neuronal Networks", "authors": [ { "first": "John", "middle": [ "H" ], "last": "Byrne", "suffix": "" } ], "year": 2012, "venue": "Cellular and Molecular Neurobiology", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John H. Byrne. 2012. Introduction to Neurons and Neuronal Networks. Cellular and Molecular Neurobiology, 1.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Modularity: Understanding the Development and Evolution of Natural Complex Systems", "authors": [ { "first": "Werner", "middle": [], "last": "Callebaut", "suffix": "" }, { "first": "Diego", "middle": [], "last": "Rasskin-Gutman", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.7551/mitpress/4734.001.0001" ] }, "num": null, "urls": [], "raw_text": "Werner Callebaut and Diego Rasskin-Gutman. 2005. Modularity: Understanding the Development and Evolution of Natural Complex Systems. 
The MIT Press.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Facebook AI's WMT20 news translation task submission", "authors": [ { "first": "Peng-Jen", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Changhan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Mary", "middle": [], "last": "Williamson", "suffix": "" }, { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fifth Conference on Machine Translation", "volume": "", "issue": "", "pages": "113--125", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peng-Jen Chen, Ann Lee, Changhan Wang, Naman Goyal, Angela Fan, Mary Williamson, and Jiatao Gu. 2020. Facebook AI's WMT20 news translation task submission. In Proceedings of the Fifth Conference on Machine Translation, pages 113-125, Online. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Enhanced LSTM for natural language inference", "authors": [ { "first": "Qian", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Zhen-Hua", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Si", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Inkpen", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1657--1668", "other_ids": { "DOI": [ "10.18653/v1/P17-1152" ] }, "num": null, "urls": [], "raw_text": "Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for natural language inference. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1657-1668, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Finding universal grammatical relations in multilingual BERT", "authors": [ { "first": "Ethan", "middle": [ "A" ], "last": "Chi", "suffix": "" }, { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.493" ] }, "num": null, "urls": [], "raw_text": "Ethan A. Chi, John Hewitt, and Christopher D. Manning. 2020. Finding universal grammatical relations in multilingual BERT. In Proceedings of the 58th", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "5564--5577", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 5564-5577, Online. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Pretrained language model embryology: The birth of ALBERT", "authors": [ { "first": "Cheng-Han", "middle": [], "last": "Chiang", "suffix": "" }, { "first": "Sung-Feng", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Hung-Yi", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "6813--6828", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.553" ] }, "num": null, "urls": [], "raw_text": "Cheng-Han Chiang, Sung-Feng Huang, and Hung-yi Lee. 2020. 
Pretrained language model embryology: The birth of ALBERT. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6813-6828, Online. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "German", "middle": [], "last": "Kruszewski", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2126--2136", "other_ids": { "DOI": [ "10.18653/v1/P18-1198" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, German Kruszewski, Guillaume Lample, Lo\u00efc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126-2136, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Crosslingual language model pretraining", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining. 
In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Emerging cross-lingual structure in pretrained language models", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Shijie", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Haoran", "middle": [], "last": "Li", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "2020", "issue": "", "pages": "6022--6034", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.536" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettlemoyer, and Veselin Stoyanov. 2020a. Emerging cross-lingual structure in pretrained language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 6022-6034. 
Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Emerging cross-lingual structure in pretrained language models", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Shijie", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Haoran", "middle": [], "last": "Li", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6022--6034", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.536" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettle- moyer, and Veselin Stoyanov. 2020b. Emerging cross-lingual structure in pretrained language mod- els. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6022-6034, Online. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Are neural nets modular? inspecting functional modularity through differentiable weight masks", "authors": [ { "first": "R\u00f3bert", "middle": [], "last": "Csord\u00e1s", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Sjoerd Van Steenkiste", "suffix": "" }, { "first": "", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 2021, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R\u00f3bert Csord\u00e1s, Sjoerd van Steenkiste, and J\u00fcrgen Schmidhuber. 2021. Are neural nets modular? in- specting functional modularity through differentiable weight masks. 
In International Conference on Learning Representations.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Emergent behavior in flocks", "authors": [ { "first": "Felipe", "middle": [], "last": "Cucker", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Smale", "suffix": "" } ], "year": 2007, "venue": "IEEE Transactions on Automatic Control", "volume": "52", "issue": "5", "pages": "852--862", "other_ids": { "DOI": [ "10.1109/TAC.2007.895842" ] }, "num": null, "urls": [], "raw_text": "Felipe Cucker and Steve Smale. 2007. Emergent behavior in flocks. IEEE Transactions on Automatic Control, 52(5):852-862.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Knowledge neurons in pretrained transformers", "authors": [ { "first": "Damai", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Yaru", "middle": [], "last": "Hao", "suffix": "" }, { "first": "Zhifang", "middle": [], "last": "Sui", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" } ], "year": 2021, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, and Furu Wei. 2021. Knowledge neurons in pretrained transformers. ArXiv, abs/2104.08696.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Lexical functional grammar", "authors": [ { "first": "Mary", "middle": [], "last": "Dalrymple", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mary Dalrymple. 2001. Lexical functional grammar. 
Brill.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "A survey of the state of explainable AI for natural language processing", "authors": [ { "first": "Marina", "middle": [], "last": "Danilevsky", "suffix": "" }, { "first": "Kun", "middle": [], "last": "Qian", "suffix": "" }, { "first": "Ranit", "middle": [], "last": "Aharonov", "suffix": "" }, { "first": "Yannis", "middle": [], "last": "Katsis", "suffix": "" }, { "first": "Ban", "middle": [], "last": "Kawas", "suffix": "" }, { "first": "Prithviraj", "middle": [], "last": "Sen", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "447--459", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marina Danilevsky, Kun Qian, Ranit Aharonov, Yannis Katsis, Ban Kawas, and Prithviraj Sen. 2020. A survey of the state of explainable AI for natural language processing. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 447-459, Suzhou, China. Association for Computational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "How do decisions emerge across layers in neural models? 
interpretation with differentiable masking", "authors": [ { "first": "Nicola", "middle": [], "last": "De Cao", "suffix": "" }, { "first": "Michael", "middle": [ "Sejr" ], "last": "Schlichtkrull", "suffix": "" }, { "first": "Wilker", "middle": [], "last": "Aziz", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "3243--3255", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.262" ] }, "num": null, "urls": [], "raw_text": "Nicola De Cao, Michael Sejr Schlichtkrull, Wilker Aziz, and Ivan Titov. 2020. How do decisions emerge across layers in neural models? interpretation with differentiable masking. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3243-3255, Online. Association for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Multilingual bert", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin. 2018. 
Multilingual bert.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Seven properties of self-organization in the human brain", "authors": [ { "first": "Birgitta", "middle": [], "last": "Dresp-Langley", "suffix": "" } ], "year": 2020, "venue": "Big Data and Cognitive Computing", "volume": "4", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.3390/bdcc4020010" ] }, "num": null, "urls": [], "raw_text": "Birgitta Dresp-Langley. 2020. Seven properties of self-organization in the human brain. 
Big Data and Cognitive Computing, 4(2).", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Identifying elements essential for BERT's multilinguality", "authors": [ { "first": "Philipp", "middle": [], "last": "Dufter", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "4423--4437", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.358" ] }, "num": null, "urls": [], "raw_text": "Philipp Dufter and Hinrich Sch\u00fctze. 2020. Identifying elements essential for BERT's multilinguality. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4423-4437, Online. Association for Computational Linguistics.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "How transfer learning impacts linguistic knowledge in deep NLP models?", "authors": [ { "first": "Nadir", "middle": [], "last": "Durrani", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "" }, { "first": "Fahim", "middle": [], "last": "Dalvi", "suffix": "" } ], "year": 2021, "venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", "volume": "", "issue": "", "pages": "4947--4957", "other_ids": { "DOI": [ "10.18653/v1/2021.findings-acl.438" ] }, "num": null, "urls": [], "raw_text": "Nadir Durrani, Hassan Sajjad, and Fahim Dalvi. 2021. How transfer learning impacts linguistic knowledge in deep NLP models? In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4947-4957, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Analyzing individual neurons in pre-trained language models", "authors": [ { "first": "Nadir", "middle": [], "last": "Durrani", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "" }, { "first": "Fahim", "middle": [], "last": "Dalvi", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2010.02695" ] }, "num": null, "urls": [], "raw_text": "Nadir Durrani, Hassan Sajjad, Fahim Dalvi, and Yonatan Belinkov. 2020. Analyzing individual neurons in pre-trained language models. arXiv preprint arXiv:2010.02695.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Clusterability in neural networks", "authors": [ { "first": "Daniel", "middle": [], "last": "Filan", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Casper", "suffix": "" }, { "first": "Shlomi", "middle": [], "last": "Hod", "suffix": "" }, { "first": "Cody", "middle": [], "last": "Wild", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Critch", "suffix": "" }, { "first": "Stuart", "middle": [], "last": "Russell", "suffix": "" } ], "year": 2021, "venue": "CoRR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Filan, Stephen Casper, Shlomi Hod, Cody Wild, Andrew Critch, and Stuart Russell. 2021. Clusterability in neural networks. 
CoRR, abs/2103.03386.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Self-Organization and Artificial Life", "authors": [ { "first": "Carlos", "middle": [], "last": "Gershenson", "suffix": "" }, { "first": "Vito", "middle": [], "last": "Trianni", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Werfel", "suffix": "" }, { "first": "Hiroki", "middle": [], "last": "Sayama", "suffix": "" } ], "year": 2020, "venue": "Artificial Life", "volume": "26", "issue": "3", "pages": "391--408", "other_ids": { "DOI": [ "10.1162/artl_a_00324" ] }, "num": null, "urls": [], "raw_text": "Carlos Gershenson, Vito Trianni, Justin Werfel, and Hiroki Sayama. 2020. Self-Organization and Artificial Life. Artificial Life, 26(3):391-408.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Interpreting recurrent and attention-based neural models: a case study on natural language inference", "authors": [ { "first": "Reza", "middle": [], "last": "Ghaeini", "suffix": "" }, { "first": "Xiaoli", "middle": [], "last": "Fern", "suffix": "" }, { "first": "Prasad", "middle": [], "last": "Tadepalli", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4952--4957", "other_ids": { "DOI": [ "10.18653/v1/D18-1537" ] }, "num": null, "urls": [], "raw_text": "Reza Ghaeini, Xiaoli Fern, and Prasad Tadepalli. 2018. Interpreting recurrent and attention-based neural models: a case study on natural language inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4952-4957, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Do syntactic probes probe syntax? 
experiments with jabberwocky probing", "authors": [ { "first": "Rowan", "middle": [ "Hall" ], "last": "Maudslay", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Cotterell", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "124--131", "other_ids": { "DOI": [ "10.18653/v1/2021.naacl-main.11" ] }, "num": null, "urls": [], "raw_text": "Rowan Hall Maudslay and Ryan Cotterell. 2021. Do syntactic probes probe syntax? experiments with jabberwocky probing. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 124-131, Online. Association for Computational Linguistics.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Design and evolution of modular neural network architectures", "authors": [ { "first": "Bart", "middle": [ "L", "M" ], "last": "Happel", "suffix": "" }, { "first": "Jacob", "middle": [ "M", "J" ], "last": "Murre", "suffix": "" } ], "year": 1994, "venue": "Neural Networks", "volume": "7", "issue": "6-7", "pages": "", "other_ids": { "DOI": [ "10.1016/S0893-6080(05)80155-8" ] }, "num": null, "urls": [], "raw_text": "Bart L.M. Happel and Jacob M.J. Murre. 1994. Design and evolution of modular neural network architectures. 
Neural Networks, 7(6-7).", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Jointly predicting predicates and arguments in neural semantic role labeling", "authors": [ { "first": "Luheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "364--369", "other_ids": { "DOI": [ "10.18653/v1/P18-2058" ] }, "num": null, "urls": [], "raw_text": "Luheng He, Kenton Lee, Omer Levy, and Luke Zettlemoyer. 2018. Jointly predicting predicates and arguments in neural semantic role labeling. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 364-369, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Designing and interpreting probes with control tasks", "authors": [ { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2733--2743", "other_ids": { "DOI": [ "10.18653/v1/D19-1275" ] }, "num": null, "urls": [], "raw_text": "John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2733-2743, Hong Kong, China. 
Association for Computational Linguistics.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "A structural probe for finding syntax in word representations", "authors": [ { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4129--4138", "other_ids": { "DOI": [ "10.18653/v1/N19-1419" ] }, "num": null, "urls": [], "raw_text": "John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129-4138, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "CCGbank: A corpus of CCG derivations and dependency structures extracted from the Penn Treebank", "authors": [ { "first": "Julia", "middle": [], "last": "Hockenmaier", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2007, "venue": "Comput. Linguist", "volume": "33", "issue": "3", "pages": "355--396", "other_ids": { "DOI": [ "10.1162/coli.2007.33.3.355" ] }, "num": null, "urls": [], "raw_text": "Julia Hockenmaier and Mark Steedman. 2007. CCGbank: A corpus of CCG derivations and dependency structures extracted from the Penn Treebank. Comput. Linguist., 33(3):355-396.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Detecting modularity in deep neural networks", "authors": [ { "first": "Shlomi", "middle": [], "last": "Hod", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Casper", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Filan", "suffix": "" }, { "first": "Cody", "middle": [], "last": "Wild", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Critch", "suffix": "" }, { "first": "Stuart", "middle": [ "J" ], "last": "Russell", "suffix": "" } ], "year": 2021, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shlomi Hod, Stephen Casper, Daniel Filan, Cody Wild, Andrew Critch, and Stuart J. Russell. 2021. Detecting modularity in deep neural networks. ArXiv, abs/2110.08058.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "The Influence of Synaptic Weight Distribution on Neuronal Population Dynamics", "authors": [ { "first": "Ramakrishnan", "middle": [], "last": "Iyer", "suffix": "" }, { "first": "Vilas", "middle": [], "last": "Menon", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Buice", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Koch", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Mihalas", "suffix": "" } ], "year": 2013, "venue": "PLoS Computational Biology", "volume": "9", "issue": "10", "pages": "", "other_ids": { "DOI": [ "10.1371/journal.pcbi.1003248" ] }, "num": null, "urls": [], "raw_text": "Ramakrishnan Iyer, Vilas Menon, Michael Buice, Christof Koch, and Stefan Mihalas. 2013. 
The Influence of Synaptic Weight Distribution on Neuronal Population Dynamics. PLoS Computational Biology, 9(10).", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness?", "authors": [ { "first": "Alon", "middle": [], "last": "Jacovi", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4198--4205", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.386" ] }, "num": null, "urls": [], "raw_text": "Alon Jacovi and Yoav Goldberg. 2020. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4198-4205, Online. Association for Computational Linguistics.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Cross-lingual ability of multilingual BERT: An empirical study", "authors": [ { "first": "Karthikeyan", "middle": [], "last": "K", "suffix": "" }, { "first": "Zihan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Mayhew", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karthikeyan K, Zihan Wang, Stephen Mayhew, and Dan Roth. 2020. Cross-lingual ability of multilingual BERT: An empirical study. 
In International Conference on Learning Representations.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "The acquisition of polysynthetic languages", "authors": [ { "first": "Barbara", "middle": [], "last": "Kelly", "suffix": "" }, { "first": "Gillian", "middle": [], "last": "Wigglesworth", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Nordlinger", "suffix": "" }, { "first": "Joseph", "middle": [], "last": "Blythe", "suffix": "" } ], "year": 2014, "venue": "Language and Linguistics Compass", "volume": "8", "issue": "2", "pages": "51--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barbara Kelly, Gillian Wigglesworth, Rachel Nordlinger, and Joseph Blythe. 2014. The acqui- sition of polysynthetic languages. Language and Linguistics Compass, 8(2):51-64.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Constituency parsing with a self-attentive encoder", "authors": [ { "first": "Nikita", "middle": [], "last": "Kitaev", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2676--2686", "other_ids": { "DOI": [ "10.18653/v1/P18-1249" ] }, "num": null, "urls": [], "raw_text": "Nikita Kitaev and Dan Klein. 2018. Constituency pars- ing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676-2686, Melbourne, Australia. 
Association for Computational Linguistics.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "ALBERT: A lite BERT for self-supervised learning of language representations", "authors": [ { "first": "Zhenzhong", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Mingda", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Piyush", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.11942" ] }, "num": null, "urls": [], "raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Higher-order coreference resolution with coarse-to-fine inference", "authors": [ { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "687--692", "other_ids": { "DOI": [ "10.18653/v1/N18-2108" ] }, "num": null, "urls": [], "raw_text": "Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-to-fine inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 687-692, New Orleans, Louisiana. 
Association for Computational Linguistics.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "MLQA: Evaluating cross-lingual extractive question answering", "authors": [ { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Barlas", "middle": [], "last": "Oguz", "suffix": "" }, { "first": "Ruty", "middle": [], "last": "Rinott", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7315--7330", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.653" ] }, "num": null, "urls": [], "raw_text": "Patrick Lewis, Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2020. MLQA: Evalu- ating cross-lingual extractive question answering. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 7315- 7330, Online. Association for Computational Lin- guistics.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Visualizing and understanding neural models in NLP", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xinlei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "681--691", "other_ids": { "DOI": [ "10.18653/v1/N16-1082" ] }, "num": null, "urls": [], "raw_text": "Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016. Visualizing and understanding neural models in NLP. 
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 681-691, San Diego, California. Association for Computational Linguistics.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "A feature clustering-based adaptive modular neural network for nonlinear system modeling", "authors": [ { "first": "Wenjing", "middle": [], "last": "Li", "suffix": "" }, { "first": "Meng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Junfei", "middle": [], "last": "Qiao", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2020, "venue": "ISA Transactions", "volume": "100", "issue": "", "pages": "185--197", "other_ids": { "DOI": [ "10.1016/j.isatra.2019.11.015" ] }, "num": null, "urls": [], "raw_text": "Wenjing Li, Meng Li, Junfei Qiao, and Xin Guo. 2020. A feature clustering-based adaptive modular neural network for nonlinear system modeling. ISA Trans- actions, 100:185-197.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "Linguistic knowledge and transferability of contextual representations", "authors": [ { "first": "Nelson", "middle": [ "F" ], "last": "Liu", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1073--1094", "other_ids": { "DOI": [ "10.18653/v1/N19-1112" ] }, "num": null, "urls": [], "raw_text": "Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019. 
Linguistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1073-1094, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "Probing across time: What does RoBERTa know and when?", "authors": [ { "first": "Zeyu", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yizhong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jungo", "middle": [], "last": "Kasai", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2021, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2021", "volume": "", "issue": "", "pages": "820--842", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zeyu Liu, Yizhong Wang, Jungo Kasai, Hannaneh Hajishirzi, and Noah A. Smith. 2021. Probing across time: What does RoBERTa know and when? In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 820-842, Punta Cana, Dominican Republic. Association for Computational Linguistics.", "links": null }, "BIBREF65": { "ref_id": "b65", "title": "A unified approach to interpreting model predictions", "authors": [ { "first": "Scott", "middle": [ "M" ], "last": "Lundberg", "suffix": "" }, { "first": "Su-In", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems, volume 30. 
Curran Associates, Inc.", "links": null }, "BIBREF66": { "ref_id": "b66", "title": "Architectures of neuronal circuits", "authors": [ { "first": "Liqun", "middle": [], "last": "Luo", "suffix": "" } ], "year": 2021, "venue": "Science", "volume": "373", "issue": "6559", "pages": "", "other_ids": { "DOI": [ "10.1126/science.abg7285" ] }, "num": null, "urls": [], "raw_text": "Liqun Luo. 2021. Architectures of neuronal circuits. Science, 373(6559):eabg7285.", "links": null }, "BIBREF67": { "ref_id": "b67", "title": "XLM-T: Scaling up multilingual machine translation with pretrained cross-lingual transformer encoders", "authors": [ { "first": "Shuming", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Haoyang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Zewen", "middle": [], "last": "Chi", "suffix": "" }, { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Dongdong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Hany", "middle": [ "Hassan" ], "last": "Awadalla", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Muzio", "suffix": "" }, { "first": "Akiko", "middle": [], "last": "Eriguchi", "suffix": "" }, { "first": "Saksham", "middle": [], "last": "Singhal", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2012.15547" ] }, "num": null, "urls": [], "raw_text": "Shuming Ma, Jian Yang, Haoyang Huang, Zewen Chi, Li Dong, Dongdong Zhang, Hany Hassan Awadalla, Alexandre Muzio, Akiko Eriguchi, Saksham Singhal, et al. 2020. XLM-T: Scaling up multilingual machine translation with pretrained cross-lingual transformer encoders. 
arXiv preprint arXiv:2012.15547.", "links": null }, "BIBREF68": { "ref_id": "b68", "title": "BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance", "authors": [ { "first": "R", "middle": [ "Thomas" ], "last": "McCoy", "suffix": "" }, { "first": "Junghyun", "middle": [], "last": "Min", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "217--227", "other_ids": { "DOI": [ "10.18653/v1/2020.blackboxnlp-1.21" ] }, "num": null, "urls": [], "raw_text": "R. Thomas McCoy, Junghyun Min, and Tal Linzen. 2020. BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 217-227, Online. Association for Computational Linguistics.", "links": null }, "BIBREF69": { "ref_id": "b69", "title": "Locating and editing factual knowledge in GPT", "authors": [ { "first": "Kevin", "middle": [], "last": "Meng", "suffix": "" }, { "first": "David", "middle": [], "last": "Bau", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Andonian", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" } ], "year": 2022, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. 2022. 
Locating and editing factual knowledge in GPT.", "links": null }, "BIBREF70": { "ref_id": "b70", "title": "Effects of parameter norm growth during transformer training: Inductive bias from gradient descent", "authors": [ { "first": "William", "middle": [], "last": "Merrill", "suffix": "" }, { "first": "Vivek", "middle": [], "last": "Ramanujan", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1766--1781", "other_ids": {}, "num": null, "urls": [], "raw_text": "William Merrill, Vivek Ramanujan, Yoav Goldberg, Roy Schwartz, and Noah A. Smith. 2021. Effects of parameter norm growth during transformer training: Inductive bias from gradient descent. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1766-1781, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.", "links": null }, "BIBREF71": { "ref_id": "b71", "title": "What the [mask]? Making sense of language-specific BERT models", "authors": [ { "first": "Debora", "middle": [], "last": "Nozza", "suffix": "" }, { "first": "Federico", "middle": [], "last": "Bianchi", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Debora Nozza, Federico Bianchi, and Dirk Hovy. 2020. What the [mask]? 
Making sense of language-specific BERT models.", "links": null }, "BIBREF72": { "ref_id": "b72", "title": "What context features can transformer language models use?", "authors": [ { "first": "Joe", "middle": [], "last": "O'Connor", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "851--864", "other_ids": { "DOI": [ "10.18653/v1/2021.acl-long.70" ] }, "num": null, "urls": [], "raw_text": "Joe O'Connor and Jacob Andreas. 2021. What context features can transformer language models use? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 851-864, Online. Association for Computational Linguistics.", "links": null }, "BIBREF73": { "ref_id": "b73", "title": "Continual and multi-task architecture search", "authors": [ { "first": "Ramakanth", "middle": [], "last": "Pasunuru", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramakanth Pasunuru and Mohit Bansal. 2019. 
Contin- ual and multi-task architecture search.", "links": null }, "BIBREF74": { "ref_id": "b74", "title": "Dissecting contextual word embeddings: Architecture and representation", "authors": [ { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1499--1509", "other_ids": { "DOI": [ "10.18653/v1/D18-1179" ] }, "num": null, "urls": [], "raw_text": "Matthew E. Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018. Dissecting contextual word embeddings: Architecture and representation. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 1499- 1509, Brussels, Belgium. Association for Computa- tional Linguistics.", "links": null }, "BIBREF76": { "ref_id": "b76", "title": "Efficient neural architecture search via parameter sharing", "authors": [ { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Melody", "middle": [ "Y" ], "last": "Guan", "suffix": "" }, { "first": "Barret", "middle": [], "last": "Zoph", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le, and Jeff Dean. 2018. 
Efficient neural architecture search via parameter sharing.", "links": null }, "BIBREF77": { "ref_id": "b77", "title": "A Bayesian framework for information-theoretic probing", "authors": [ { "first": "Tiago", "middle": [], "last": "Pimentel", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Cotterell", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tiago Pimentel and Ryan Cotterell. 2021. A Bayesian framework for information-theoretic probing.", "links": null }, "BIBREF78": { "ref_id": "b78", "title": "Pareto probing: Trading off accuracy for complexity", "authors": [ { "first": "Tiago", "middle": [], "last": "Pimentel", "suffix": "" }, { "first": "Naomi", "middle": [], "last": "Saphra", "suffix": "" }, { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Cotterell", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tiago Pimentel, Naomi Saphra, Adina Williams, and Ryan Cotterell. 2020a. Pareto probing: Trading off accuracy for complexity. 
CoRR, abs/2010.02180.", "links": null }, "BIBREF79": { "ref_id": "b79", "title": "Information-theoretic probing for linguistic structure", "authors": [ { "first": "Tiago", "middle": [], "last": "Pimentel", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Valvoda", "suffix": "" }, { "first": "Rowan", "middle": [], "last": "Hall Maudslay", "suffix": "" }, { "first": "Ran", "middle": [], "last": "Zmigrod", "suffix": "" }, { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Cotterell", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tiago Pimentel, Josef Valvoda, Rowan Hall Maudslay, Ran Zmigrod, Adina Williams, and Ryan Cotterell. 2020b. Information-theoretic probing for linguistic structure. CoRR, abs/2004.03061.", "links": null }, "BIBREF80": { "ref_id": "b80", "title": "How multilingual is multilingual BERT?", "authors": [ { "first": "Telmo", "middle": [], "last": "Pires", "suffix": "" }, { "first": "Eva", "middle": [], "last": "Schlinger", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Garrette", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4996--5001", "other_ids": { "DOI": [ "10.18653/v1/P19-1493" ] }, "num": null, "urls": [], "raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001, Flo- rence, Italy. 
Association for Computational Linguistics.", "links": null }, "BIBREF81": { "ref_id": "b81", "title": "Learning transferable visual models from natural language supervision", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jong", "middle": [ "Wook" ], "last": "Kim", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Hallacy", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Ramesh", "suffix": "" }, { "first": "Gabriel", "middle": [], "last": "Goh", "suffix": "" }, { "first": "Sandhini", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Girish", "middle": [], "last": "Sastry", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2103.00020" ] }, "num": null, "urls": [], "raw_text": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020.", "links": null }, "BIBREF82": { "ref_id": "b82", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "OpenAI blog", "volume": "1", "issue": "8", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. 
OpenAI blog, 1(8):9.", "links": null }, "BIBREF83": { "ref_id": "b83", "title": "\"Why should I trust you?\": Explaining the predictions of any classifier", "authors": [ { "first": "Marco", "middle": [], "last": "Ribeiro", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Guestrin", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations", "volume": "", "issue": "", "pages": "97--101", "other_ids": { "DOI": [ "10.18653/v1/N16-3020" ] }, "num": null, "urls": [], "raw_text": "Marco Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \"Why should I trust you?\": Explaining the predictions of any classifier. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 97-101, San Diego, California. Association for Computational Linguistics.", "links": null }, "BIBREF84": { "ref_id": "b84", "title": "A primer in BERTology: What we know about how BERT works", "authors": [ { "first": "Anna", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Kovaleva", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rumshisky", "suffix": "" } ], "year": 2020, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "842--866", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in BERTology: What we know about how BERT works. 
Transactions of the Association for Computational Linguistics, 8:842-866.", "links": null }, "BIBREF85": { "ref_id": "b85", "title": "Advances in verification of ReLU neural networks", "authors": [ { "first": "Ansgar", "middle": [], "last": "R\u00f6ssig", "suffix": "" }, { "first": "Milena", "middle": [], "last": "Petkovic", "suffix": "" } ], "year": 2021, "venue": "Journal of Global Optimization", "volume": "", "issue": "1", "pages": "", "other_ids": { "DOI": [ "10.1007/s10898-020-00949-1" ] }, "num": null, "urls": [], "raw_text": "Ansgar R\u00f6ssig and Milena Petkovic. 2021. Advances in verification of ReLU neural networks. Journal of Global Optimization, 81(1).", "links": null }, "BIBREF86": { "ref_id": "b86", "title": "A survey of cross-lingual word embedding models", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2019, "venue": "Journal of Artificial Intelligence Research", "volume": "65", "issue": "", "pages": "569--631", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Ruder, Ivan Vuli\u0107, and Anders S\u00f8gaard. 2019. A survey of cross-lingual word embedding models. 
Journal of Artificial Intelligence Research, 65:569-631.", "links": null }, "BIBREF87": { "ref_id": "b87", "title": "Linspector: Multilingual probing tasks for word representations", "authors": [ { "first": "G\u00f6zde", "middle": [ "G\u00fcl" ], "last": "\u015eahin", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Vania", "suffix": "" }, { "first": "Ilia", "middle": [], "last": "Kuznetsov", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2020, "venue": "Computational Linguistics", "volume": "46", "issue": "2", "pages": "335--385", "other_ids": {}, "num": null, "urls": [], "raw_text": "G\u00f6zde G\u00fcl \u015eahin, Clara Vania, Ilia Kuznetsov, and Iryna Gurevych. 2020. Linspector: Multilingual probing tasks for word representations. Computational Linguistics, 46(2):335-385.", "links": null }, "BIBREF88": { "ref_id": "b88", "title": "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter", "authors": [ { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.01108" ] }, "num": null, "urls": [], "raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.", "links": null }, "BIBREF89": { "ref_id": "b89", "title": "Training dynamics of neural language models", "authors": [ { "first": "Naomi", "middle": [], "last": "Saphra", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Naomi Saphra. 2021. Training dynamics of neural language models. Ph.D. 
thesis, The University of Edinburgh.", "links": null }, "BIBREF90": { "ref_id": "b90", "title": "Is attention interpretable?", "authors": [ { "first": "Sofia", "middle": [], "last": "Serrano", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2931--2951", "other_ids": { "DOI": [ "10.18653/v1/P19-1282" ] }, "num": null, "urls": [], "raw_text": "Sofia Serrano and Noah A. Smith. 2019. Is attention interpretable? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2931-2951, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF91": { "ref_id": "b91", "title": "Optimal nonlinear prediction of random fields on networks", "authors": [ { "first": "Cosma", "middle": [ "Rohilla" ], "last": "Shalizi", "suffix": "" } ], "year": 2003, "venue": "DMCS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cosma Rohilla Shalizi. 2003. Optimal nonlinear prediction of random fields on networks. In DMCS.", "links": null }, "BIBREF92": { "ref_id": "b92", "title": "Automatic filters for the detection of coherent structure in spatiotemporal systems. Physical review.
E, Statistical, nonlinear, and soft matter physics", "authors": [ { "first": "Cosma", "middle": [ "Rohilla" ], "last": "Shalizi", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Haslinger", "suffix": "" }, { "first": "Jean-Baptiste", "middle": [], "last": "Rouquier", "suffix": "" }, { "first": "Kristina", "middle": [ "Lisa" ], "last": "Klinkner", "suffix": "" }, { "first": "Cristopher", "middle": [], "last": "Moore", "suffix": "" } ], "year": 2006, "venue": "", "volume": "73", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cosma Rohilla Shalizi, Robert Haslinger, Jean-Baptiste Rouquier, Kristina Lisa Klinkner, and Cristopher Moore. 2006. Automatic filters for the detection of coherent structure in spatiotemporal systems. Physical review. E, Statistical, nonlinear, and soft matter physics, 73(3 Pt 2):036104.", "links": null }, "BIBREF93": { "ref_id": "b93", "title": "Quantifying self-organization with optimal predictors", "authors": [ { "first": "Cosma", "middle": [ "Rohilla" ], "last": "Shalizi", "suffix": "" }, { "first": "Kristina", "middle": [ "Lisa" ], "last": "Shalizi", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Haslinger", "suffix": "" } ], "year": 2004, "venue": "Physical review letters", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cosma Rohilla Shalizi, Kristina Lisa Shalizi, and Robert Haslinger. 2004. Quantifying self-organization with optimal predictors.
Physical review letters, 93(11):118701.", "links": null }, "BIBREF94": { "ref_id": "b94", "title": "Does string-based neural MT learn source syntax?", "authors": [ { "first": "Xing", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Inkit", "middle": [], "last": "Padhi", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1526--1534", "other_ids": { "DOI": [ "10.18653/v1/D16-1159" ] }, "num": null, "urls": [], "raw_text": "Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural MT learn source syntax? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1526-1534, Austin, Texas. Association for Computational Linguistics.", "links": null }, "BIBREF95": { "ref_id": "b95", "title": "Learning important features through propagating activation differences", "authors": [ { "first": "Avanti", "middle": [], "last": "Shrikumar", "suffix": "" }, { "first": "Peyton", "middle": [], "last": "Greenside", "suffix": "" }, { "first": "Anshul", "middle": [], "last": "Kundaje", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2019.
Learning important features through propagating activation differences.", "links": null }, "BIBREF97": { "ref_id": "b97", "title": "Masked language modeling and the distributional hypothesis: Order word matters pre-training for little", "authors": [ { "first": "Koustuv", "middle": [], "last": "Sinha", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Dieuwke", "middle": [], "last": "Hupkes", "suffix": "" }, { "first": "Joelle", "middle": [], "last": "Pineau", "suffix": "" }, { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2888--2913", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koustuv Sinha, Robin Jia, Dieuwke Hupkes, Joelle Pineau, Adina Williams, and Douwe Kiela. 2021. Masked language modeling and the distributional hypothesis: Order word matters pre-training for little. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2888-2913, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.", "links": null }, "BIBREF98": { "ref_id": "b98", "title": "The evolved transformer", "authors": [ { "first": "David", "middle": [], "last": "So", "suffix": "" }, { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" }, { "first": "Chen", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 36th International Conference on Machine Learning", "volume": "97", "issue": "", "pages": "5877--5886", "other_ids": {}, "num": null, "urls": [], "raw_text": "David So, Quoc Le, and Chen Liang. 2019. The evolved transformer. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 5877-5886.
PMLR.", "links": null }, "BIBREF99": { "ref_id": "b99", "title": "Primer: Searching for efficient transformers for language modeling", "authors": [ { "first": "David", "middle": [ "R" ], "last": "So", "suffix": "" }, { "first": "Wojciech", "middle": [], "last": "Manke", "suffix": "" }, { "first": "Hanxiao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David R. So, Wojciech Manke, Hanxiao Liu, Zihang Dai, Noam Shazeer, and Quoc V. Le. 2021. Primer: Searching for efficient transformers for language modeling. CoRR, abs/2109.08668.", "links": null }, "BIBREF100": { "ref_id": "b100", "title": "Future ml systems will be qualitatively different", "authors": [ { "first": "Jacob", "middle": [], "last": "Steinhardt", "suffix": "" } ], "year": 2022, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Steinhardt. 2022. Future ml systems will be qualitatively different.
https://bounded-regret.ghost.io/future-ml-systems-will-be-qualitatively-different/.", "links": null }, "BIBREF101": { "ref_id": "b101", "title": "Linguistically-informed self-attention for semantic role labeling", "authors": [ { "first": "Emma", "middle": [], "last": "Strubell", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Verga", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Andor", "suffix": "" }, { "first": "David", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "5027--5038", "other_ids": { "DOI": [ "10.18653/v1/D18-1548" ] }, "num": null, "urls": [], "raw_text": "Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-informed self-attention for semantic role labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5027-5038, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF102": { "ref_id": "b102", "title": "Axiomatic attribution for deep networks", "authors": [ { "first": "Mukund", "middle": [], "last": "Sundararajan", "suffix": "" }, { "first": "Ankur", "middle": [], "last": "Taly", "suffix": "" }, { "first": "Qiqi", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning", "volume": "70", "issue": "", "pages": "3319--3328", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 3319-3328.
PMLR.", "links": null }, "BIBREF103": { "ref_id": "b103", "title": "What do you learn from context?", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Berlin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Mccoy", "suffix": "" }, { "first": "Najoung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "Probing for sentence structure in contextualized word representations. arXiv e-prints", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1905.06316" ] }, "num": null, "urls": [], "raw_text": "Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? Probing for sentence structure in contextualized word representations.
arXiv e-prints, page arXiv:1905.06316.", "links": null }, "BIBREF104": { "ref_id": "b104", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.", "links": null }, "BIBREF105": { "ref_id": "b105", "title": "Information-theoretic probing with minimum description length", "authors": [ { "first": "Elena", "middle": [], "last": "Voita", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "183--196", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.14" ] }, "num": null, "urls": [], "raw_text": "Elena Voita and Ivan Titov. 2020. Information-theoretic probing with minimum description length. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 183-196, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF107": { "ref_id": "b107", "title": "Probing pretrained language models for lexical semantics", "authors": [ { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Edoardo", "middle": [ "Maria" ], "last": "Ponti", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Litschko", "suffix": "" }, { "first": "Goran", "middle": [], "last": "Glava\u0161", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "7222--7240", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.586" ] }, "num": null, "urls": [], "raw_text": "Ivan Vuli\u0107, Edoardo Maria Ponti, Robert Litschko, Goran Glava\u0161, and Anna Korhonen. 2020. Probing pretrained language models for lexical semantics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7222-7240, Online. Association for Computational Linguistics.", "links": null }, "BIBREF108": { "ref_id": "b108", "title": "A non-linear structural probe", "authors": [ { "first": "Jennifer", "middle": [ "C" ], "last": "White", "suffix": "" }, { "first": "Tiago", "middle": [], "last": "Pimentel", "suffix": "" }, { "first": "Naomi", "middle": [], "last": "Saphra", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Cotterell", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "132--138", "other_ids": { "DOI": [ "10.18653/v1/2021.naacl-main.12" ] }, "num": null, "urls": [], "raw_text": "Jennifer C. White, Tiago Pimentel, Naomi Saphra, and Ryan Cotterell. 2021.
A non-linear structural probe. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 132-138, Online. Association for Computational Linguistics.", "links": null }, "BIBREF109": { "ref_id": "b109", "title": "Attention is not not explanation", "authors": [ { "first": "Sarah", "middle": [], "last": "Wiegreffe", "suffix": "" }, { "first": "Yuval", "middle": [], "last": "Pinter", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "11--20", "other_ids": { "DOI": [ "10.18653/v1/D19-1002" ] }, "num": null, "urls": [], "raw_text": "Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 11-20, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF110": { "ref_id": "b110", "title": "Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT", "authors": [ { "first": "Shijie", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "833--844", "other_ids": { "DOI": [ "10.18653/v1/D19-1077" ] }, "num": null, "urls": [], "raw_text": "Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833-844, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF111": { "ref_id": "b111", "title": "Are all languages created equal in multilingual BERT?", "authors": [ { "first": "Shijie", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 5th Workshop on Representation Learning for NLP", "volume": "", "issue": "", "pages": "120--130", "other_ids": { "DOI": [ "10.18653/v1/2020.repl4nlp-1.16" ] }, "num": null, "urls": [], "raw_text": "Shijie Wu and Mark Dredze. 2020. Are all languages created equal in multilingual BERT? In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 120-130, Online. Association for Computational Linguistics.", "links": null }, "BIBREF112": { "ref_id": "b112", "title": "Probing for semantic classes: Diagnosing the meaning content of word embeddings", "authors": [ { "first": "Yadollah", "middle": [], "last": "Yaghoobzadeh", "suffix": "" }, { "first": "Katharina", "middle": [], "last": "Kann", "suffix": "" }, { "first": "T", "middle": [ "J" ], "last": "Hazen", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P19-1574" ] }, "num": null, "urls": [], "raw_text": "Yadollah Yaghoobzadeh, Katharina Kann, T. J. Hazen, Eneko Agirre, and Hinrich Sch\u00fctze. 2019. Probing for semantic classes: Diagnosing the meaning content of word embeddings.
In Proceedings of the 57th", "links": null }, "BIBREF113": { "ref_id": "b113", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "5740--5753", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 5740-5753, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF114": { "ref_id": "b114", "title": "Deep neural networks with multi-branch architectures are intrinsically less non-convex", "authors": [ { "first": "Hongyang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Junru", "middle": [], "last": "Shao", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2020, "venue": "AISTATS 2019 - 22nd International Conference on Artificial Intelligence and Statistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hongyang Zhang, Junru Shao, and Ruslan Salakhutdinov. 2020. Deep neural networks with multi-branch architectures are intrinsically less non-convex. In AISTATS 2019 - 22nd International Conference on Artificial Intelligence and Statistics.", "links": null }, "BIBREF115": { "ref_id": "b115", "title": "Language modeling teaches you more than translation does: Lessons learned through auxiliary syntactic task analysis", "authors": [ { "first": "Kelly", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "359--361", "other_ids": { "DOI": [ "10.18653/v1/W18-5448" ] }, "num": null, "urls": [], "raw_text": "Kelly Zhang and Samuel Bowman. 2018.
Language modeling teaches you more than translation does: Lessons learned through auxiliary syntactic task analysis. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 359-361, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF116": { "ref_id": "b116", "title": "An information theoretic view on selecting linguistic probes", "authors": [ { "first": "Zining", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Rudzicz", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "9251--9262", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.744" ] }, "num": null, "urls": [], "raw_text": "Zining Zhu and Frank Rudzicz. 2020. An information theoretic view on selecting linguistic probes. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9251-9262, Online. Association for Computational Linguistics.", "links": null }, "BIBREF117": { "ref_id": "b117", "title": "A robustly optimized BERT pre-training approach with post-training", "authors": [ { "first": "Liu", "middle": [], "last": "Zhuang", "suffix": "" }, { "first": "Lin", "middle": [], "last": "Wayne", "suffix": "" }, { "first": "Shi", "middle": [], "last": "Ya", "suffix": "" }, { "first": "Zhao", "middle": [], "last": "Jun", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 20th Chinese National Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1218--1227", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu Zhuang, Lin Wayne, Shi Ya, and Zhao Jun. 2021. A robustly optimized BERT pre-training approach with post-training. In Proceedings of the 20th Chinese National Conference on Computational Linguistics, pages 1218-1227, Huhhot, China.
Chinese Information Processing Society of China.", "links": null } }, "ref_entries": {} } }