| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:39:21.777610Z" |
| }, |
| "title": "A Closer Look at the Performance of Neural Language Models on Reflexive Anaphor Licensing", |
| "authors": [ |
| { |
| "first": "Jennifer", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Massachusetts Institute of Technology", |
| "location": { |
| "settlement": "Cambridge", |
| "region": "MA" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Sherry", |
| "middle": [ |
| "Yong" |
| ], |
| "last": "Chen", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "sychen@mit.edu" |
| }, |
| { |
| "first": "Roger", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "rplevy@mit.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "An emerging line of work uses psycholinguistic methods to evaluate the syntactic generalizations acquired by neural language models (NLMs). While this approach has shown NLMs to be capable of learning a wide range of linguistic knowledge, confounds in the design of previous experiments may have obscured the potential of NLMs to learn certain grammatical phenomena. Here we re-evaluate the performance of a range of NLMs on reflexive anaphor licensing. Under our paradigm, the models consistently show stronger evidence of learning than reported in previous work. Our approach demonstrates the value of well-controlled psycholinguistic methods in gaining a fine-grained understanding of NLM learning potential.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "An emerging line of work uses psycholinguistic methods to evaluate the syntactic generalizations acquired by neural language models (NLMs). While this approach has shown NLMs to be capable of learning a wide range of linguistic knowledge, confounds in the design of previous experiments may have obscured the potential of NLMs to learn certain grammatical phenomena. Here we re-evaluate the performance of a range of NLMs on reflexive anaphor licensing. Under our paradigm, the models consistently show stronger evidence of learning than reported in previous work. Our approach demonstrates the value of well-controlled psycholinguistic methods in gaining a fine-grained understanding of NLM learning potential.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "To gain a deeper understanding of the grammatical generalizations acquired by neural language models (NLMs), an emerging line of work seeks to evaluate NLMs as \"psycholinguistic subjects\" -that is, assessing the extent to which their probability distributions conform to human judgments on linguistic data. This psycholinguistic assessment is typically done by evaluating the model on minimal pairs of sentences, which differ only at a target word or phrase that determines the acceptability of the sentence. If an NLM has learned the linguistic phenomenon in question, then it should assign higher probability to sentences that humans judge to be more acceptable. This approach has shown NLMs to be capable of learning some grammatical phenomena (e.g. subject-verb agreement and filler-gap dependencies) while failing on others (Linzen et al., 2016; Lau et al., 2017; Gulordava et al., 2018; Marvin and Linzen, 2018; Tran et al., 2018).", |
| "cite_spans": [ |
| { |
| "start": 829, |
| "end": 850, |
| "text": "(Linzen et al., 2016;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 851, |
| "end": 868, |
| "text": "Lau et al., 2017;", |
| "ref_id": null |
| }, |
| { |
| "start": 869, |
| "end": 892, |
| "text": "Gulordava et al., 2018;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 893, |
| "end": 917, |
| "text": "Marvin and Linzen, 2018;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 918, |
| "end": 936, |
| "text": "Tran et al., 2018;", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In evaluating these mixed learning outcomes, we raise a broader question that remains largely unaddressed in the field: What is the standard to which we should be holding artificial language models? An engineering goal within the machine learning community is to build NLMs that approximate human behavior. In this case, an ideal NLM should achieve high performance even on low-frequency constructions, and the learning signal should be detectable even with coarse experimental paradigms. However, if a scientific goal is to highlight the grammatical phenomena that can be learned from sequential data, then experiments should be designed with the aim to give NLMs a fair shot at displaying successful learning.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We demonstrate the value of robust psycholinguistic methods in serving the latter goal by re-evaluating the performance of neural language models on English reflexive anaphor licensing (RAL). For example, in John disappointed himself, the reflexive himself can refer to John, but in John knew that Paul disappointed himself, the reflexive can only refer to Paul but not John. A priori, we expect RAL to be difficult to learn for several reasons. From a theoretical perspective, multiple syntactic constraints are simultaneously operative in RAL, which may increase the complexity of the representation that needs to be learned (see Section 2.1). In addition, the appearance of a reflexive is never obligatory based on the preceding context -that is, while a reflexive requires an antecedent NP licensor, an antecedent NP never requires a reflexive downstream (see Section 2.2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Previous studies have shown NLMs to fail at RAL in various syntactic configurations (Futrell et al., 2018; Marvin and Linzen, 2018). We take a closer look at these previously reported failures, conducting new experiments that control for confounding variables and creating new materials that are compatible with small-vocabulary NLMs. Our experiments detect stronger evidence of learning than reported in previous work, demonstrating the value of robust psycholinguistic methods in studying the potential of NLMs to learn complex syntactic phenomena.", |
| "cite_spans": [ |
| { |
| "start": 84, |
| "end": 108, |
| "text": "Marvin and Linzen, 2018)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "English reflexive anaphors are licensed only when two different structural constraints are both satisfied, which we refer to as LOCALITY and C-COMMAND. These two constraints are independently motivated on theoretical grounds and underlie many syntactic configurations (e.g. Reinhart, 1983; Rizzi, 2013).", |
| "cite_spans": [ |
| { |
| "start": 274, |
| "end": 289, |
| "text": "Reinhart, 1983;", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 290, |
| "end": 302, |
| "text": "Rizzi, 2013)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reflexive anaphor licensing (RAL)", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "LOCALITY stipulates that the matching antecedent must be in the same clause as the reflexive. C-COMMAND requires the matching antecedent to be in a c-commanding relation with the reflexive (Reinhart, 1981; Chomsky, 1993). For present purposes, it is sufficient to define c-command as follows: if a node has any sibling nodes in a syntax tree, then it c-commands its siblings and all of their descendants; if a node has no siblings, then it c-commands everything its parent c-commands.", |
| "cite_spans": [ |
| { |
| "start": 189, |
| "end": 205, |
| "text": "(Reinhart, 1981;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 206, |
| "end": 220, |
| "text": "Chomsky, 1993)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reflexive anaphor licensing (RAL)", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "To illustrate these two constraints, Figure 1 shows the syntax tree for the sentence The fathers said the women near the boys saw themselves. This sentence contains three noun phrases (NPs) that could potentially act as an antecedent for themselves, but only one of them satisfies both structural requirements of RAL: (1) the higher subject NP 1 the fathers c-commands themselves but is not within the local clause, violating LOCALITY; (2) the lower subject NP 2 the women c-commands themselves locally, licensing the reflexive; (3) the linearly closest NP 3 the boys is within the local clause, but violates C-COMMAND since it is inside a prepositional phrase inside NP 2. Thus, NP 2 the women is the only possible licensor for the reflexive themselves.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 37, |
| "end": 45, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Reflexive anaphor licensing (RAL)", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "We frame our experiments in terms of the two syntactic constraints involved in RAL, i.e. LOCALITY and C-COMMAND. This is typically done when testing the linguistic knowledge of humans, in order to probe the nature of linguistic generalizations that are being drawn across different types of constructions. In following this convention, we do not intend to claim that the NLMs are learning these abstract structural properties per se.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reflexive anaphor licensing (RAL)", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The presence of a reflexive anaphor is never obligatory, in the sense that nothing in the preceding context deterministically predicts an upcoming reflexive. This contrasts with other syntactic dependencies, where the two elements of the dependency mutually require each other. In subject-verb agreement, for example, a subject NP sets the expectation for a downstream verb that agrees in number, and the verb requires a matching subject. This is also the case for less frequent constructions such as filler-gap dependencies, where the appearance of a filler wh-word sets the expectation for a gap, and the presence of a gap requires a preceding filler. This property does not hold for reflexive anaphors, as an NP never requires the appearance of a reflexive downstream. Thus, given an upstream reflexive licensor, there is high variance in the downstream contexts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distribution of reflexive anaphors", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Furthermore, although we are interested in reflexive anaphors that occur in an argument position, these pronouns can also occur as an intensifier adjoining right next to an NP, as in The president himself signed my book. Since the intensifier usage does not obey the same structural constraints, it has a different distribution from the anaphor usage. Both of the factors discussed above pose a challenge for NLMs to learn a robust representation for RAL.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distribution of reflexive anaphors", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Previous work evaluating the ability of neural language models to learn RAL primarily builds upon the paradigms introduced in Marvin and Linzen (2018) and Futrell et al. (2018). Both studies conclude that NLMs fail to learn the appropriate licensing conditions for reflexives.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Paradigms in previous work", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "In particular, Marvin and Linzen (2018) test whether NLMs learn RAL in relative clauses and sentential complements. Consider the following sample items (1) and (2) from their study:", |
| "cite_spans": [ |
| { |
| "start": 15, |
| "end": 39, |
| "text": "Marvin and Linzen (2018)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Paradigms in previous work", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "(1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Paradigms in previous work", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The bankers who the pilot embarrassed hurt *himself / themselves.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Paradigms in previous work", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "(2) The bankers thought the pilot embarrassed himself / *themselves.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Paradigms in previous work", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "In (1), the reflexive himself cannot be licensed by the pilot because the pilot is inside a relative clause, thus violating both LOCALITY and C-COMMAND. In (2), the reflexive themselves is embedded in a sentential complement, so the long-distance subject the bankers cannot license the reflexive for violating LOCALITY.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Paradigms in previous work", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "As is typical in psycholinguistic evaluation of NLMs, previous RAL studies calculate accuracy as the proportion of trials where the model assigns higher probability to the correct reflexive given the prefix, compared to another reflexive that would make the sentence ungrammatical. Since Marvin and Linzen (2018) and Futrell et al. (2018) test number and gender agreement, respectively, Marvin and Linzen compare the probability of himself/herself vs. themselves, while Futrell et al. compare the probability of himself vs. herself.", |
| "cite_spans": [ |
| { |
| "start": 288, |
| "end": 312, |
| "text": "Marvin and Linzen (2018)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Paradigms in previous work", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "While the failures reported by these studies have been taken as evidence of the limits of NLM learning, they might be attributed to confounding factors in the design of the experiments. As discussed above, previous studies measure accuracy by comparing the probability assigned to different target reflexives given the same context. However, in many standard training corpora, the reflexive pronouns themselves, himself, and herself differ dramatically in frequency, leading to an asymmetry in unigram probabilities (Table 2). This presents a confound, as all models are likely to implicitly factor in unigram probabilities when estimating conditional probabilities in context. Thus, even if a model has learned correct generalizations about the relevant features of the context, these generalizations could be obscured by large differences in unigram frequency.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 517, |
| "end": 526, |
| "text": "(Table 2)", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Paradigms in previous work", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "In addition, both Marvin and Linzen (2018) and Futrell et al. (2018) use profession nouns that are almost all stereotypically male (e.g. banker, senator). However, many of these nouns occur with low frequency in standard training datasets, so existing materials cannot be used to test RAL learning in models with relatively small vocabularies.", |
| "cite_spans": [ |
| { |
| "start": 18, |
| "end": 42, |
| "text": "Marvin and Linzen (2018)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Paradigms in previous work", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "To re-evaluate NLM learning potential of RAL, we conduct new experiments that mitigate the issues raised by unigram probability asymmetries and stereotypically gendered nouns. We describe our methods in Section 3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Paradigms in previous work", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Psycholinguistic evaluation of language models typically measures accuracy as the proportion of trials in which the model correctly assigns higher probability to the grammatical sentence in a minimal pair. This probability differential is affected not only by the expectations set by the context, but also by the unigram probabilities of the target words (in the case of RAL, themselves, himself, and herself). To avoid this issue, we keep the target reflexive fixed and vary the preceding lexical items in each condition.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental design", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Each sentence in our test suites has two NPs, a verb, and a target reflexive, as well as material that modulates the syntactic state (e.g. the onset of a relative clause). One NP is in a position that can license a reflexive, and the other NP is not. Our experiments have the following three conditions:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditions", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 Baseline: Both NPs match the number feature of the target reflexive. The sentence is grammatical.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditions", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 Distractor: The NP in the licensing position matches the number of the target, but the other NP mismatches. The sentence is still grammatical, but contains distracting material.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditions", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 Ungrammatical: The NP in the licensing position mismatches the number of the target. The sentence is ungrammatical.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditions", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We choose to test number instead of gender feature agreement (cf. Futrell et al., 2018) because we believe models are more likely to learn a representation of number than gender, as number is more frequently marked than gender in English. There is also evidence of NLMs learning other number-based dependencies such as subject-verb agreement (Linzen et al., 2016).", |
| "cite_spans": [ |
| { |
| "start": 342, |
| "end": 363, |
| "text": "(Linzen et al., 2016)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conditions", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Our accuracy calculation involves a three-way comparison. For a given item, the model makes a correct prediction if the probability of the target reflexive in the Ungrammatical condition is lower than the probability of the target in both the Distractor and Baseline conditions. Accuracy is the proportion of items in the experiment for which the model makes the correct prediction. If the probability of the target is the same across conditions, then the prediction is considered correct with probability 1/3. Under this measure, chance performance is 33.33%, in contrast to the 50% from existing paradigms that compare grammatical vs. ungrammatical constructions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation metric", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Nouns Previous studies on RAL use nouns denoting professions often associated with stereotypical gender, such as lumberjack and hairdresser (Futrell et al., 2018; Marvin and Linzen, 2018); RNNs have been shown to learn NP stereotypical gender (Rudinger et al., 2018). However, these nouns are not inherently gendered, and manipulating the gender of the reflexive does not change the grammaticality of the sentence. Instead, we use high-frequency nouns with lexicalized gender, such as man and woman. This allows us to extend our paradigm to models with smaller vocabularies (see Section 4), for which many profession nouns are out-of-vocabulary (e.g. hairdresser). This also ensures that our experiments can be replicated with future corpora, as the stereotypical gender of occupations represented in word embeddings can vary across time and cultures (Garg et al., 2018). We selected a total of 10 nouns (5 female and 5 male), with the female and male nouns balanced for frequency of occurrence in the Wikipedia corpus (see Table 2).", |
| "cite_spans": [ |
| { |
| "start": 140, |
| "end": 164, |
| "text": "Marvin and Linzen, 2018)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 558, |
| "end": 581, |
| "text": "(Rudinger et al., 2018)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 834, |
| "end": 853, |
| "text": "(Garg et al., 2018)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1008, |
| "end": 1015, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Lexical items", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Verbs We first manually constructed a list of commonly used reflexive verbs. Using this list, we calculated the relative frequency of their occurrences within a reflexive construction in the Wikipedia corpus, and selected the most frequent ones. We also selected the most frequent verbs by their raw counts in the corpus. A total of 15 verbs were selected using this method.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical items", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Counterbalancing To ensure that vocabulary differences in preceding context do not confound the observed effects on the target reflexive, we counterbalance the position of nouns such that each noun occurs in a licensing and a nonlicensing position equally often. Consequently, each stimulus item has several variants, where the nouns are equally distributed across positions. Each noun also appears with each of the verbs equally often across items.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical items", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "In Experiment 1, we first perform a loose replication of Marvin and Linzen (2018) by adapting their materials into our experimental paradigm. The experiment includes relative clause and sentential complement constructions, which we test in Experiments 1a and 1b, respectively. To construct the materials, we crossed 10 nouns with 7 matrix verbs from the original Marvin and Linzen study, resulting in a total of 70 items per pronoun.", |
| "cite_spans": [ |
| { |
| "start": 57, |
| "end": 81, |
| "text": "Marvin and Linzen (2018)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Logic of experiments", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "As discussed in Section 2.3, one issue with previous studies is the choice to use lexical items with stereotypical gender. In subsequent experiments, we create new test suites with materials using lexicalized gender. In Experiments 2a and 2b, we use our new materials to test relative clause and sentential complement constructions, respectively, for comparison with Experiments 1a and 1b.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Logic of experiments", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Since the relative clause construction tests both LOCALITY and C-COMMAND and the sentential complement construction only tests LOCALITY, we test prepositional phrases in Experiment 3 to isolate the effect of C-COMMAND. We cross 4 nouns with 15 verbs, resulting in 60 items for each pronoun in each of Experiments 2 and 3. Table 1 shows sample items for Experiments 1-3 along with corresponding items from the original Marvin and Linzen (2018) study.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 324, |
| "end": 331, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Logic of experiments", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "We evaluate RAL in six neural language models, as well as a baseline n-gram model. Together, the models cover a range of vocabulary sizes, architectures, and inductive biases (Table 2). Our goal here is not to draw general conclusions about certain architectures or training regimes, but to present results across a diverse set of models, including those that were previously untestable due to experimental design.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 175, |
| "end": 184, |
| "text": "(Table 2)", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Language models", |
| "sec_num": "4" |
| }, |
| { |
| "text": "GRNN and JRNN Recurrent neural networks (RNNs; Elman, 1990; Mikolov et al., 2010) perform well in language modeling, with long short-term memory (LSTM) networks (Hochreiter and Schmidhuber, 1997; Sundermeyer et al., 2012) being the most popular variant. We test two LSTMs that differ significantly in vocabulary size and have been shown to learn syntactic dependencies to varying degrees of success. The Gulordava et al. (2018) LSTM (\"GRNN\") was trained on a subset of English Wikipedia with 90M training tokens. The Jozefowicz et al. (2016) LSTM (\"JRNN\") was trained on the One Billion Word Benchmark (Chelba et al., 2013). JRNN additionally has convolutional neural network character input embeddings.", |
| "cite_spans": [ |
| { |
| "start": 47, |
| "end": 59, |
| "text": "Elman, 1990;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 60, |
| "end": 81, |
| "text": "Mikolov et al., 2010)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 160, |
| "end": 194, |
| "text": "(Hochreiter and Schmidhuber, 1997;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 195, |
| "end": 220, |
| "text": "Sundermeyer et al., 2012)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 602, |
| "end": 623, |
| "text": "(Chelba et al., 2013)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Language models", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Transformer-XL and BERT Next, we test two models based on the Transformer architecture (Vaswani et al., 2017). Transformer-XL (\"TransXL\"; Dai et al., 2019) reuses the hidden states obtained in previous segments, which facilitates modeling of long-term dependencies. BERT (Devlin et al., 2018) is bi-directional, in that it is trained to predict the identity of masked words based on the preceding and following context. Both models were trained on document-level corpora instead of shuffled sentences: WikiText-103 (Merity et al., 2017) for TransXL, and a combination of BooksCorpus (Zhu et al., 2015) and Wikipedia for BERT. Recent work has shown BERT to perform well on reflexive constructions (Goldberg, 2019).", |
| "cite_spans": [ |
| { |
| "start": 87, |
| "end": 109, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 272, |
| "end": 293, |
| "text": "(Devlin et al., 2018)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 518, |
| "end": 539, |
| "text": "(Merity et al., 2017)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 586, |
| "end": 604, |
| "text": "(Zhu et al., 2015)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 699, |
| "end": 715, |
| "text": "(Goldberg, 2019)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Language models", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The last two neural models in our test suite have identical vocabularies but differing inductive biases: a recurrent neural network grammar (\"RNNG\"; Dyer et al., 2016) and a vanilla LSTM (\"TinyLSTM\"). Both models were trained on the 1-million-word English Penn Treebank \u00a72-21 (Marcus et al., 1993), but TinyLSTM is only trained on the terminal word sequences, while RNNG is trained on the full annotations, which contain complete constituency parses. This minimal difference allows us to observe the effect of structural supervision, which has been shown to be beneficial in acquiring certain grammatical dependencies (Kuncoro et al., 2017; Wilcox et al., 2019). Crucially, the vocabulary of these models is too small to accommodate the lexical items used in previous RAL studies.", |
| "cite_spans": [ |
| { |
| "start": 149, |
| "end": 167, |
| "text": "Dyer et al., 2016)", |
| "ref_id": null |
| }, |
| { |
| "start": 276, |
| "end": 297, |
| "text": "(Marcus et al., 1993)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 619, |
| "end": 641, |
| "text": "(Kuncoro et al., 2017;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 642, |
| "end": 662, |
| "text": "Wilcox et al., 2019)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RNNG and TinyLSTM", |
| "sec_num": null |
| }, |
| { |
| "text": "n-gram As a baseline, we test a 5-gram model trained on the same Wikipedia data as GRNN. We use Kneser-Ney smoothing to perform backoff.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RNNG and TinyLSTM", |
| "sec_num": null |
| }, |
| { |
| "text": "In practice, we calculate accuracy (see Section 3.2) by comparing differentials in log probability space at the target pronoun. To obtain the log probability of word w i assigned by the LSTMs and Transformer models, we compute", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Computing word probabilities", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\\log_2 p(w_i \\mid h_1^{i-1}),", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Computing word probabilities", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "where h_1^{i-1} is the model's hidden state before observing w_i. This probability is calculated from the model's softmax activation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Computing word probabilities", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "To obtain the log probability of w_i in the RNNG, we follow the method used in Hale et al. (2018). We use word-synchronous beam search (Stern et al., 2017) to find the most likely incremental parses, and sum their forward probabilities to approximate P(w_1, . . . , w_i) and P(w_1, . . . , w_{i-1}). We use 100 for the action beam size and 10 for the word beam size.", |
| "cite_spans": [ |
| { |
| "start": 79, |
| "end": 97, |
| "text": "Hale et al. (2018)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 136, |
| "end": 156, |
| "text": "(Stern et al., 2017)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Computing word probabilities", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "In contrast to the other models in our test suite, BERT is bi-directional. To obtain the log probability of w_i, we first feed BERT a sentence with w_i masked out and obtain the word predictions for the masked position. This gives us a probability distribution over words. In practice, since the target reflexive in our items always occurs directly before the final token '.', we do not expect the right context to modulate predictions about the target differently across conditions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Computing word probabilities", |
| "sec_num": "4.1" |
| }, |
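The masking procedure can be sketched without the actual BERT weights. In this sketch, `dummy_masked_lm` is a hypothetical stand-in for a real masked language model: it simply returns a fixed distribution over candidate fillers for the single [MASK] position, where a real model would run a forward pass.

```python
def dummy_masked_lm(tokens):
    """Stand-in for a masked LM: returns a probability distribution
    over candidate fillers for the single [MASK] position."""
    assert tokens.count("[MASK]") == 1
    return {"himself": 0.6, "herself": 0.3, "themselves": 0.1}

def score_reflexive(sentence_tokens, target_index, reflexive):
    """Mask out the target position, then read off p(reflexive | context)."""
    masked = list(sentence_tokens)
    masked[target_index] = "[MASK]"
    dist = dummy_masked_lm(masked)
    return dist[reflexive]

# Toy item: the reflexive sits directly before the final '.' token,
# so the right context is identical across conditions.
tokens = ["the", "pilot", "embarrassed", "himself", "."]
p = score_reflexive(tokens, 3, "himself")
```

Because the target is always immediately followed by the final period, the bidirectional right context is held constant, which is what licenses comparing these masked-position probabilities across conditions.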
| { |
| "text": "The original materials of Marvin and Linzen (2018) use profession nouns that are stereotypically male.", |
| "cite_spans": [ |
| { |
| "start": 26, |
| "end": 50, |
| "text": "Marvin and Linzen (2018)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment 1: Marvin and Linzen (2018)", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Since these nouns are out-ofvocabulary for RNNG and TinyLSTM, we run this experiment only on the large-vocabulary models (BERT, TransXL, JRNN, GRNN, 5-gram).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment 1: Marvin and Linzen (2018)", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We first investigate RAL learning in the relative clause construction (see Table 1 ). Here, the NP inside the relative clause cannot license the reflexive, as such a relationship would violate both LOCALITY and C-COMMAND. Our design differs from Marvin and Linzen (2018) in that we hold the reflexive anaphor constant while varying the context, with the position of the nouns counterbalanced.", |
| "cite_spans": [ |
| { |
| "start": 246, |
| "end": 270, |
| "text": "Marvin and Linzen (2018)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 75, |
| "end": 82, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Exp. 1a: M&L relative clause", |
| "sec_num": null |
| }, |
| { |
| "text": "Accuracy scores from the original study and Table 3 : Accuracy scores for each experiment, with 95% confidence intervals shown below where applicable. Accuracy is computed at the item-level for each pronoun, then averaged across all pronouns. Chance accuracy is 33.33%, except for entries marked with \u2020 or *, where chance is 50%. The BERT results marked with \u2020 come from Goldberg (2019) , while the GRNN and 5-gram results marked with * come directly from Marvin and Linzen (2018) . These results are also not directly comparable to each other due to the bi-directionality of BERT; see Goldberg (2019) and Wolf (2019) for details.", |
| "cite_spans": [ |
| { |
| "start": 371, |
| "end": 386, |
| "text": "Goldberg (2019)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 456, |
| "end": 480, |
| "text": "Marvin and Linzen (2018)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 586, |
| "end": 601, |
| "text": "Goldberg (2019)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 606, |
| "end": 617, |
| "text": "Wolf (2019)", |
| "ref_id": "BIBREF33" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 44, |
| "end": 51, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Exp. 1a: M&L relative clause", |
| "sec_num": null |
| }, |
| { |
| "text": "our Experiment 1 are reported in Table 3 (top two rows). Accuracy is computed at the itemlevel for each pronoun, then averaged across all pronouns. Under our evaluation method, GRNN shows considerable improvement over what was reported in Marvin and Linzen (2018) , while the 5-gram model remains at chance. While our metrics are not strictly comparable, the original study reports near-chance accuracy (55% \u21e0 50%), while we report accuracy well above chance (70% 33.33%). BERT achieves slightly lower accuracy under our paradigm than was reported in Goldberg (2019) (76% vs. 80%); note, however, that our chance baseline is lower.", |
| "cite_spans": [ |
| { |
| "start": 239, |
| "end": 263, |
| "text": "Marvin and Linzen (2018)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 33, |
| "end": 40, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Exp. 1a: M&L relative clause", |
| "sec_num": null |
| }, |
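The accuracy metric described above can be sketched as follows: an item counts as correct when the Baseline condition assigns the reflexive a higher log probability than both the Distractor and Ungrammatical conditions (hence 33.33% chance under three conditions), item-level accuracy is computed per pronoun, and the pronoun-level scores are averaged with equal weight. The log probabilities below are illustrative, not model output.

```python
def item_correct(logp_baseline, logp_distractor, logp_ungrammatical):
    """An item is correct if the reflexive is most probable in Baseline.
    With three conditions, guessing yields 1/3 accuracy."""
    return logp_baseline > logp_distractor and logp_baseline > logp_ungrammatical

def accuracy(items_by_pronoun):
    """Item-level accuracy per pronoun, then an unweighted average
    across pronouns (so each pronoun counts equally)."""
    per_pronoun = []
    for items in items_by_pronoun.values():
        per_pronoun.append(sum(item_correct(*it) for it in items) / len(items))
    return sum(per_pronoun) / len(per_pronoun)

# Hypothetical (baseline, distractor, ungrammatical) log probabilities.
items = {
    "himself":    [(-5.0, -6.0, -7.0), (-5.5, -5.0, -6.0)],
    "herself":    [(-5.2, -6.1, -6.9), (-5.1, -5.9, -6.5)],
    "themselves": [(-4.8, -5.0, -5.1), (-6.0, -5.5, -5.2)],
}
acc = accuracy(items)
```

Holding the reflexive constant across conditions is what makes this comparison valid: the three conditions differ only in context, so unigram frequency differences between pronouns cancel out within each item.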
| { |
| "text": "Exp. 1b: M&L sentential complement Next, we investigate RAL learning in the sentential complement construction. Here, the long-distance subject cannot license the reflexive embedded in a sentential complement, because such a relationship would violate LOCALITY (while satisfying C-COMMAND). As in Exp. 1a, our approach differs from Marvin and Linzen (2018) in that we hold the reflexive anaphor constant while varying the context, with the position of the nouns counterbalanced. All large-vocabulary neural models perform near ceiling in our paradigm, despite our metric having a lower baseline. GRNN achieves 100% accuracy, showing a marked improvement over previously reported results (Table 3) . Overall, the models exhibit the correct trend for the sentential complement construction (Exp. 1b), but the pattern is less clear for the relative clause construction (Exp. 1a). One possible explanation is that in a relative clause, the licensing NP is linearly farther away from the reflexive than the distracting NP; a global preference for linear proximity may have obscured learning of structural adjacency.", |
| "cite_spans": [ |
| { |
| "start": 332, |
| "end": 356, |
| "text": "Marvin and Linzen (2018)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 687, |
| "end": 696, |
| "text": "(Table 3)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Exp. 1a: M&L relative clause", |
| "sec_num": null |
| }, |
| { |
| "text": "The materials used in Marvin and Linzen (2018) (and our Experiment 1) involve items with stereotypically gendered nouns. This raises two potential issues: (1) gender biases may overshadow number mismatch effects, and (2) the materials can only be used to evaluate models with reasonably large vocabularies. As in Experiment 1, the design of Experiment 2 differs from Marvin and Linzen (2018) in that we hold the reflexive anaphor constant while varying the context. In addition, we create new materials using nouns with lexicalized gender rather than stereotypical gender. This allows us to evaluate all seven models in our test suite.", |
| "cite_spans": [ |
| { |
| "start": 22, |
| "end": 46, |
| "text": "Marvin and Linzen (2018)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 367, |
| "end": 391, |
| "text": "Marvin and Linzen (2018)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment 2", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Exp. 2a: Relative clause As in Exp. 1a, we first test RAL learning in the relative clause construc- Red bars: Ungrammatical-Baseline differential at target reflexive. If the models learn the correct generalization for RAL, then the red bars should be both positive and higher than the blue bars. Top (Exp. 1b:) Distractor-Baseline differential is significantly higher at herself than himself or themselves. The stimuli contain materials that are out-of-vocabulary for TinyLSTM and RNNG. Bottom (Exp. 2b): For the large-vocabulary models, the Distractor-Baseline differential is comparable across pronouns. For the small-vocabulary models, the differential is significantly higher at herself. tion using our new set of materials. Accuracy scores are high for most of the large-vocabulary neural models (BERT, TransXL, JRNN) and above chance for GRNN, but at or below chance for the other models (Table 3) .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 894, |
| "end": 903, |
| "text": "(Table 3)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment 2", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Exp. 2b: Sentential complement In Experiment 3, we test the sentential complement construction using our materials. As shown in Table 1, we place the reflexive inside a complement clause, such that either both c-commanding NPs match the number feature of the reflexive (Baseline), or there is one mismatching NP either in the non-local subject position (Distractor) or the local subject position (Ungrammatical).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment 2", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "All large-vocabulary neural models perform near ceiling (Table 3 ). The small-vocabulary models RNNG and TinyLSTM achieve lower accuracy, but RNNG outperforms TinyLSTM.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 56, |
| "end": 64, |
| "text": "(Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment 2", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Since previous studies have focused on the relative clause and sentential complement constructions, C-COMMAND has not been tested separately from LOCALITY. In Experiment 3, we hold LOCALITY constant while manipulating C-COMMAND by placing a distractor NP inside a non-c-commanding PP modifier in the local sub-ject NP. No clausal boundary is introduced. As in Experiment 2, our approach differs from Marvin and Linzen (2018) in that we hold the reflexive anaphor constant while varying the context, and we use nouns with lexicalized gender.", |
| "cite_spans": [ |
| { |
| "start": 400, |
| "end": 424, |
| "text": "Marvin and Linzen (2018)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment 3", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Accuracy scores are reported in the bottom section of Table 3 . Performance is well above chance for all neural models except TinyLSTM. RNNG shows a clear advantage over TinyLSTM (62% vs. 43%).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 54, |
| "end": 61, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment 3", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Thus far, we have reported accuracy scores averaged across the three reflexive pronouns ( Table 3) . The three pronouns are weighted equally in the reported numbers, as accuracy is computed at the level of each item.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 90, |
| "end": 98, |
| "text": "Table 3)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Asymmetry between himself & herself", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "Next, we investigate differences in performance across reflexive anaphors. Figure 2 shows the results of this cross-pronoun comparison for Experiments 1b and 2b, which both use the sentential complement construction (LOCALITY only). Blue bars show the Distractor-Baseline log probability differential at the target reflexive. Red bars show the Ungrammatical-Baseline log probability differential at the target reflexive. If the models learn the correct generalization for RAL, then the red bars should be both positive (i.e. above baseline) and higher than the blue bars.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 75, |
| "end": 83, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Asymmetry between himself & herself", |
| "sec_num": "5.4" |
| }, |
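The two differentials plotted in Figure 2 can be sketched directly. One plausible reading, consistent with "positive (i.e. above baseline)" meaning the reflexive is less expected in the perturbed condition, is the drop in log probability relative to Baseline; the numbers below are illustrative, not model output.

```python
def differential(logp_condition, logp_baseline):
    """Condition-Baseline differential at the target reflexive, read here
    as the surprisal increase relative to Baseline: positive values mean
    the reflexive is less expected in the perturbed condition."""
    return logp_baseline - logp_condition

# Hypothetical log probabilities of the reflexive in each condition.
logp = {"baseline": -5.0, "distractor": -5.4, "ungrammatical": -8.0}

blue = differential(logp["distractor"], logp["baseline"])     # Distractor-Baseline
red = differential(logp["ungrammatical"], logp["baseline"])   # Ungrammatical-Baseline

# The pattern expected of a model with the correct RAL generalization:
# the ungrammatical penalty is positive and exceeds the distractor penalty.
assert red > 0 and red > blue
```

A large blue bar, by contrast, would indicate that the non-local mismatching NP is wrongly modulating expectations about the reflexive.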
| { |
| "text": "In Experiment 1b, which uses profession nouns that are primarily associated with men, 6 the Distractor-Baseline differential (blue bars) is significantly higher at herself than at himself or themselves. In contrast, in Experiment 2b, which uses nouns with lexicalized gender, there is only a significant difference between the Distractor-Baseline differentials at himself and herself for the small-vocabulary models TinyLSTM and RNNG.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Asymmetry between himself & herself", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "We hypothesize that this can be attributed to the choice of vocabulary items. In the Distractor condition of Experiment 1, the distracting noun is plural and has stereotypically male gender (e.g. senators). The features of this noun partially match with himself (in stereotypical gender but not number), but match in neither feature with herself, leading to a higher Distractor-Baseline differential for herself. This is not an issue in Experiments 2 and 3, where all nouns match in gender feature with the target reflexive across conditions. However, training data with a low number of occurrences of herself can still lead to a high Distractor-Baseline differential, as is the case in Experiment 3 for TinyLSTM and RNNG.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Asymmetry between himself & herself", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "This pattern may also result from a more general asymmetry between gender stereotypes: encountering herself after a stereotypically male noun is more surprising than encountering himself after a stereotypically female noun. Interestingly, asymmetry also manifests in human production biases, where gendered pronoun production and interpretation are not mutually calibrated (Boyce et al., 2019) .", |
| "cite_spans": [ |
| { |
| "start": 373, |
| "end": 393, |
| "text": "(Boyce et al., 2019)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Asymmetry between himself & herself", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "In this paper, we used new experiments to reevaluate the performance of neural language models on reflexive anaphor licensing. Our methods address issues in previous studies, such as unigram probability asymmetries between target pronouns and the choice to use nouns with stereotypical gender, which may have led to an underestimation of learning signal. The results suggest that NLMs are learning more about RAL than they have previously been given credit for, and demonstrates the 6 11 out of these 12 nouns are stereotypically male according to United States Census data (Bureau of Labor Statistics, 2017). value of robust psycholinguistic methods in highlighting the potential of NLMs to learn complex syntactic phenomena.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The value of our approach extends beyond RAL. If we seek to understand the linguistic generalizations that NLMs can potentially acquire, then we must design our experiments to give NLMs a fair shot at displaying successful learning, regardless of the phenomenon under study.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Of course, the generalizations acquired by NLMs may not be well characterized in linguistic terms such as LOCALITY and C-COMMAND, but rather properties of the data that are irrelevant to structural considerations. Further experiments will be required to deepen our understanding of the generalizations underlying the successes and failures of these models on this and other evaluation tasks. More generally, future work in this domain should carefully address hypotheses about language learning, keeping in mind complementary questions that arise from engineering and scientific agendas.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Code and data are available at https://github. com/jennhu/reflexive-anaphor-licensing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "A unigram frequency is one of the easiest things for a neural model to learn, e.g. as the bias term in the output layer.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "To counterbalance the position of the nouns, there are 6 variants of each item (2 per condition) for himself and herself, and 12 variants of each item (4 per condition) for themselves.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
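The counterbalancing arithmetic in the footnote above can be checked mechanically: swapping the two noun positions gives 2 orders in each of 3 conditions, i.e. 6 variants per item for himself and herself. The sketch below reproduces only that 2-per-condition case; it is an illustration of the counting, not the authors' actual stimulus-generation script, and the condition names and nouns are taken from the paper's design.

```python
from itertools import permutations

CONDITIONS = ["Baseline", "Distractor", "Ungrammatical"]

def variants(noun_a, noun_b):
    """Counterbalance the position of the two nouns within each
    condition: 2 orders x 3 conditions = 6 variants per item."""
    out = []
    for cond in CONDITIONS:
        for first, second in permutations([noun_a, noun_b]):
            out.append((cond, first, second))
    return out

v = variants("queen", "actress")
assert len(v) == 6  # 2 per condition, as stated in the footnote
```

For themselves, the footnote reports 4 variants per condition (12 total), since the item design admits additional counterbalanced fillings of the noun slots.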
| { |
| "text": "We use the small, uncased version of BERT (BERTBASE) with no fine-tuning after the initial pre-training tasks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to thank Tal Linzen, Peng Qian, and the anonymous reviewers for their insightful comments. J.H. is supported by an NSF Graduate Research Fellowship.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Remember 'him', forget 'her': Gender bias in the comprehension of pronomial referents", |
| "authors": [ |
| { |
| "first": "Veronica", |
| "middle": [], |
| "last": "Boyce", |
| "suffix": "" |
| }, |
| { |
| "first": "Till", |
| "middle": [], |
| "last": "Titus Von Der Malsburg", |
| "suffix": "" |
| }, |
| { |
| "first": "Roger", |
| "middle": [], |
| "last": "Poppels", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "32nd Annual CUNY Conference on Human Sentence Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Veronica Boyce, Titus von der Malsburg, Till Poppels, and Roger Levy. 2019. Remember 'him', forget 'her': Gender bias in the comprehension of prono- mial referents. In 32nd Annual CUNY Conference on Human Sentence Processing.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Labor force statistics from the current population survey", |
| "authors": [], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bureau of Labor Statistics. 2017. Labor force statistics from the current population survey.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "One billion word benchmark for measuring progress in statistical language modeling", |
| "authors": [ |
| { |
| "first": "Ciprian", |
| "middle": [], |
| "last": "Chelba", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Schuster", |
| "suffix": "" |
| }, |
| { |
| "first": "Qi", |
| "middle": [], |
| "last": "Ge", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robin- son. 2013. One billion word benchmark for measur- ing progress in statistical language modeling. Tech- nical report, Google.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Lectures on government and binding: The Pisa lectures. 9", |
| "authors": [ |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Chomsky", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Noam Chomsky. 1993. Lectures on government and binding: The Pisa lectures. 9. Walter de Gruyter.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Transformer-XL: Attentive language models beyond a fixed-length context", |
| "authors": [ |
| { |
| "first": "Zihang", |
| "middle": [], |
| "last": "Dai", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhilin", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yiming", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jaime", |
| "middle": [ |
| "G" |
| ], |
| "last": "Carbonell", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc", |
| "middle": [ |
| "V" |
| ], |
| "last": "Le", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruslan", |
| "middle": [], |
| "last": "Salakhutdinov", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "2978--2988", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics, pages 2978-2988.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "BERT: pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "4171--4186", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language un- derstanding. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Finding structure in time", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [ |
| "L" |
| ], |
| "last": "Elman", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Cognitive Science", |
| "volume": "14", |
| "issue": "2", |
| "pages": "179--211", |
| "other_ids": { |
| "DOI": [ |
| "10.1207/s15516709cog1402_1" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey L. Elman. 1990. Finding structure in time. Cognitive Science, 14(2):179-211.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "RNNs as psycholinguistic subjects: Syntactic state and grammatical dependency", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Futrell", |
| "suffix": "" |
| }, |
| { |
| "first": "Ethan", |
| "middle": [], |
| "last": "Wilcox", |
| "suffix": "" |
| }, |
| { |
| "first": "Takashi", |
| "middle": [], |
| "last": "Morita", |
| "suffix": "" |
| }, |
| { |
| "first": "Roger", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Futrell, Ethan Wilcox, Takashi Morita, and Roger Levy. 2018. RNNs as psycholinguistic sub- jects: Syntactic state and grammatical dependency. CoRR, abs/1809.01329.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of", |
| "authors": [ |
| { |
| "first": "Nikhil", |
| "middle": [], |
| "last": "Garg", |
| "suffix": "" |
| }, |
| { |
| "first": "Londa", |
| "middle": [], |
| "last": "Schiebinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Zou", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Sciences", |
| "volume": "115", |
| "issue": "16", |
| "pages": "3635--3644", |
| "other_ids": { |
| "DOI": [ |
| "10.1073/pnas.1720347115" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Pro- ceedings of the National Academy of Sciences, 115(16):E3635-E3644.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Assessing BERT's syntactic abilities", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Goldberg. 2019. Assessing BERT's syntactic abilities. CoRR, abs/1901.05287.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Colorless green recurrent networks dream hierarchically", |
| "authors": [ |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Gulordava", |
| "suffix": "" |
| }, |
| { |
| "first": "Piotr", |
| "middle": [], |
| "last": "Bojanowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Edouard", |
| "middle": [], |
| "last": "Grave", |
| "suffix": "" |
| }, |
| { |
| "first": "Tal", |
| "middle": [], |
| "last": "Linzen", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "1195--1205", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/N18-1108" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1195-1205, New Orleans, Louisiana. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Finding syntax in human encephalography with beam search", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Hale", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Adhiguna", |
| "middle": [], |
| "last": "Kuncoro", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathan", |
| "middle": [ |
| "R" |
| ], |
| "last": "Brennan", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers)", |
| "volume": "", |
| "issue": "", |
| "pages": "2727--2736", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Hale, Chris Dyer, Adhiguna Kuncoro, and Jonathan R. Brennan. 2018. Finding syntax in hu- man encephalography with beam search. In Pro- ceedings of the 56th Annual Meeting of the Associ- ation for Computational Linguistics (Long Papers), pages 2727-2736, Melbourne, Australia. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural Computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": { |
| "DOI": [ |
| "10.1162/neco.1997.9.8.1735" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Exploring the limits of language modeling", |
| "authors": [ |
| { |
| "first": "Rafal", |
| "middle": [], |
| "last": "Jozefowicz", |
| "suffix": "" |
| }, |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Schuster", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Yonghui", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1602.02410" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "What do recurrent neural network grammars learn about syntax?", |
| "authors": [ |
| { |
| "first": "Adhiguna", |
| "middle": [], |
| "last": "Kuncoro", |
| "suffix": "" |
| }, |
| { |
| "first": "Miguel", |
| "middle": [], |
| "last": "Ballesteros", |
| "suffix": "" |
| }, |
| { |
| "first": "Lingpeng", |
| "middle": [], |
| "last": "Kong", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Graham", |
| "middle": [], |
| "last": "Neubig", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 15th Conference of the European Chapter", |
| "volume": "1", |
| "issue": "", |
| "pages": "1249--1258", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, Graham Neubig, and Noah A. Smith. 2017. What do recurrent neural network grammars learn about syntax? In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1249-1258, Valencia, Spain. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "A probabilistic view of linguistic knowledge", |
| "authors": [ |
| { |
| "first": "Acceptability", |
| "middle": [], |
| "last": "Grammaticality", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Probability", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "Cognitive Science", |
| "volume": "5", |
| "issue": "", |
| "pages": "1202--1247", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Grammaticality, acceptability, and probabil- ity: A probabilistic view of linguistic knowledge. Cognitive Science, 5:1202-1247.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Assessing the ability of LSTMs to learn syntax-sensitive dependencies", |
| "authors": [ |
| { |
| "first": "Tal", |
| "middle": [], |
| "last": "Linzen", |
| "suffix": "" |
| }, |
| { |
| "first": "Emmanuel", |
| "middle": [], |
| "last": "Dupoux", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "In Transactions of the Association for Computational Linguistics", |
| "volume": "4", |
| "issue": "", |
| "pages": "521--535", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. In Transactions of the Association for Computational Linguistics, vol- ume 4, pages 521-535.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Building a large annotated corpus of English: The Penn Treebank", |
| "authors": [ |
| { |
| "first": "Mitchell", |
| "middle": [ |
| "P" |
| ], |
| "last": "Marcus", |
| "suffix": "" |
| }, |
| { |
| "first": "Mary", |
| "middle": [ |
| "Ann" |
| ], |
| "last": "Marcinkiewicz", |
| "suffix": "" |
| }, |
| { |
| "first": "Beatrice", |
| "middle": [], |
| "last": "Santorini", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "", |
| "pages": "313--330", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19:313-330.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Targeted syntactic evaluation of language models", |
| "authors": [ |
| { |
| "first": "Rebecca", |
| "middle": [], |
| "last": "Marvin", |
| "suffix": "" |
| }, |
| { |
| "first": "Tal", |
| "middle": [], |
| "last": "Linzen", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1192--1202", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202, Brussels, Belgium. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Pointer sentinel mixture models", |
| "authors": [ |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Merity", |
| "suffix": "" |
| }, |
| { |
| "first": "Caiming", |
| "middle": [], |
| "last": "Xiong", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Bradbury", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture models. In Proceedings of ICLR.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Recurrent neural network based language model", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Karafi\u00e1t", |
| "suffix": "" |
| }, |
| { |
| "first": "Lukas", |
| "middle": [], |
| "last": "Burget", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "\u010cernock\u00fd", |
| "suffix": "" |
| }, |
| { |
| "first": "Sanjeev", |
| "middle": [], |
| "last": "Khudanpur", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 11th Annual Conference of the International Speech Communication Association", |
| "volume": "", |
| "issue": "", |
| "pages": "1045--1048", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Martin Karafi\u00e1t, Lukas Burget, Jan \u010cernock\u00fd, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proceedings of the 11th Annual Conference of the International Speech Communication Association, pages 1045-1048, Makuhari, Chiba, Japan.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Definite NP anaphora and ccommand domains", |
| "authors": [ |
| { |
| "first": "Tanya", |
| "middle": [], |
| "last": "Reinhart", |
| "suffix": "" |
| } |
| ], |
| "year": 1981, |
| "venue": "Linguistic Inquiry", |
| "volume": "12", |
| "issue": "4", |
| "pages": "605--635", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tanya Reinhart. 1981. Definite NP anaphora and c-command domains. Linguistic Inquiry, 12(4):605-635.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Anaphora and semantic interpretation", |
| "authors": [ |
| { |
| "first": "Tanya", |
| "middle": [], |
| "last": "Reinhart", |
| "suffix": "" |
| } |
| ], |
| "year": 1983, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tanya Reinhart. 1983. Anaphora and semantic interpretation. Routledge.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Gender bias in coreference resolution", |
| "authors": [ |
| { |
| "first": "Rachel", |
| "middle": [], |
| "last": "Rudinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Naradowsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Brian", |
| "middle": [], |
| "last": "Leonard", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Van Durme", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "8--14", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 8-14, New Orleans, Louisiana. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Effective inference for generative neural parsing", |
| "authors": [ |
| { |
| "first": "Mitchell", |
| "middle": [], |
| "last": "Stern", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Fried", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1695--1700", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mitchell Stern, Daniel Fried, and Dan Klein. 2017. Effective inference for generative neural parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1695-1700. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "LSTM neural networks for language modeling", |
| "authors": [ |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Sundermeyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralf", |
| "middle": [], |
| "last": "Schl\u00fcter", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Thirteenth Annual Conference of the International Speech Communication Association", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Martin Sundermeyer, Ralf Schl\u00fcter, and Hermann Ney. 2012. LSTM neural networks for language modeling. In Thirteenth Annual Conference of the International Speech Communication Association.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "The importance of being recurrent for modeling hierarchical structure", |
| "authors": [ |
| { |
| "first": "Ke", |
| "middle": [], |
| "last": "Tran", |
| "suffix": "" |
| }, |
| { |
| "first": "Arianna", |
| "middle": [], |
| "last": "Bisazza", |
| "suffix": "" |
| }, |
| { |
| "first": "Christof", |
| "middle": [], |
| "last": "Monz", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "4731--4736", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ke Tran, Arianna Bisazza, and Christof Monz. 2018. The importance of being recurrent for modeling hierarchical structure. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4731-4736, Brussels, Belgium. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Attention is all you need", |
| "authors": [ |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Vaswani", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Niki", |
| "middle": [], |
| "last": "Parmar", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| }, |
| { |
| "first": "Llion", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "Aidan", |
| "middle": [ |
| "N" |
| ], |
| "last": "Gomez", |
| "suffix": "" |
| }, |
| { |
| "first": "\u0141ukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "Illia", |
| "middle": [], |
| "last": "Polosukhin", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "30", |
| "issue": "", |
| "pages": "5998--6008", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "What do RNN language models learn about filler-gap dependencies?", |
| "authors": [ |
| { |
| "first": "Ethan", |
| "middle": [], |
| "last": "Wilcox", |
| "suffix": "" |
| }, |
| { |
| "first": "Roger", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "Takashi", |
| "middle": [], |
| "last": "Morita", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Futrell", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "211--221", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ethan Wilcox, Roger Levy, Takashi Morita, and Richard Futrell. 2018. What do RNN language models learn about filler-gap dependencies? In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 211-221, Brussels, Belgium. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Structural supervision improves learning of non-local grammatical dependencies", |
| "authors": [ |
| { |
| "first": "Ethan", |
| "middle": [], |
| "last": "Wilcox", |
| "suffix": "" |
| }, |
| { |
| "first": "Peng", |
| "middle": [], |
| "last": "Qian", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Futrell", |
| "suffix": "" |
| }, |
| { |
| "first": "Miguel", |
| "middle": [], |
| "last": "Ballesteros", |
| "suffix": "" |
| }, |
| { |
| "first": "Roger", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "3302--3312", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ethan Wilcox, Peng Qian, Richard Futrell, Miguel Ballesteros, and Roger Levy. 2019. Structural supervision improves learning of non-local grammatical dependencies. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3302-3312, Minneapolis, Minnesota. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Some additional experiments extending the tech report \"Assessing BERT's syntactic abilities\" by Yoav Goldberg", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Wolf", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Wolf. 2019. Some additional experiments extending the tech report \"Assessing BERT's syntactic abilities\" by Yoav Goldberg. Technical report, HuggingFace, Inc.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", |
| "authors": [ |
| { |
| "first": "Yukun", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Kiros", |
| "suffix": "" |
| }, |
| { |
| "first": "Rich", |
| "middle": [], |
| "last": "Zemel", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruslan", |
| "middle": [], |
| "last": "Salakhutdinov", |
| "suffix": "" |
| }, |
| { |
| "first": "Raquel", |
| "middle": [], |
| "last": "Urtasun", |
| "suffix": "" |
| }, |
| { |
| "first": "Antonio", |
| "middle": [], |
| "last": "Torralba", |
| "suffix": "" |
| }, |
| { |
| "first": "Sanja", |
| "middle": [], |
| "last": "Fidler", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the IEEE International Conference on Computer Vision (ICCV)", |
| "volume": "", |
| "issue": "", |
| "pages": "19--27", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 19-27, Washington, DC. IEEE Computer Society.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Syntax tree for example sentence. While each NP agrees in number with the reflexive themselves, only NP2 occurs in a position that can license it.", |
| "uris": null |
| }, |
| "FIGREF1": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Negative log probability differential at target reflexive in sentential complement construction. Error bars are bootstrapped 95% confidence intervals. Blue bars: Distractor-Baseline differential at target reflexive.", |
| "uris": null |
| }, |
| "TABREF0": { |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td/><td>Condition</td><td>Example sentence</td></tr><tr><td>LOCALITY & C-COMMAND</td><td/><td/></tr><tr><td>Relative clause (M&L)</td><td>Grammatical</td><td>The bankers who the pilot embarrassed hurt themselves</td></tr><tr><td/><td>Ungrammatical</td><td>*The bankers who the pilot embarrassed hurt herself</td></tr><tr><td>Relative clause (Exp. 1a)</td><td>Baseline</td><td>The {banker, pilot} that the {pilot, banker} embarrassed hurt herself</td></tr><tr><td/><td>Distractor</td><td>The {banker, pilot} that the {pilots, bankers} embarrassed hurt herself</td></tr><tr><td/><td>Ungrammatical</td><td>*The {bankers, pilots} that the {pilot, banker} embarrassed hurt herself</td></tr><tr><td>Relative clause (Exp. 2a)</td><td>Baseline</td><td>The {mother, girl} that the {girl, mother} liked saw herself</td></tr><tr><td/><td>Distractor</td><td>The {mother, girl} that the {girls, mothers} liked saw herself</td></tr><tr><td/><td>Ungrammatical</td><td>*The {mothers, girls} that the {girl, mother} liked saw herself</td></tr><tr><td>LOCALITY ONLY</td><td/><td/></tr><tr><td>Sentential complement (M&L)</td><td>Grammatical</td><td>The bankers thought the pilot hurt herself</td></tr><tr><td/><td>Ungrammatical</td><td>*The bankers thought the pilot hurt themselves</td></tr><tr><td>Sentential complement (Exp. 1b)</td><td>Baseline</td><td>The {banker, pilot} said that the {pilot, banker} hurt herself</td></tr><tr><td/><td>Distractor</td><td>The {bankers, pilots} said that the {pilot, banker} hurt herself</td></tr><tr><td/><td>Ungrammatical</td><td>*The {banker, pilot} said that the {pilots, bankers} hurt herself</td></tr><tr><td>Sentential complement (Exp. 2b)</td><td>Baseline</td><td>The {mother, girl} said that the {girl, mother} saw herself</td></tr><tr><td/><td>Distractor</td><td>The {mothers, girls} said that the {girl, mother} saw herself</td></tr><tr><td/><td>Ungrammatical</td><td>*The {mother, girl} said that the {girls, mothers} saw herself</td></tr><tr><td>C-COMMAND ONLY</td><td/><td/></tr><tr><td>Prepositional phrase (Exp. 3)</td><td>Baseline</td><td>The {mother, girl} near the {girl, mother} saw herself</td></tr><tr><td/><td>Distractor</td><td>The {mother, girl} near the {girls, mothers} saw herself</td></tr><tr><td/><td>Ungrammatical</td><td>*The {mothers, girls} near the {girl, mother} saw herself</td></tr></table>", |
| "text": "" |
| }, |
| "TABREF1": { |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "text": "Sample stimuli for herself in our experiments and the original Marvin and Linzen (\"M&L\") study." |
| }, |
| "TABREF3": { |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "text": "Language models evaluated in our experiments, along with raw frequency counts of reflexives in the training data. Pre-training data was not publicly released for BERT." |
| } |
| } |
| } |
| } |