{
"paper_id": "W19-0112",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:31:29.133686Z"
},
"title": "Jabberwocky Parsing: Dependency Parsing with Lexical Noise",
"authors": [
{
"first": "Jungo",
"middle": [],
"last": "Kasai",
"suffix": "",
"affiliation": {},
"email": "jkasai@cs.washington.edu"
},
{
"first": "Robert",
"middle": [],
"last": "Frank",
"suffix": "",
"affiliation": {},
"email": "robert.frank@yale.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Parsing models have long benefited from the use of lexical information, and indeed current state-of-the-art neural network models for dependency parsing achieve substantial improvements by benefiting from distributed representations of lexical information. At the same time, humans can easily parse sentences with unknown or even novel words, as in Lewis Carroll's poem Jabberwocky. In this paper, we carry out jabberwocky parsing experiments, exploring how robust a state-of-the-art neural network parser is to the absence of lexical information. We find that current parsing models, at least under usual training regimens, are in fact overly dependent on lexical information, and perform badly in the jabberwocky context. We also demonstrate that the technique of word dropout drastically improves parsing robustness in this setting, and also leads to significant improvements in out-of-domain parsing.",
"pdf_parse": {
"paper_id": "W19-0112",
"_pdf_hash": "",
"abstract": [
{
"text": "Parsing models have long benefited from the use of lexical information, and indeed current state-of-the-art neural network models for dependency parsing achieve substantial improvements by benefiting from distributed representations of lexical information. At the same time, humans can easily parse sentences with unknown or even novel words, as in Lewis Carroll's poem Jabberwocky. In this paper, we carry out jabberwocky parsing experiments, exploring how robust a state-of-the-art neural network parser is to the absence of lexical information. We find that current parsing models, at least under usual training regimens, are in fact overly dependent on lexical information, and perform badly in the jabberwocky context. We also demonstrate that the technique of word dropout drastically improves parsing robustness in this setting, and also leads to significant improvements in out-of-domain parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Since the earliest days of statistical parsing, lexical information has played a major role (Collins, 1996, 1999; Charniak, 2000). While some of the performance gains that had been derived from lexicalization can be gotten in other ways (Klein and Manning, 2003), thereby avoiding increases in model complexity and problems of data sparsity (Fong and Berwick, 2008), recent neural network models of parsing across a range of formalisms continue to use lexical information to guide parsing decisions (constituent parsing: Dyer et al. (2016); dependency parsing: Chen and Manning (2014); Kiperwasser and Goldberg (2016); Dozat and Manning (2017); CCG parsing: Ambati et al. (2016); TAG parsing: Kasai et al. (2018); Shi and Lee (2018)). These models exploit lexical information in a way that avoids some of the data sparsity issues, by making use of distributed representations (i.e., word embeddings) that support generalization across different words.",
"cite_spans": [
{
"start": 92,
"end": 106,
"text": "(Collins, 1996",
"ref_id": "BIBREF6"
},
{
"start": 107,
"end": 123,
"text": "(Collins, , 1999",
"ref_id": "BIBREF7"
},
{
"start": 124,
"end": 139,
"text": "Charniak, 2000)",
"ref_id": "BIBREF3"
},
{
"start": 248,
"end": 273,
"text": "(Klein and Manning, 2003)",
"ref_id": "BIBREF21"
},
{
"start": 353,
"end": 377,
"text": "(Fong and Berwick, 2008)",
"ref_id": "BIBREF10"
},
{
"start": 533,
"end": 551,
"text": "Dyer et al. (2016)",
"ref_id": "BIBREF9"
},
{
"start": 575,
"end": 598,
"text": "Chen and Manning (2014)",
"ref_id": "BIBREF4"
},
{
"start": 601,
"end": 632,
"text": "Kiperwasser and Goldberg (2016)",
"ref_id": "BIBREF20"
},
{
"start": 635,
"end": 659,
"text": "Dozat and Manning (2017)",
"ref_id": "BIBREF8"
},
{
"start": 675,
"end": 695,
"text": "Ambati et al. (2016)",
"ref_id": "BIBREF0"
},
{
"start": 711,
"end": 730,
"text": "Kasai et al. (2018)",
"ref_id": "BIBREF17"
},
{
"start": 773,
"end": 783,
"text": "Lee (2018)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While humans certainly make use of lexical information in sentence processing (MacDonald et al., 1994; Trueswell and Tanenhaus, 1994) , it is also clear that we are able to analyze sentences in the absence of known words. This can be seen most readily by our ability to understand Lewis Carroll's poem, Jabberwocky (Carroll, 1883) , in which open class items are replaced by non-words.",
"cite_spans": [
{
"start": 78,
"end": 102,
"text": "(MacDonald et al., 1994;",
"ref_id": "BIBREF24"
},
{
"start": 103,
"end": 133,
"text": "Trueswell and Tanenhaus, 1994)",
"ref_id": "BIBREF38"
},
{
"start": 315,
"end": 330,
"text": "(Carroll, 1883)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Twas brillig, and the slithy toves / Did gyre and gimble in the wabe; / All mimsy were the borogoves, / And the mome raths outgrabe. Work in neurolinguistics and psycholinguistics has demonstrated the human capacity for unlexicalized parsing experimentally, showing that humans can analyze syntactic structure even in the presence of pseudo-words (Stromswold et al., 1996; Friederici et al., 2000; Kharkwal, 2014).",
"cite_spans": [
{
"start": 336,
"end": 361,
"text": "(Stromswold et al., 1996;",
"ref_id": "BIBREF37"
},
{
"start": 362,
"end": 386,
"text": "Friederici et al., 2000;",
"ref_id": "BIBREF11"
},
{
"start": 387,
"end": 402,
"text": "Kharkwal, 2014)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The word embeddings used by current lexicalized parsers are of no help in sentences with nonce words. Yet, the degree to which these parsers depend on the information contained in these embeddings is at present unknown. Parsing evaluation on such nonce sentences is, therefore, critical to bridge the gap between cognitive models and data-driven machine learning models of sentence processing. Moreover, understanding the degree to which parsers depend upon lexical information is also of practical importance. It is advantageous for a syntactic parser to generalize well across different domains. Yet, heavy reliance upon lexical information could have detrimental effects on out-of-domain parsing, because lexical input will carry genre-specific information (Gildea, 2001).",
"cite_spans": [
{
"start": 776,
"end": 790,
"text": "(Gildea, 2001)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we investigate the contribution of lexical information (via distributed lexical representations) by focusing on a state-of-the-art graph-based dependency parsing model (Dozat and Manning, 2017) in a series of controlled experiments. Concretely, we simulate jabberwocky parsing by adding noise to the representation of words in the input and observe how parsing performance varies. We test two types of noise: one in which words are replaced with an out-of-vocabulary word without a lexical representation, and a second in which words are replaced with others (with associated lexical representations) that match in their Penn Treebank (PTB)-style fine-grained part of speech. The second approach is similar to the method that Gulordava et al. (2018) propose to assess syntactic generalization in LSTM language models.",
"cite_spans": [
{
"start": 182,
"end": 206,
"text": "(Dozat and Manning, 2017",
"ref_id": "BIBREF8"
},
{
"start": 741,
"end": 764,
"text": "Gulordava et al. (2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In both cases, we find that the performance of the state-of-the-art graph parser dramatically suffers from the noise. In fact, we show that the performance of a lexicalized graph-based parser is substantially worse than that of an unlexicalized graph-based parser in the presence of lexical noise, even when the lexical content of frequent or function words is preserved. This dependence on lexical information presents a severe challenge when applying the parser to a different domain or heterogeneous data, and we will demonstrate that indeed parsers trained on the PTB WSJ corpus achieve much lower performance on the Brown corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "On the positive side, we find that word dropout (Iyyer et al., 2015), applied more aggressively than is commonly done (Kiperwasser and Goldberg, 2016; de Lhoneux et al., 2017; Nguyen et al., 2017; Ji et al., 2017; Dozat and Manning, 2017; Bhat et al., 2017; Peng et al., 2017, 2018), remedies the susceptibility to lexical noise. Furthermore, our results show that models trained on the PTB WSJ corpus with word dropout significantly outperform those trained without word dropout in parsing the out-of-domain Brown corpus, confirming the practical significance of jabberwocky parsing experiments.",
"cite_spans": [
{
"start": 48,
"end": 68,
"text": "(Iyyer et al., 2015)",
"ref_id": "BIBREF14"
},
{
"start": 119,
"end": 151,
"text": "(Kiperwasser and Goldberg, 2016;",
"ref_id": "BIBREF20"
},
{
"start": 152,
"end": 176,
"text": "de Lhoneux et al., 2017;",
"ref_id": "BIBREF22"
},
{
"start": 177,
"end": 197,
"text": "Nguyen et al., 2017;",
"ref_id": "BIBREF28"
},
{
"start": 198,
"end": 214,
"text": "Ji et al., 2017;",
"ref_id": "BIBREF15"
},
{
"start": 215,
"end": 239,
"text": "Dozat and Manning, 2017;",
"ref_id": "BIBREF8"
},
{
"start": 240,
"end": 258,
"text": "Bhat et al., 2017;",
"ref_id": null
},
{
"start": 259,
"end": 276,
"text": "Peng et al., 2017",
"ref_id": "BIBREF32"
},
{
"start": 277,
"end": 296,
"text": "Peng et al., , 2018",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Here we focus on a parser with deep biaffine attention (Dozat and Manning, 2017), a state-of-the-art graph-based parsing model, and assess its ability to generalize over lexical noise.",
"cite_spans": [
{
"start": 77,
"end": 102,
"text": "(Dozat and Manning, 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Models",
"sec_num": "2"
},
{
"text": "The input for each word is the concatenation of a 100-dimensional embedding of the word and a 25-dimensional embedding of its PTB part of speech (POS). We initialize all word embeddings to be a zero vector, and the out-of-vocabulary word is also mapped to a zero vector in testing. The POS embeddings are randomly initialized. We do not use any pretrained word embeddings throughout our experiments, in order to encourage the model to find abstractions over the POS embeddings. Importantly, PTB POS categories also encode morphological features that should be accessible in jabberwocky situations. We also conducted experiments taking as input words, universal POS tags, and character CNNs (Ma and Hovy, 2016). We observed similar patterns throughout the experiments. While those approaches can more easily scale to other languages, one concern is that the character CNNs can encode the identity of short words alongside their morphological properties, and therefore would not achieve a pure jabberwocky situation. For this reason, we only present results using fine-grained POS.",
"cite_spans": [
{
"start": 691,
"end": 710,
"text": "(Ma and Hovy, 2016)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Input Representations",
"sec_num": null
},
{
"text": "Biaffine Parser Figure 1 shows our biaffine parsing architecture.",
"cite_spans": [],
"ref_spans": [
{
"start": 16,
"end": 24,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Input Representations",
"sec_num": null
},
{
"text": "Following Dozat and Manning (2017) and Kiperwasser and Goldberg (2016), we use BiLSTMs to obtain features for each word in a sentence. We first perform unlabeled arc-factored scoring using the final output vectors from the BiLSTMs, and then label the resulting arcs. Specifically, suppose that we score edges coming into the ith word in a sentence, i.e., assigning scores to the potential parents of the ith word. Denote the final output vector from the BiLSTM for the kth word by h_k and suppose that h_k is d-dimensional. Then, we produce two vectors from two separate multilayer perceptrons (MLPs) with the ReLU activation:",
"cite_spans": [
{
"start": 10,
"end": 34,
"text": "Dozat and Manning (2017)",
"ref_id": "BIBREF8"
},
{
"start": 39,
"end": 70,
"text": "Kiperwasser and Goldberg (2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Input Representations",
"sec_num": null
},
{
"text": "h_k^(arc-dep) = MLP^(arc-dep)(h_k), h_k^(arc-head) = MLP^(arc-head)(h_k)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input Representations",
"sec_num": null
},
{
"text": "where h_k^(arc-dep) and h_k^(arc-head) are d_arc-dimensional vectors that represent the kth word as a dependent and a head, respectively. Now, suppose the kth row of the matrix H^(arc-head) is h_k^(arc-head). Then, the probability distribution s_i over the potential heads of the ith word is computed by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input Representations",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s_i = softmax(H^(arc-head) W^(arc) h_i^(arc-dep) + H^(arc-head) b^(arc))",
"eq_num": "(1)"
}
],
"section": "Input Representations",
"sec_num": null
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input Representations",
"sec_num": null
},
{
"text": "W^(arc) ∈ R^(d_arc × d_arc) and b^(arc) ∈ R^(d_arc).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input Representations",
"sec_num": null
},
{
"text": "In training, we simply take the greedy maximum probability to predict the parent of each word. In the testing phase, we use the heuristics formulated by Dozat and Manning (2017) to ensure that the resulting parse is single-rooted and acyclic. Given the head prediction of each word in the sentence, we assign labeling scores using vectors obtained from two additional MLPs with ReLU. For the kth word, we obtain:",
"cite_spans": [
{
"start": 153,
"end": 177,
"text": "Dozat and Manning (2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Input Representations",
"sec_num": null
},
{
"text": "h_k^(rel-dep) = MLP^(rel-dep)(h_k), h_k^(rel-head) = MLP^(rel-head)(h_k), where h_k^(rel-dep), h_k^(rel-head) ∈ R^(d_rel).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input Representations",
"sec_num": null
},
{
"text": "Let p_i be the index of the predicted head of the ith word, and r be the number of dependency relations in the dataset. Then, the probability distribution ℓ_i over the possible dependency relations of the arc pointing from the p_i-th word to the ith word is calculated by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input Representations",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "ℓ_i = softmax((h_{p_i}^(rel-head))^T U^(rel) h_i^(rel-dep) + W^(rel) (h_i^(rel-head) + h_{p_i}^(rel-head)) + b^(rel))",
"eq_num": "(2)"
}
],
"section": "Input Representations",
"sec_num": null
},
{
"text": "where U^(rel) ∈ R^(d_rel × d_rel × r), W^(rel) ∈ R^(r × d_rel), and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input Representations",
"sec_num": null
},
{
"text": "b^(rel) ∈ R^r.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Input Representations",
"sec_num": null
},
{
"text": "We generally follow the hyperparameters chosen in Dozat and Manning (2017). Specifically, we use BiLSTM layers with 400 units each. Input, layer-to-layer, and recurrent dropout rates are all 0.33. The depth of all MLPs is 1, and the MLPs for unlabeled attachment and those for labeling contain 500 (d_arc) and 100 (d_rel) units respectively. We train this model with the Adam algorithm to minimize the sum of the cross-entropy losses from head predictions (s_i from Eq. 1) and label predictions (ℓ_i from Eq. 2), with learning rate 0.001 and batch size 100 (Kingma and Ba, 2015). After each training epoch, we test the parser on the dev set. When labeled attachment score (LAS) does not improve on five consecutive epochs, training ends.",
"cite_spans": [
{
"start": 50,
"end": 74,
"text": "Dozat and Manning (2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Input Representations",
"sec_num": null
},
{
"text": "Dropout regularizes neural networks by randomly setting units to zero with some probability during training (Srivastava et al., 2014). In addition to the usual dropout, we consider applying word dropout, a variant of dropout that targets entire words and therefore entire rows in the lexical embedding matrix (Iyyer et al., 2015). The intuition we follow here is that a trained network will be less dependent on lexical information, and more successful in a jabberwocky context, if lexical information is less reliably present during training. We consider a number of word dropout schemes. Iyyer et al. (2015) introduced the regularization technique of word dropout, in which lexical items are replaced by the \"unknown\" word with some fixed probability p, and demonstrated that it improves performance for the task of text classification. Replacing words with the out-of-vocabulary word exposes the networks to out-of-vocabulary words that otherwise only occur in testing. In our experiments, we will use word dropout rates of 0.2, 0.4, 0.6, and 0.8.",
"cite_spans": [
{
"start": 108,
"end": 133,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF36"
},
{
"start": 306,
"end": 326,
"text": "(Iyyer et al., 2015)",
"ref_id": "BIBREF14"
},
{
"start": 588,
"end": 607,
"text": "Iyyer et al. (2015)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dropout as Regularization",
"sec_num": "3"
},
{
"text": "Dropping words with the same probability across the vocabulary might not behave as an ideal regularizer. The network's dependence on frequent words or function words is less likely to lead to overfitting on the training data or corpus-specific properties, as the distribution of such words is less variable across different corpora. To avoid penalizing the networks for utilizing lexical information (in the form of word embeddings) for frequent words, Kiperwasser and Goldberg (2016) propose that word dropout should be applied to a word with a probability inversely proportional to the word's frequency. Specifically, they drop out each word w that appears #(w) times in the training data with probability:",
"cite_spans": [
{
"start": 453,
"end": 484,
"text": "Kiperwasser and Goldberg (2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency-based Word Dropout",
"sec_num": null
},
{
"text": "p_w = α / (#(w) + α)    (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency-based Word Dropout",
"sec_num": null
},
{
"text": "Kiperwasser and Goldberg (2016) set α = 0.25, which leads to relatively little word dropout. In our WSJ training data, α = 0.25 yields an expected word dropout rate of 0.009 in training, an order of magnitude less than commonly used rates in uniform word dropout. We experimented with α = 0.25, 1, 40, 352, and 2536, where the last three values yield expected word dropout rates of 0.2, 0.4, and 0.6 (the uniform dropout rates we consider). In fact, we will confirm that α needs to be much larger to significantly improve robustness to lexical noise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency-based Word Dropout",
"sec_num": null
},
{
"text": "Open Class Word Dropout The frequency-based word dropout scheme punishes the model less for relying upon frequent words in the training data. However, some words may occur frequently in the training data because of corpus-specific properties of the data. For instance, in the PTB WSJ training data, the word \"company\" is the 40th most frequent word. If our aim is to construct a parser that can perform well in different domains or across heterogeneous data, the networks should not depend upon such corpus-specific word senses. Hence, we propose to apply word dropout only to open class (non-function) words with a certain probability. We experimented with open class word dropout rates of 0.38 and 0.75 (where open class words are zeroed out 38% or 75% of the time), corresponding to expected overall dropout rates of 0.2 and 0.4 respectively. To identify open class words in the data, we used the following criteria. We consider a word to be an open class word if and only if: 1) the gold UPOS is \"NOUN\", \"PROPN\", \"NUM\", \"ADJ\", or \"ADV\", or 2) the gold UPOS is \"VERB\", the gold XPOS (PTB POS) is not \"MD\", and the lemma is not \"be\", \"have\", or \"do\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequency-based Word Dropout",
"sec_num": null
},
{
"text": "We test trained parsers on input that contains two types of lexical noise, designed to assess their ability to abstract away from idiosyncratic/collocational properties of lexical items: 1) colorless green noise and 2) jabberwocky noise. The former randomly exchanges words with their PTB POS preserved, and the latter zeroes out the embeddings for words (i.e., replacing words with an out-of-vocabulary word). In either case, we keep the POS input to the parsers intact.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Colorless Green Experiments Gulordava et al. (2018) propose a framework to evaluate the generalization ability of LSTM language models that abstracts away from idiosyncratic properties of words or collocational information. In particular, they generate nonce sentences by randomly replacing words in the original sentences while preserving part-of-speech and morphological features. This can be thought of as a computational approach to producing sentences that are \"grammatical\" yet meaningless, exemplified by the famous example \"colorless green ideas sleep furiously\" (Chomsky, 1957). Jabberwocky Experiments One potential shortcoming of the approach above is that it produces sentences which might violate constraints that are imposed by specific lexical items, but which are not represented by the POS category. For instance, this approach could generate a sentence like \"it stays the shuttle\", in which the intransitive verb \"stay\" takes an object (Gulordava et al., 2018). Out-of-domain Experiments We also explore a practical aspect of our experiments with lexical noise. We apply our parsers that are trained on the WSJ corpus to the Brown corpus and observe how parsers with various configurations perform. 5 Prior work showed that parsers trained on WSJ yield degraded performance on the Brown corpus (Gildea, 2001), despite the fact that the average sentence length is shorter in the Brown corpus (23.85 tokens for WSJ; 20.56 for Brown). We show that robustness to lexical noise improves out-of-domain parsing.",
"cite_spans": [
{
"start": 28,
"end": 51,
"text": "Gulordava et al. (2018)",
"ref_id": "BIBREF13"
},
{
"start": 571,
"end": 585,
"text": "(Chomsky, 1957",
"ref_id": "BIBREF5"
},
{
"start": 953,
"end": 976,
"text": "(Gulordava et al., 2018",
"ref_id": "BIBREF13"
},
{
"start": 1214,
"end": 1215,
"text": "5",
"ref_id": null
},
{
"start": 1309,
"end": 1323,
"text": "(Gildea, 2001)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Baseline Parsers Lexical information is clearly useful for certain parsing decisions, such as PP-attachment. As a result, a lexicalized parser clearly should make use of such information when 3 Prior work in psycholinguistics argued that verbs can in fact be used in novel argument structure constructions and assigned coherent interpretations on the fly (Johnson and Goldberg, 2013). Our colorless green parsing experiments can be interpreted as a simulation of such situations. 4 An anonymous reviewer notes that, because of its greater complexity, human performance on a jabberwocky version of the WSJ corpus may not be at the level we find when reading the sentences of Lewis Carroll's poem or in the psycholinguistic work that has explored human ability to process jabberwocky-like sentences. We leave it for future work to explore whether human performance in such complex cases is indeed qualitatively different, and also whether the pattern of results changes if we restrict our focus to a syntactically simpler corpus, given a suitable notion of simplicity. 5 We initially intended to apply our trained parsers to the Universal Dependencies corpus (Nivre et al., 2015) for out-of-domain experiments as well, but we found annotation inconsistencies and problems in the conversion from phrase structures to universal dependencies. We leave this for future work.",
"cite_spans": [
{
"start": 191,
"end": 192,
"text": "3",
"ref_id": null
},
{
"start": 355,
"end": 383,
"text": "(Johnson and Goldberg, 2013)",
"ref_id": "BIBREF16"
},
{
"start": 482,
"end": 483,
"text": "4",
"ref_id": null
},
{
"start": 1068,
"end": 1069,
"text": "5",
"ref_id": null
},
{
"start": 1156,
"end": 1176,
"text": "(Nivre et al., 2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "it is available, and may well perform less well when it is not. In fact, in jabberwocky and colorless green settings, the absence of lexical information may lead to an underdetermination of the parse by the POS or word sequence, so that there is no non-arbitrary \"gold standard\" parse. As a result, simply observing a performance drop of a parser in the face of lexical noise does not help to establish an appropriate baseline with respect to how well a parser can be expected to perform in a lexically noisy setting. We propose three baseline parsers: 1) an unlexicalized parser where the network input is only POS tags, 2) a \"top 100\" parser where the network input is only POS tags and lexical information for the 100 most frequent words and 3) a \"function word\" parser where the network input is only POS tags and lexical information for function words. Each baseline parser can be thought of as specialized to the corresponding colorless green and jabberwocky experiments. For example, the unlexicalized parser gives us an upper bound for full colorless green and jabberwocky experiments because the parser is ideally adapted to the unlexicalized situation, as it has no dependence on lexical information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We use Universal Dependency representations obtained by converting the Penn Treebank (Marcus et al., 1993) using Stanford CoreNLP (ver. 3.8.0). We follow the standard data split: sections 2-21, 22, and 23 for the training, dev, and test sets respectively. For the out-of-domain experiments, we converted the Brown corpus portion of the PTB into Universal Dependency representations, again using Stanford CoreNLP. 6 We only use gold POS tags in training for simplicity, 7 but we conduct experiments with both gold and predicted POS tags. Experiments with gold POS tags allow us to isolate the effect of lexical noise from POS tagging errors, while those with predicted POS tags simulate more practical situations where POS input is not fully reliable. Somewhat surprisingly, however, we find that relative performance patterns do not change even when using predicted POS tags. All predicted POS tags are obtained from a BiLSTM POS tagger with character CNNs, trained on the same training data (sections 2-21) with hyperparameters from Ma and Hovy (2016) and word embeddings initialized with GloVe vectors (Pennington et al., 2014). We train 5 parsing models for each training configuration with 5 different random initializations and report the mean and standard deviation. 8 We use the CoNLL 2017 official script for evaluation (Zeman et al., 2017).",
"cite_spans": [
{
"start": 87,
"end": 108,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF26"
},
{
"start": 399,
"end": 400,
"text": "6",
"ref_id": null
},
{
"start": 1021,
"end": 1039,
"text": "Ma and Hovy (2016)",
"ref_id": "BIBREF23"
},
{
"start": 1091,
"end": 1116,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF34"
},
{
"start": 1261,
"end": 1262,
"text": "8",
"ref_id": null
},
{
"start": 1316,
"end": 1336,
"text": "(Zeman et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": null
},
{
"text": "Normal Parsing Results Table 1 shows normal parsing results on the dev set. In both gold and predicted POS experiments, we see a significant discrepancy between the performance in rows 2-4 and the rest, suggesting that lexicalization of a dependency parser greatly contributes to parsing performance; having access to the most frequent 100 words (row 3) or the function words (row 4) recovers part of the performance drop from unlexicalization (row 2), but the LAS differences from complete lexicalization (row 1, row 5 and below) are still significant. For each of the three word dropout schemes in gold POS experiments, we see a common pattern: performance improves up to a certain degree of word dropout (Uniform 0.2, Frquencybased 1-40, Open Class 0.38), and it drops after as word dropout becomes more aggressive. This suggests that word dropout also involves the biasvariance trade-off. Although performance generally degrades with predicted POS tags, the patterns of relative performance still hold. Again, for each of the three dropout schemes, there is a certain point in the spectrum of word dropout intensity that achieves the best performance, and such points are almost the same both in the models trained with gold and predicted POS tags. This is a little surprising because a higher word dropout rate encourages the model to rely more on POS input, and noisy POS information from the POS tagger can work against the model. Indeed, we observed this parallelism between experiments with gold and predicted POS tags consistently throughout the colorless green and jabberwocky experiments, and therefore we only report results with gold POS tags for the rest of the colorless green and jabberwocky experiments for simplicity. Full Experiments Table 2 shows results for full colorless green and jabberwocky experiments. 
The models without word dropout yield extremely poor performance both in colorless and jabberwocky settings, suggesting that a graph-based parsing model learns to rely heavily on word information if word dropout is not performed. Here, unlike the normal parsing results, we see monotone increasing performance as word dropout is more aggressively applied, and the performance rises more dramatically. In particular, with uniform word dropout rate 0.2, full jabberwocky performance increases by more than 40 LAS points, suggesting the importance of the parser's exposure to unknown words to abstract away from lexical information. Frequency-based word dropout needs to be performed more aggressively (\u21b5 40) than has previously been done for dependency parsing (Kiperwasser and Goldberg, 2016; Dozat and Manning, 2017) in order to achieve robustness to full lexical noise similar to that obtained with uniform word dropout with p > 0.2. Open class word dropout does not bring any benefit to parsers in the full jabberwocky and colorless green settings. This is probably because parsers trained with open class word dropout has consistent access to function words, and omitting the lexical representations of the function words is very harmful to such parsers. Interestingly, in some of the cases, colorless green outperforms jabberwocky performance, perhaps because noisy word information, even with argument constraint violations, is better than no word information. Top 100 Experiments Table 3 shows the results of top 100 colorless green and jabberwocky experiments. Performance by parsers trained without word dropout is substantially better than what is found in the full colorless green or jabberwocky settings. However, the performance is still much lower than the unlexicalized parsers (2.7 LAS points for colorless green and 2.0 LAS points for jabberwocky), meaning that the parser without word dropout has significant dependence on less frequent words. 
On the other hand, parsers trained with a sufficiently high word dropout rate outperform the unlexicalized parser (e.g., uniform 0.4, frequency-based \u03b1 = 40, and open class 0.38).",
"cite_spans": [
{
"start": 2589,
"end": 2621,
"text": "(Kiperwasser and Goldberg, 2016;",
"ref_id": "BIBREF20"
},
{
"start": 2622,
"end": 2646,
"text": "Dozat and Manning, 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 23,
"end": 30,
"text": "Table 1",
"ref_id": "TABREF3"
},
{
"start": 1754,
"end": 1761,
"text": "Table 2",
"ref_id": "TABREF5"
},
{
"start": 3316,
"end": 3323,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "5"
},
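As an illustration of the three word dropout schemes compared above (an illustrative reimplementation, not the authors' code; the UNK symbol, closed-class tag set, and toy counts are assumptions), each scheme replaces selected training tokens with an unknown-word placeholder:

```python
import random
from collections import Counter

UNK = "<UNK>"  # placeholder token substituted for a dropped word

def uniform_dropout(words, p, rng):
    # Uniform scheme: every token is replaced by UNK with probability p.
    return [UNK if rng.random() < p else w for w in words]

def frequency_dropout(words, counts, alpha, rng):
    # Frequency-based scheme (Kiperwasser and Goldberg, 2016): a word with
    # training frequency f is dropped with probability alpha / (f + alpha),
    # so rare words are dropped more often than frequent ones.
    return [UNK if rng.random() < alpha / (counts[w] + alpha) else w
            for w in words]

def open_class_dropout(tagged, p, closed_tags, rng):
    # Open class scheme: only open-class (content) words are eligible for
    # dropout; function words with closed-class tags keep their lexical form.
    return [(UNK, t) if t not in closed_tags and rng.random() < p else (w, t)
            for w, t in tagged]

def expected_dropout_rate(counts, alpha):
    # Expected fraction of corpus *tokens* dropped under the frequency-based
    # scheme, used to relate a choice of alpha to a uniform dropout rate.
    total = sum(counts.values())
    return sum(f * alpha / (f + alpha) for f in counts.values()) / total
```

Because corpus tokens are dominated by frequent words, a small alpha yields a low expected token dropout rate, which is why much larger alpha values are needed to match aggressive uniform rates.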
{
"text": "Frequency-based word dropout is very effective. Recall that \u21b5 = 352, 2536 correspond to the expected word dropout rates of 0.4 and 0.6. The two configurations yield better results than uniform 0.4 and 0.6. Table 4 gives the results of open class colorless green and jabberwocky experiments. We see similar patterns to the top 100 jabberwocky experiments except that open class word dropout performs better and frequencybased word dropout performs worse, which naturally follows from the way we train the parsers.",
"cite_spans": [],
"ref_spans": [
{
"start": 206,
"end": 213,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "5"
},
{
"text": "Out-of-domain Parsing Table 5 reports parsing performance on the CF domain in the Brown corpus. 9 POS tagging accuracy was 96.4%, relatively low as compared to in-domain performance in WSJ. Gildea (2001) mance in the Brown corpus. In contrast, our results show that lexicalization of the neural network dependency parsers via distributed representations facilitates parsing performance. Most of the gains from lexicalization stem from the 100 most frequent words (row 3) and function words (row 4), but we get a significant improvement by performing relatively aggressive word dropout (\u21b5 = 40). This confirms the practical significance of jabberwocky parsing experiments.",
"cite_spans": [
{
"start": 190,
"end": 203,
"text": "Gildea (2001)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 22,
"end": 29,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Open Class Experiments",
"sec_num": null
},
{
"text": "Colorless Green vs. Jabberwocky Earlier we hypothesized that colorless green lexical noise could mislead the parser by violating argument structure constraints. And in fact, we saw in the previous section that jabberwocky parsing results are generally better than colorless green parsing results. In order to further test our hypothesis, we provide a breakdown of parsing performance broken down by dependency type (Nivre et al., 2015) . Table 6 provides the open class performance breakdown for a parser trained with uniform word dropout rate 0.2. We observe that in the colorless green situation, the parser particularly suffers in \"iobj\" and \"dobj\", consistent with what we would expect for parsing sentences with argument structure violations. For example, the colorless green scheme does not prevent us from replacing a ditransitive verb with a non-ditransitive verb. This result validates our hypothesis, and the colorless green scheme is limited as a framework to purely assess parser's generalization ability.",
"cite_spans": [
{
"start": 415,
"end": 435,
"text": "(Nivre et al., 2015)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 438,
"end": 445,
"text": "Table 6",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Analysis of Results",
"sec_num": "6"
},
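A per-relation breakdown like the one in Table 6 can be computed along these lines (an illustrative sketch, not the evaluation code used in the paper; the per-token input format is an assumption):

```python
from collections import defaultdict

def las_by_relation(gold, pred):
    # gold, pred: aligned per-token (head_index, relation) pairs.
    # A token counts as correct for labeled attachment score (LAS) only if
    # both its head and its relation label match the gold annotation;
    # totals are bucketed by the *gold* relation (e.g. "dobj", "iobj").
    correct = defaultdict(int)
    total = defaultdict(int)
    for (gh, gr), (ph, pr) in zip(gold, pred):
        total[gr] += 1
        if gh == ph and gr == pr:
            correct[gr] += 1
    return {rel: correct[rel] / total[rel] for rel in total}
```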
{
"text": "Trained Word Embeddings The logic of our jabberwocky experiments and the role of word dropout involves the assumption that the parser will succeed in the absence of lexical information if it depends more heavily on the grammatical category information present in the POS embeddings. However, this fails to allow for the possibility that the word embeddings themselves might also represent POS information. It might instead be the case that what word dropout does is merely discouraging the model from constructing grammatical abstractions in the word embeddings, depending instead on the information present in the POS embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Results",
"sec_num": "6"
},
{
"text": "To test this possibility, we attempt to use a simple softmax regression on top of the induced word embeddings to detect the presence of grammatical category information. Concretely, we find the set of words that occur 100 times or more in the training data whose unigram probability for the most frequent POS tag is greater than 0.5. In this way, we obtain a set of pairs of a word and its most frequent POS tag, and we then randomly split the set to run 5-fold cross validation. Figure 2 shows POS prediction accuracy over varying uniform word dropout rates. These results do not support the idea that word dropout reduces the representation of grammatical category information in word embedding. On the contrary, word dropout rates of 0.2 through 0.6 leads to word embeddings that better represent POS.",
"cite_spans": [],
"ref_spans": [
{
"start": 480,
"end": 488,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Analysis of Results",
"sec_num": "6"
},
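The probe dataset construction described above (words occurring at least 100 times whose majority tag has unigram probability above 0.5, split for 5-fold cross-validation) can be sketched as follows; this is a reimplementation of the described procedure with the thresholds exposed as parameters, not the authors' code:

```python
import random
from collections import Counter, defaultdict

def build_pos_probe_data(tagged_tokens, min_count=100, min_purity=0.5):
    # tagged_tokens: iterable of (word, pos) pairs from the training data.
    # Keep each word seen at least min_count times whose most frequent tag
    # accounts for strictly more than min_purity of its occurrences.
    tag_counts = defaultdict(Counter)
    for word, tag in tagged_tokens:
        tag_counts[word][tag] += 1
    pairs = []
    for word, counts in tag_counts.items():
        total = sum(counts.values())
        tag, n = counts.most_common(1)[0]
        if total >= min_count and n / total > min_purity:
            pairs.append((word, tag))
    return pairs

def kfold(pairs, k=5, seed=0):
    # Shuffle the (word, tag) pairs and split them into k folds for
    # cross-validated training of the softmax-regression probe.
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    return [pairs[i::k] for i in range(k)]
```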
{
"text": "Parser Weight Sizes What then is the effect of word dropout on the network? Figure 3 shows L2 norms of the trained POS embeddings (46 by 25) and the input/output gate weights for POS input in the first BiLSTM layer (400 by 25) across varying uniform word dropout rates. We can observe that L2 norm for each becomes larger as the word dropout rate increases. This suggests that what word dropout does is to encourage the model to make greater use of POS information. ",
"cite_spans": [],
"ref_spans": [
{
"start": 76,
"end": 84,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis of Results",
"sec_num": "6"
},
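The weight-size comparison in Figure 3 amounts to taking the entrywise L2 (Frobenius) norm of each weight matrix; a minimal sketch, with matrices represented here as nested lists for simplicity:

```python
import math

def frobenius_norm(matrix):
    # Entrywise L2 norm of a weight matrix, e.g. a 46-by-25 POS embedding
    # table or a 400-by-25 slice of LSTM gate weights for the POS input.
    return math.sqrt(sum(x * x for row in matrix for x in row))
```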
{
"text": "We conducted extensive analysis on the robustness of a state-of-the-art graph-based dependency parser against lexical noise. Our experiments showed that parsers trained under usual regimens performed poorly in the face of lexical noise. However, we demonstrated that the technique of word dropout, when applied aggressively, remedies this problem without sacrificing parsing performance. Word dropout is commonly used in literature, but our results provide further guidance about how it should be used in the future. In future work, we would like to compare our results with different parsing architectures such as transitionbased parsers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "We use the same criteria for open class words as in the open class word dropout.2 This shortcoming might be overcome by using lexical resources like PropBank(Palmer et al., 2005) or NomBank(Meyers et al., 2004) to guide word substitutions. In this paper, we do not do this, follow Gulordava et al.'s approach for the creation of colorless green sentences. We instead use the jabberwocky manipulation to avoid creating sentences that violate selectional constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We exclude the domains of CL and CP in the Brown corpus because the Stanford CoreNLP converter encountered an error.7 One could achieve better results by training a parser on predicted POS tags obtained from jackknife training, but improving normal parsing performance is not the focus of our work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our code is available at https://github.com/ jungokasai/graph_parser for easy replication.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We found similar patterns in relative performance when applied to the other domains of the Brown corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Shift-reduce ccg parsing using neural network models",
"authors": [
{
"first": "Bharat",
"middle": [
"Ram"
],
"last": "Ambati",
"suffix": ""
},
{
"first": "Tejaswini",
"middle": [],
"last": "Deoskar",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2016,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bharat Ram Ambati, Tejaswini Deoskar, and Mark Steedman. 2016. Shift-reduce ccg parsing using neural network models. In NAACL.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Leveraging newswire treebanks for parsing conversational data with argument scrambling",
"authors": [],
"year": 2017,
"venue": "IWPT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Riyaz Ahmad Bhat, Irshad Ahmad Bhat, and Dipti Misra Sharma. 2017. Leveraging newswire treebanks for parsing conversational data with argu- ment scrambling. In IWPT.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Through the Looking-Glass",
"authors": [
{
"first": "Lewis",
"middle": [],
"last": "Carroll",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lewis Carroll. 1883. Through the Looking-Glass. Macmillan and Co., New York.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A maximum-entropy-inspired parser",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2000,
"venue": "ANLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak. 2000. A maximum-entropy-inspired parser. In ANLP.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A fast and accurate dependency parser using neural networks",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "740--750",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural net- works. In EMNLP, pages 740-750, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Syntactic Structures. Mouton & Co",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Chomsky",
"suffix": ""
}
],
"year": 1957,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noam Chomsky. 1957. Syntactic Structures. Mouton & Co., Berlin, Germany.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A new statistical parser based on bigram lexical dependencies",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1996,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 1996. A new statistical parser based on bigram lexical dependencies. In ACL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Head-Driven Statistical Models for Natural Language Parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 1999. . Head-Driven Statistical Mod- els for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Deep biaffine attention for neural dependency parsing",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Dozat and Christopher Manning. 2017. Deep biaffine attention for neural dependency parsing. In ICLR.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Recurrent neural network grammars",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Adhiguna",
"middle": [],
"last": "Kuncoro",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2016,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In NAACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Treebank parsing and knowledge of language : A cognitive perspective",
"authors": [
{
"first": "Sandiway",
"middle": [],
"last": "Fong",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"C"
],
"last": "Berwick",
"suffix": ""
}
],
"year": 2008,
"venue": "CogSci",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandiway Fong and Robert C. Berwick. 2008. Tree- bank parsing and knowledge of language : A cogni- tive perspective. In CogSci.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Auditory language comprehension: an event-related fmri study on the processing of syntactic and lexical information",
"authors": [
{
"first": "Angela",
"middle": [
"D"
],
"last": "Friederici",
"suffix": ""
},
{
"first": "Mia",
"middle": [
"Viktoria"
],
"last": "Meyer",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yves Von Cramon",
"suffix": ""
}
],
"year": 2000,
"venue": "Brain and language",
"volume": "75",
"issue": "",
"pages": "289--300",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angela D. Friederici, Mia Viktoria Meyer, and D. Yves von Cramon. 2000. Auditory language comprehen- sion: an event-related fmri study on the processing of syntactic and lexical information. Brain and lan- guage, 75 3:289-300.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Corpus variation and parser performance",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2001,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gildea. 2001. Corpus variation and parser per- formance. In EMNLP.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Colorless green recurrent networks dream hierarchically",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Gulordava",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In NAACL. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Deep unordered composition rivals syntactic methods for text classification",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Varun",
"middle": [],
"last": "Manjunatha",
"suffix": ""
},
{
"first": "Jordan",
"middle": [
"L"
],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Iyyer, Varun Manjunatha, Jordan L. Boyd- Graber, and Hal Daum\u00e9. 2015. Deep unordered composition rivals syntactic methods for text clas- sification. In ACL.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A fast and lightweight system for multilingual dependency parsing",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Yuanbin",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Man",
"middle": [],
"last": "Lan",
"suffix": ""
}
],
"year": 2017,
"venue": "CoNLL Shared Task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Ji, Yuanbin Wu, and Man Lan. 2017. A fast and lightweight system for multilingual dependency parsing. In CoNLL Shared Task.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Evidence for automatic accessing of constructional meaning: Jabberwocky sentences prime associated verbs",
"authors": [
{
"first": "Matt",
"middle": [
"A"
],
"last": "Johnson",
"suffix": ""
},
{
"first": "Adele",
"middle": [
"E"
],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2013,
"venue": "Language and Cognitive Processes",
"volume": "28",
"issue": "10",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt A. Johnson and Adele E. Goldberg. 2013. Ev- idence for automatic accessing of constructional meaning: Jabberwocky sentences prime associ- ated verbs. Language and Cognitive Processes, 28(10):14391452.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "End-to-end graph-based TAG parsing with neural networks",
"authors": [
{
"first": "Jungo",
"middle": [],
"last": "Kasai",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Pauli",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Merrill",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jungo Kasai, Robert Frank, Pauli Xu, William Merrill, and Owen Rambow. 2018. End-to-end graph-based TAG parsing with neural networks. In NAACL. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Taming the Jabberwocky: Examining Sentence Processing with Novel Words",
"authors": [
{
"first": "Gaurav",
"middle": [],
"last": "Kharkwal",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gaurav Kharkwal. 2014. Taming the Jabberwocky: Examining Sentence Processing with Novel Words. Ph.D. thesis, Rutgers University.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "ADAM: A Method for Stochastic Optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [
"Lei"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Lei Ba. 2015. ADAM: A Method for Stochastic Optimization. In ICLR.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Simple and accurate dependency parsing using bidirectional LSTM feature representations. TACL",
"authors": [
{
"first": "Eliyahu",
"middle": [],
"last": "Kiperwasser",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "4",
"issue": "",
"pages": "313--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eliyahu Kiperwasser and Yoav Goldberg. 2016. Sim- ple and accurate dependency parsing using bidirec- tional LSTM feature representations. TACL, 4:313- 327.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Accurate unlexicalized parsing",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2003. Accu- rate unlexicalized parsing. In ACL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "From raw text to universal dependencies -look",
"authors": [
{
"first": "Yan",
"middle": [],
"last": "Miryam De Lhoneux",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Shao",
"suffix": ""
},
{
"first": "Eliyahu",
"middle": [],
"last": "Basirat",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Kiperwasser",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Stymne",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miryam de Lhoneux, Yan Shao, Ali Basirat, Eliyahu Kiperwasser, Sara Stymne, Yoav Goldberg, and Joakim Nivre. 2017. From raw text to universal de- pendencies -look, no tags! In CoNLL Shared Task.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "1064--1074",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs- CRF. In ACL, pages 1064-1074, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Lexical nature of syntactic ambiguity resolution",
"authors": [
{
"first": "Maryellen",
"middle": [
"C"
],
"last": "Macdonald",
"suffix": ""
},
{
"first": "Neal",
"middle": [
"J"
],
"last": "Pearlmutter",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"S"
],
"last": "Seidenberg",
"suffix": ""
}
],
"year": 1994,
"venue": "Psychological Review",
"volume": "101",
"issue": "",
"pages": "676--703",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maryellen C. MacDonald, Neal J. Pearlmutter, and Mark S. Seidenberg. 1994. Lexical nature of syn- tactic ambiguity resolution. Psychological Review, 101:676-703.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mc-Closky",
"suffix": ""
}
],
"year": 2014,
"venue": "Association for Computational Linguistics (ACL) System Demonstrations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In Association for Compu- tational Linguistics (ACL) System Demonstrations.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Building a large annotated corpus of english: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of english: The Penn Treebank. Computa- tional Linguistics, 19(2):313-330.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "The NomBank project: An interim report",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Meyers",
"suffix": ""
},
{
"first": "Ruth",
"middle": [],
"last": "Reeves",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Macleod",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Szekely",
"suffix": ""
},
{
"first": "Veronika",
"middle": [],
"last": "Zielinska",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Workshop on Frontiers in Corpus Annotation at HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Meyers, Ruth Reeves, Catherine Macleod, Rachel Szekely, Veronika Zielinska, Brian Young, and Ralph Grishman. 2004. The NomBank project: An interim report. In Proceedings of the Workshop on Frontiers in Corpus Annotation at HLT-NAACL.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A novel neural network model for joint pos tagging and graph-based dependency parsing",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Dat Quoc Nguyen",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dras",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2017,
"venue": "CoNLL Shared Task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dat Quoc Nguyen, Mark Dras, and Mark Johnson. 2017. A novel neural network model for joint pos tagging and graph-based dependency parsing. In CoNLL Shared Task.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Yuji Matsumoto",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "\u017deljko",
"middle": [],
"last": "Agi\u0107",
"suffix": ""
},
{
"first": "Maria",
"middle": [
"Jesus"
],
"last": "Aranzabe",
"suffix": ""
},
{
"first": "Masayuki",
"middle": [],
"last": "Asahara",
"suffix": ""
},
{
"first": "Aitziber",
"middle": [],
"last": "Atutxa",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Kepa",
"middle": [],
"last": "Bengoetxea",
"suffix": ""
},
{
"first": "Riyaz",
"middle": [
"Ahmad"
],
"last": "Bhat",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [
"G A"
],
"last": "Celano",
"suffix": ""
},
{
"first": "Miriam",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Arantza",
"middle": [],
"last": "Diaz De Ilarraza",
"suffix": ""
},
{
"first": "Kaja",
"middle": [],
"last": "Dobrovoljc",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Toma\u017e",
"middle": [],
"last": "Erjavec",
"suffix": ""
},
{
"first": "Rich\u00e1rd",
"middle": [],
"last": "Farkas",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Galbraith",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Iakes",
"middle": [],
"last": "Goenaga",
"suffix": ""
},
{
"first": "Koldo",
"middle": [],
"last": "Gojenola",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Berta",
"middle": [],
"last": "Gonzales",
"suffix": ""
},
{
"first": "Bruno",
"middle": [],
"last": "Guillaume",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Dag",
"middle": [],
"last": "Haug",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Ion",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Irimia",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Johannsen",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Kanayama",
"suffix": ""
},
{
"first": "Jenna",
"middle": [],
"last": "Kanerva",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Krek",
"suffix": ""
},
{
"first": "Veronika",
"middle": [],
"last": "Laippala",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Lenci ; Sampo Pyysalo",
"suffix": ""
},
{
"first": "Loganathan",
"middle": [],
"last": "Ramasamy",
"suffix": ""
},
{
"first": "Rudolf",
"middle": [],
"last": "Rosa",
"suffix": ""
},
{
"first": "Shadi",
"middle": [],
"last": "Saleh",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Seeker",
"suffix": ""
}
],
"year": null,
"venue": "Mojgan Seraji",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre,\u017deljko Agi\u0107, Maria Jesus Aranzabe, Masayuki Asahara, Aitziber Atutxa, Miguel Balles- teros, John Bauer, Kepa Bengoetxea, Riyaz Ah- mad Bhat, Cristina Bosco, Sam Bowman, Giuseppe G. A. Celano, Miriam Connor, Marie-Catherine de Marneffe, Arantza Diaz de Ilarraza, Kaja Do- brovoljc, Timothy Dozat, Toma\u017e Erjavec, Rich\u00e1rd Farkas, Jennifer Foster, Daniel Galbraith, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Yoav Goldberg, Berta Gonzales, Bruno Guillaume, Jan Haji\u010d, Dag Haug, Radu Ion, Elena Irimia, An- ders Johannsen, Hiroshi Kanayama, Jenna Kan- erva, Simon Krek, Veronika Laippala, Alessan- dro Lenci, Nikola Ljube\u0161i\u0107, Teresa Lynn, Christo- pher Manning, C\u0203t\u0203lina M\u0203r\u0203nduc, David Mare\u010dek, H\u00e9ctor Mart\u00ednez Alonso, Jan Ma\u0161ek, Yuji Mat- sumoto, Ryan McDonald, Anna Missil\u00e4, Verginica Mititelu, Yusuke Miyao, Simonetta Montemagni, Shunsuke Mori, Hanna Nurmi, Petya Osenova, Lilja \u00d8vrelid, Elena Pascual, Marco Passarotti, Cenel- Augusto Perez, Slav Petrov, Jussi Piitulainen, Bar- bara Plank, Martin Popel, Prokopis Prokopidis, Sampo Pyysalo, Loganathan Ramasamy, Rudolf Rosa, Shadi Saleh, Sebastian Schuster, Wolfgang Seeker, Mojgan Seraji, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simk\u00f3, Kiril Simov, Aaron Smith, Jan\u0160t\u011bp\u00e1nek, Alane Suhr, Zsolt",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Universal dependencies 1.2. LIN-DAT/CLARIN digital library at the Institute of Formal and Applied Linguistics",
"authors": [
{
"first": "Takaaki",
"middle": [],
"last": "Sz\u00e1nt\u00f3",
"suffix": ""
},
{
"first": "Reut",
"middle": [],
"last": "Tanaka",
"suffix": ""
},
{
"first": "Sumire",
"middle": [],
"last": "Tsarfaty",
"suffix": ""
},
{
"first": "Larraitz",
"middle": [],
"last": "Uematsu",
"suffix": ""
},
{
"first": "Viktor",
"middle": [],
"last": "Uria",
"suffix": ""
},
{
"first": "Veronika",
"middle": [],
"last": "Varga",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vincze",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Zden\u011bk\u017eabokrtsk\u00fd",
"suffix": ""
},
{
"first": "Hanzhi",
"middle": [],
"last": "Zeman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sz\u00e1nt\u00f3, Takaaki Tanaka, Reut Tsarfaty, Sumire Ue- matsu, Larraitz Uria, Viktor Varga, Veronika Vincze, Zden\u011bk\u017dabokrtsk\u00fd, Daniel Zeman, and Hanzhi Zhu. 2015. Universal dependencies 1.2. LIN- DAT/CLARIN digital library at the Institute of For- mal and Applied Linguistics, Charles University.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "The proposition bank: A corpus annotated with semantic roles",
"authors": [
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Kingsbury",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational Linguistics",
"volume": "",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martha Palmer, Dan Gildea, and Paul Kingsbury. 2005. The proposition bank: A corpus annotated with se- mantic roles. Computational Linguistics, 31(1).",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Deep multitask learning for semantic dependency parsing",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Peng, Sam Thomson, and Noah A. Smith. 2017. Deep multitask learning for semantic dependency parsing. In ACL.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Learning joint semantic parsers from disjoint data",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Peng, Sam Thomson, Swabha Swayamdipta, and Noah A. Smith. 2018. Learning joint semantic parsers from disjoint data. In NAACL.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Valency-augmented dependency parsing",
"authors": [
{
"first": "Tianze",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianze Shi and Lillian Lee. 2018. Valency-augmented dependency parsing. In EMNLP. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Dropout: A simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Machine Learning Research",
"volume": "15",
"issue": "",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Re- search, 15:1929-1958.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Localization of syntactic comprehension by positron emission tomography",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Stromswold",
"suffix": ""
},
{
"first": "David",
"middle": [
"O"
],
"last": "Caplan",
"suffix": ""
},
{
"first": "Nathaniel",
"middle": [
"M"
],
"last": "Alpert",
"suffix": ""
},
{
"first": "Scott",
"middle": [
"L"
],
"last": "Rauch",
"suffix": ""
}
],
"year": 1996,
"venue": "Brain and language",
"volume": "52",
"issue": "",
"pages": "452--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen Stromswold, David O. Caplan, Nathaniel M. Alpert, and Scott L. Rauch. 1996. Localization of syntactic comprehension by positron emission to- mography. Brain and language, 52 3:452-73.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Toward a lexicalist framework for constraint-based syntactic ambiguity resolution",
"authors": [
{
"first": "John",
"middle": [
"C"
],
"last": "Trueswell",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"K"
],
"last": "Tanenhaus",
"suffix": ""
}
],
"year": 1994,
"venue": "Perspectives on Sentence Processing",
"volume": "",
"issue": "",
"pages": "155--179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John C. Trueswell and Michael K. Tanenhaus. 1994. Toward a lexicalist framework for constraint-based syntactic ambiguity resolution. In Charles Clifton, Jr., Lyn Frazier, and Keith Rayner, editors, Per- spectives on Sentence Processing, pages 155-179.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "CoNLL 2017 shared task: Multilingual parsing from raw text to universal dependencies",
"authors": [
{
"first": "",
"middle": [],
"last": "Mendonca",
"suffix": ""
},
{
"first": "Tatiana",
"middle": [],
"last": "Lando",
"suffix": ""
},
{
"first": "Rattima",
"middle": [],
"last": "Nitisaroj",
"suffix": ""
},
{
"first": "Josie",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "1--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mendonca, Tatiana Lando, Rattima Nitisaroj, and Josie Li. 2017. CoNLL 2017 shared task: Mul- tilingual parsing from raw text to universal depen- dencies. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Univer- sal Dependencies, pages 1-19, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Biaffine parsing architecture. W and p denote the word and POS embeddings.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"text": "POS Prediction Results from the Word Embedding Induced in Varying Uniform Dropout Rates.",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF0": {
"text": "). Concretely, for each PTB POS category, we pick the 50 most frequent words of that category in the training set and replace each word w in the test set by a word uniformly drawn from the 50 most frequent words for w's POS category. We consider three situations: 1) full colorless green experiments where all words are replaced by random words, 2) top 100 colorless green experiments where all words but the 100 most frequent words are replaced by random words, and 3) open class colorless green experiments where the input word is replaced by a random word if and only if the word is an open class word. 1",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF1": {
"text": "). 2 Such a violation of argument structure constraints could mislead parsers (as well as language models studied inGulordava et al. (2018)) and we will show that is indeed the case. 3 To address this issue, we also experiment with jabberwocky noise, in which input word vectors are zeroed out. This noise is equivalent to replacing words with an out-of-vocabulary word by construction. Because fine-grained POS information is retained in the input to the parser, the parser is still able to benefit from the kind of",
"type_str": "table",
"html": null,
"content": "<table><tr><td>morphological information present in Carroll's</td></tr><tr><td>poem. We again consider three situations 1)</td></tr><tr><td>full jabberwocky experiments where all word</td></tr><tr><td>embeddings are zeroed out, 2) top 100 jabber-</td></tr><tr><td>wocky experiments where word embeddings for</td></tr><tr><td>all but the most frequent 100 words are zeroed</td></tr><tr><td>out, and 3) open class jabberwocky experiments</td></tr><tr><td>where the input word vector is zeroed out if and</td></tr><tr><td>only if the word is an open class word. Open</td></tr><tr><td>class jabberwocky experiments are the closest to</td></tr><tr><td>the situation when humans read Lewis Carroll's Jabberwocky. 4</td></tr></table>",
"num": null
},
"TABREF2": {
"text": ".2 92.30.2 92.70.1 90.60.1 Unlexicalized 88.00.1 85.40.1 87.10.1 83.80.1 Top 100 92.50.1 90.80.1 91.70.1 89.20.1 Function 90.70.4 88.10.6 90.00.3 86.80.5 Uniform 0.2 93.90.1 92.60.1 93.00.1 90.90.2 Uniform 0.4 94.00.1 92.50.1 93.00.1 90.80.1",
"type_str": "table",
"html": null,
"content": "<table><tr><td>Model</td><td>UAS</td><td>Gold</td><td>LAS</td><td>Predicted UAS LAS</td></tr><tr><td colspan=\"5\">No Dropout 93.60Uniform 0.6 93.70.1 92.20.1 92.70.1 90.50.1 Uniform 0.8 93.00.1 91.40.1 92.10.1 89.70.2 Freq 0.25 93.70.1 92.40.1 92.90.1 90.80.1 Freq 1 93.90.1 92.60.1 93.00.1 91.00.1 Freq 40 94.00.2 92.60.2 93.00.2 90.90.2 Freq 352 93.60.1 92.20.1 92.70.1 90.50.1 Freq 2536 92.90.1 91.40.1 92.00.1 89.70.1 Open Cl 0.38 93.90.1 92.50.2 93.00.1 90.90.2 Open Cl 0.75 93.50.1 92.10.1 92.70.1 90.50.1</td></tr></table>",
"num": null
},
"TABREF3": {
"text": "Normal Parsing Results on the Dev Set. The subscripts indicate the standard deviations.",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF4": {
"text": ".2 56.30.1 51.92.3 39.11.9 Unlexicalized 88.00.1 85.40.1 88.00.1 85.40.2 Top 100 71.70.4 67.10.3 72.70.3 68.20.5 Function 69.20.9 62.10.7 58.83.0 39.83.4 Uniform 0.2 74.00.2 69.10.2 85.70.3 82.70.3",
"type_str": "table",
"html": null,
"content": "<table><tr><td>Model</td><td>Colorless UAS LAS</td><td>Jabberwocky UAS LAS</td></tr><tr><td colspan=\"3\">No Dropout 62.60Uniform 0.4 76.90.3 72.30.2 87.10.1 84.30.1 Uniform 0.6 79.20.2 75.00.2 87.70.1 85.00.1 Uniform 0.8 82.00.3 78.50.3 88.00.1 85.40.1 Freq 0.25 62.90.2 56.40.1 55.01.4 43.42.5 Freq 1 63.60.4 57.10.1 60.11.7 48.83.6 Freq 40 67.50.7 61.60.6 76.41.0 72.01.2 Freq 352 74.50.5 69.70.5 82.90.4 79.50.6 Freq 2536 82.60.2 78.80.3 86.50.4 85.40.2 Open Cl 0.38 65.00.5 58.80.4 53.72.6 36.83.6 Open Cl 0.75 66.70.2 60.50.3 53.81.0 34.01.3</td></tr></table>",
"num": null
},
"TABREF5": {
"text": "Full Colorless Green and Jabberwocky Experiments on the Dev Set.",
"type_str": "table",
"html": null,
"content": "<table><tr><td>Model</td><td>Colorless UAS LAS</td><td>Jabberwocky UAS LAS</td></tr><tr><td colspan=\"3\">No Dropout Unlexicalized 88.00.1 85.40.1 88.00.1 85.40.2 85.50.1 82.70.1 86.40.8 83.40.9 Top 100 92.50.1 90.80.1 92.50.1 90.80.1 Function 88.70.4 85.40.7 90.80.3 88.00.6 Uniform 0.2 87.50.2 84.90.2 90.20.6 88.20.6 Uniform 0.4 88.50.2 86.00.2 90.80.4 88.90.4 Uniform 0.6 89.20.2 86.80.1 91.00.3 89.10.2 Uniform 0.8 89.70.1 87.40.2 90.60.2 88.60.2 Freq 0.25 85.80.2 83.00.2 87.80.6 85.00.6 Freq 1 86.10.1 83.30.1 88.90.4 86.30.4 Freq 40 88.10.2 85.50.1 90.90.2 88.70.4 Freq 352 89.70.1 87.40.1 91.90.2 90.00.2 Freq 2536 90.70.2 88.60.3 91.30.2 89.30.3 Open Cl 0.38 88.60.3 86.20.3 90.60.2 88.30.2 Open Cl 0.75 89.60.1 87.40.2 90.80.3 88.00.6</td></tr></table>",
"num": null
},
"TABREF6": {
"text": "Top 100 Colorless Green and Jabberwocky Experiments on the Dev Set.",
"type_str": "table",
"html": null,
"content": "<table><tr><td>Model</td><td>Colorless UAS LAS</td><td>Jabberwocky UAS LAS</td></tr><tr><td colspan=\"3\">No Dropout Unlexicalized 88.00.1 85.40.1 88.00.1 85.40.2 84.10.3 81.30.2 84.90.6 82.10.6 Top 100 90.50.2 88.40.2 91.80.1 89.90.1 Function 90.70.4 88.10.6 90.70.4 88.10.6 Uniform 0.2 87.40.2 84.80.2 89.70.7 87.80.7 Uniform 0.4 88.30.2 85.90.3 90.60.4 88.70.4 Uniform 0.6 89.20.2 86.80.2 90.90.3 89.00.3 Uniform 0.8 89.90.2 87.70.1 90.50.3 88.60.2 Freq 0.25 84.70.2 82.00.2 86.70.5 84.30.6 Freq 1 85.20.4 82.40.4 88.10.6 85.90.6 Freq 40 87.70.2 85.20.2 91.20.3 89.20.3 Freq 352 89.50.2 87.30.1 92.00.1 90.20.1 Freq 2536 90.70.2 88.70.2 91.70.2 89.90.1 Open Cl 0.38 89.00.2 86.70.3 92.10.2 90.30.2 Open Cl 0.75 90.70.1 88.40.2 92.40.1 90.70.1</td></tr></table>",
"num": null
},
"TABREF7": {
"text": "Open Class Colorless Green and Jabberwocky Experiments on the Dev Set.",
"type_str": "table",
"html": null,
"content": "<table><tr><td>Model</td><td>UAS</td><td>Gold</td><td>LAS</td><td>Predicted UAS LAS</td></tr><tr><td colspan=\"5\">No Dropout Unlexicalized 83.00.1 89.70.1 Top 100 89.40.1 Function 88.30.3 Uniform 0.2 90.00.1 Uniform 0.4 90.10.1 Uniform 0.6 90.10.1 Uniform 0.8 89.40.1 Freq 0.25 89.90.2 Freq 1 90.20.2 Freq 40 90.70.2 88.50.3 89.80.2 86.70.2 87.50.1 88.80.1 85.70.1 79.30.2 81.30.1 76.70.1 86.80.1 88.50.1 84.90.1 84.80.7 87.40.3 83.10.7 87.80.2 89.10.1 86.00.2 87.90.1 89.30.1 86.10.1 87.70.1 89.10.1 85.80.1 86.80.1 88.50.1 85.00.1 87.70.2 89.10.2 86.00.2 88.10.2 89.30.1 86.30.1 Freq 352 90.30.1 88.00.1 89.30.1 86.10.1 Freq 2536 89.30.2 86.80.2 88.40.1 85.00.2 Open Cl 0.38 90.30.2 88.10.2 89.40.1 86.20.2 Open Cl 0.75 90.30.1 88.00.1 87.40.3 86.10.1</td></tr></table>",
"num": null
},
"TABREF8": {
"text": "Brown CF Results.",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF10": {
"text": "Performance Breakdown by Dependency Relation.",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
}
}
}
}