{
"paper_id": "Q14-1026",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:11:05.564618Z"
},
"title": "Improved CCG Parsing with Semi-supervised Supertagging",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh Edinburgh",
"location": {
"postCode": "EH8 9AB",
"country": "UK"
}
},
"email": "mike.lewis@ed.ac.uk"
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh Edinburgh",
"location": {
"postCode": "EH8 9AB",
"country": "UK"
}
},
"email": "steedman@inf.ed.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Current supervised parsers are limited by the size of their labelled training data, making improving them with unlabelled data an important goal. We show how a state-of-the-art CCG parser can be enhanced, by predicting lexical categories using unsupervised vector-space embeddings of words. The use of word embeddings enables our model to better generalize from the labelled data, and allows us to accurately assign lexical categories without depending on a POS-tagger. Our approach leads to substantial improvements in dependency parsing results over the standard supervised CCG parser when evaluated on Wall Street Journal (0.8%), Wikipedia (1.8%) and biomedical (3.4%) text. We compare the performance of two recently proposed approaches for classification using a wide variety of word embeddings. We also give a detailed error analysis demonstrating where using embeddings outperforms traditional feature sets, and showing how including POS features can decrease accuracy.",
"pdf_parse": {
"paper_id": "Q14-1026",
"_pdf_hash": "",
"abstract": [
{
"text": "Current supervised parsers are limited by the size of their labelled training data, making improving them with unlabelled data an important goal. We show how a state-of-the-art CCG parser can be enhanced, by predicting lexical categories using unsupervised vector-space embeddings of words. The use of word embeddings enables our model to better generalize from the labelled data, and allows us to accurately assign lexical categories without depending on a POS-tagger. Our approach leads to substantial improvements in dependency parsing results over the standard supervised CCG parser when evaluated on Wall Street Journal (0.8%), Wikipedia (1.8%) and biomedical (3.4%) text. We compare the performance of two recently proposed approaches for classification using a wide variety of word embeddings. We also give a detailed error analysis demonstrating where using embeddings outperforms traditional feature sets, and showing how including POS features can decrease accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Combinatory Categorial Grammar (CCG) is widely used in natural language semantics (Bos, 2008; Kwiatkowski et al., 2010; Krishnamurthy and Mitchell, 2012; Lewis and Steedman, 2013a; Lewis and Steedman, 2013b; Kwiatkowski et al., 2013) , largely because of its direct linkage of syntax and semantics. However, this connection means that performance on semantic applications is highly dependent on the quality of the syntactic parse. Although CCG parsers perform at state-of-the-art levels (Nivre et al., 2010) , full-sentence accuracy is just 25.6% on Wikipedia text, which gives a low upper bound on logical inference approaches to question-answering and textual entailment.",
"cite_spans": [
{
"start": 82,
"end": 93,
"text": "(Bos, 2008;",
"ref_id": "BIBREF5"
},
{
"start": 94,
"end": 119,
"text": "Kwiatkowski et al., 2010;",
"ref_id": "BIBREF21"
},
{
"start": 120,
"end": 153,
"text": "Krishnamurthy and Mitchell, 2012;",
"ref_id": "BIBREF20"
},
{
"start": 154,
"end": 180,
"text": "Lewis and Steedman, 2013a;",
"ref_id": "BIBREF23"
},
{
"start": 181,
"end": 207,
"text": "Lewis and Steedman, 2013b;",
"ref_id": "BIBREF24"
},
{
"start": 208,
"end": 233,
"text": "Kwiatkowski et al., 2013)",
"ref_id": "BIBREF22"
},
{
"start": 487,
"end": 506,
"text": "Nivre et al., 2010)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Supertags are rich lexical categories that go beyond POS tags by encoding information about predicate-argument structure. Supertagging is \"almost parsing\", and is used by parsers based on strongly lexicalized formalisms such as CCG and TAG to improve accuracy and efficiency, by delegating many of the parsing decisions to finite-state models (Bangalore and Joshi, 1999) . A disadvantage of this approach is that larger sets of lexical categories mean increased sparsity, decreasing tagging accuracy. As large amounts of labelled data are unlikely to be made available, recent work has explored using unlabelled data to improve parser lexicons (Thomforde and Steedman, 2011; Deoskar et al., 2011; Deoskar et al., 2014) . However, existing work has failed to improve the overall accuracy of state-of-the-art supervised parsers in-domain.",
"cite_spans": [
{
"start": 343,
"end": 370,
"text": "(Bangalore and Joshi, 1999)",
"ref_id": "BIBREF3"
},
{
"start": 644,
"end": 674,
"text": "(Thomforde and Steedman, 2011;",
"ref_id": "BIBREF38"
},
{
"start": 675,
"end": 696,
"text": "Deoskar et al., 2011;",
"ref_id": "BIBREF13"
},
{
"start": 697,
"end": 718,
"text": "Deoskar et al., 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Another strand of recent work has explored using unsupervised word embeddings as features in supervised models (Turian et al., 2010; Collobert et al., 2011b) , largely motivated as a simpler and more general alternative to standard feature sets. We apply similar techniques to CCG supertagging, hypothesising that words which are close in the embedding space will have similar supertags. Most existing work has focused on flat tagging tasks, and has not produced state-of-the-art results on structured prediction tasks like parsing (Collobert, 2011; Andreas and Klein, 2014) . CCG's lexicalized nature provides a simple and elegant solution to treating parsing as a flat tagging task, as the lexical categories encode information about hierarchical structure.",
"cite_spans": [
{
"start": 111,
"end": 132,
"text": "(Turian et al., 2010;",
"ref_id": "BIBREF39"
},
{
"start": 133,
"end": 157,
"text": "Collobert et al., 2011b)",
"ref_id": "BIBREF10"
},
{
"start": 532,
"end": 549,
"text": "(Collobert, 2011;",
"ref_id": "BIBREF11"
},
{
"start": 550,
"end": 574,
"text": "Andreas and Klein, 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As well as improving parsing accuracy, our model has a number of advantages over current CCG parsing work. Our supertagger does not make use of a POS-tagger, a fact which simplifies the model architecture, reduces the number of parameters, and eliminates errors caused by a pipeline approach. Also, learning word embeddings is an active area of research, and future developments may directly lead to better parsing accuracy, with no change required to our model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The widely-used C&C parser (Clark and Curran, 2007) for CCG takes a pipeline approach, where first sentences are POS-tagged, then supertagged, and then parsed. The supertagger outputs a distribution over tags for each word, and a beam is used to aggressively prune supertags to reduce the parser search space. If the parser is unable to find a parse with a given set of supertags, the beam is relaxed. This approach is known as adaptive supertagging.",
"cite_spans": [
{
"start": 27,
"end": 51,
"text": "(Clark and Curran, 2007)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 CCG Parsing",
"sec_num": "2"
},
{
"text": "The pipeline approach has two major drawbacks. Firstly, the use of a POS-tagger can overly prune the search space for the supertagger. Whilst POS-taggers have an accuracy of around 97% in domain, this drops to just 93.4% on biomedical text, meaning that most sentences will contain an erroneous POS-tag. The supertagger model is overly dependent on POS-features: in Section 4.6 we show that supertagger performance drops dramatically on words which have been assigned an incorrect POS-tag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 CCG Parsing",
"sec_num": "2"
},
{
"text": "Secondly, both the POS-tagger and supertagger are highly reliant on lexical features, meaning that performance drops both on unknown words, and on words used differently from the training data. Many common words do not appear at all in the training data of the Penn Treebank, such as ten, militants, insight, and teenager. Many others are not seen with all their possible uses-for example European only occurs as an adjective, never a noun, meaning that the C&C parser is unable to analyse simple sentences like The director of the IMF is traditionally a European. These problems are particularly acute when parsing other domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 CCG Parsing",
"sec_num": "2"
},
{
"text": "Recent work has explored using vector space embeddings for words as features in supervised models for a variety of tasks, such as POS-tagging, chunking, named-entity recognition, semantic role labelling, and phrase structure parsing (Turian et al., 2010; Collobert et al., 2011b; Collobert, 2011; Socher et al., 2013) . The major motivation for using these techniques has been to minimize the level of task-specific feature engineering required, as the same feature set can lead to good results on a variety of tasks. Performance varies between tasks, but any gains over state-of-the-art traditional features have been small. A variety of techniques have been used for learning such embeddings from large unlabelled corpora, such as neural-network language models.",
"cite_spans": [
{
"start": 233,
"end": 254,
"text": "(Turian et al., 2010;",
"ref_id": "BIBREF39"
},
{
"start": 255,
"end": 279,
"text": "Collobert et al., 2011b;",
"ref_id": "BIBREF10"
},
{
"start": 280,
"end": 296,
"text": "Collobert, 2011;",
"ref_id": "BIBREF11"
},
{
"start": 297,
"end": 317,
"text": "Socher et al., 2013)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi Supervised NLP using Word Embeddings",
"sec_num": "2.2"
},
{
"text": "We introduce models for predicting CCG lexical categories based on vector-space embeddings. The models can then be used to replace the POS-tagging and supertagging stages used by existing CCG parsers. We experiment with the neural network model proposed by Collobert et al. (2011b) , and the conditional random field (CRF) model used by Turian et al. (2010) . We only use features that can be expected to work well out-of-domain-in particular, we use no lexical or POS features.",
"cite_spans": [
{
"start": 256,
"end": 280,
"text": "Collobert et al. (2011b)",
"ref_id": "BIBREF10"
},
{
"start": 332,
"end": 352,
"text": "Turian et al. (2010)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3"
},
{
"text": "Our features are similar to those used by Collobert et al. (2011b) for POS-tagging. For every word in a context window, we add features for the embedding of the word, its 2-character suffix, and whether or not it is capitalised. We expect such features to generalize well to other domains-and in Section 4.5 we show that adding traditional POS-tag and lexical features does not help. To further reduce sparsity, we apply some simple preprocessing techniques. Words are lower-cased 1 , and all digits are replaced with 0. If an unknown word is hyphenated, we first try backing-off to the substring after the hyphen. Words which do not have an entry in the word embeddings share an 'unknown' embedding. Different 'unknown' vectors are used for capitalized and uncapitalized words, and non-alphabetic symbols. We also add entries for context words which are before the start and after the end of the sentence. All of these were initialized to the 'unknown' vector in the pre-trained embeddings (or with Gaussian noise if not available).",
"cite_spans": [
{
"start": 42,
"end": 66,
"text": "Collobert et al. (2011b)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.1"
},
{
"text": "We predict word supertags with the neural network classifier used by Collobert et al. (2011b) for POS-tagging, as shown in Figure 1 . Each feature is implemented as a lookup table: the word's embedding contributes a D-dimensional vector, and each of the discrete features contributes a learned K-dimensional vector. The first hidden layer therefore contains C \u00d7 (D + KF) nodes, where F is the number of discrete features and C is the size of the context window. We also experimented with adding an additional hidden layer, with a hard-tanh activation function, which makes the classifier non-linear. Finally, a logistic softmax layer is used for classifying output categories.",
"cite_spans": [
{
"start": 69,
"end": 93,
"text": "Collobert et al. (2011b)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 122,
"end": 130,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Neural Network Model",
"sec_num": "3.2"
},
{
"text": "The model is trained using stochastic gradient descent, with a learning rate of 0.01, optimizing for cross-entropy. We use early-stopping as an alternative to regularization-after each iteration the model is evaluated for accuracy on held-out data, and we use the best performing model. Training was run until performance decreased on held-out data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Model",
"sec_num": "3.2"
},
{
"text": "The neural network model treats the probability of each supertag as being conditionally independent. However, conditioning on surrounding supertags [Table 1 : Embeddings used in our experiments. Dimensionality is the set of dimensions of the word embedding space that we experimented with, and Training Words refers to the size of the unlabelled corpus the embeddings were trained on.]",
"cite_spans": [],
"ref_spans": [
{
"start": 148,
"end": 155,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "CRF Model",
"sec_num": "3.3"
},
{
"text": "may be very useful-for example, a noun is much more likely to follow an adjective than a verb. Curran et al. (2006) report a large improvement using a maximum-entropy Markov model for supertagging, conditioned on the surrounding supertags. We follow Turian et al. (2010) in using a linear chain CRF model for sequence classification using embeddings as features. This model does not allow supervised training to fine-tune the embeddings, though it would be possible to build a CRF/NN hybrid that enabled this. We use the same feature set as with the neural network model-so the probability of a category depends on embeddings, capitalization and suffix features-as well as the previous category. The model is trained using the averaged-perceptron algorithm (Collins, 2002) , again using early-stopping based on development data accuracy.",
"cite_spans": [
{
"start": 247,
"end": 267,
"text": "Turian et al. (2010)",
"ref_id": "BIBREF39"
},
{
"start": 753,
"end": 768,
"text": "(Collins, 2002)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CRF Model",
"sec_num": "3.3"
},
{
"text": "We experiment with three domains:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domains",
"sec_num": "4.1"
},
{
"text": "\u2022 CCGBank (Hockenmaier and Steedman, 2007) , which is a conversion of the Penn Treebank (Marcus et al., 1993) to CCG. Section 23 is used for evaluation.",
"cite_spans": [
{
"start": 10,
"end": 42,
"text": "(Hockenmaier and Steedman, 2007)",
"ref_id": "BIBREF16"
},
{
"start": 88,
"end": 109,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domains",
"sec_num": "4.1"
},
{
"text": "\u2022 Wikipedia, using the corpus of 200 sentences annotated with CCG derivations by Honnibal et al. (2009) . As the text is out-of-domain, parsing accuracy drops substantially on this corpus.",
"cite_spans": [
{
"start": 81,
"end": 103,
"text": "Honnibal et al. (2009)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domains",
"sec_num": "4.1"
},
{
"text": "\u2022 Biomedical text, which is even less related to the newswire text than Wikipedia, due to large numbers of unseen words and different writing styles, causing low parsing accuracy. For parsing experiments, we use the Bioinfer corpus (Pyysalo et al., 2007) as a test set. For measuring supertagging accuracy, we use the CCG annotation produced by Rimell and Clark (2008) .",
"cite_spans": [
{
"start": 232,
"end": 254,
"text": "(Pyysalo et al., 2007)",
"ref_id": "BIBREF33"
},
{
"start": 345,
"end": 368,
"text": "Rimell and Clark (2008)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domains",
"sec_num": "4.1"
},
{
"text": "In this section, we explore how adjusting the parameters of our neural network model 2 affects 1-best lexical category accuracy on Section 00 of CCGBank (all development was done on this data). The C&C supertagger achieves 91.5% accuracy on this task. The models were trained on Sections 02-21 of CCGBank, and the reported numbers are the best accuracy achieved on Section 00. As in Clark and Curran (2007) , all models use only the 425 most frequent categories in CCGBank.",
"cite_spans": [
{
"start": 388,
"end": 411,
"text": "Clark and Curran (2007)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Model Parameters",
"sec_num": "4.2"
},
{
"text": "A number of word embeddings have recently been released, aiming to capture a variety of syntactic and semantic phenomena, based on neural network language models (NNLMs) (Turian et al., 2010; Collobert et al., 2011b) , recurrent neural network language models (Mikolov, 2012) , the hierarchical log bilinear model (HLBL) (Mnih and Hinton, 2008) , and Mikolov et al. (2013) 's Skip Gram model. However, there has been a lack of experiments comparing which embeddings provide the most effective features for downstream tasks.",
"cite_spans": [
{
"start": 170,
"end": 191,
"text": "(Turian et al., 2010;",
"ref_id": "BIBREF39"
},
{
"start": 192,
"end": 216,
"text": "Collobert et al., 2011b)",
"ref_id": "BIBREF10"
},
{
"start": 260,
"end": 275,
"text": "(Mikolov, 2012)",
"ref_id": "BIBREF29"
},
{
"start": 321,
"end": 344,
"text": "(Mnih and Hinton, 2008)",
"ref_id": "BIBREF30"
},
{
"start": 351,
"end": 372,
"text": "Mikolov et al. (2013)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings",
"sec_num": "4.2.1"
},
{
"text": "First, we investigated the performance of several publicly available embeddings, to find which was most effective for supertagging. The embeddings we used are summarized in Table 1. We also investigate which size of context window is most effective. Results are shown in Table 2 , and show that the choice of embeddings is crucial to performance on this task. The performance of the Turian and HLBL embeddings is surprisingly high given the relatively small amount of unlabelled data, suggesting that parameters other than the size of the corpus are more important. Of course, we have not performed a grid-search of the parameter space, and it is possible that other embeddings would perform better with different training data, dimensionality, or model architectures. The Mikolov embeddings may suffer from being trained on broadcast news, which has no punctuation and different language use. Using a context window of 7 words generally outperformed using a window of 5 words (we also experimented with a 9-word window, but found performance decreased slightly to 91.2% for the Turian-50 embeddings). There is no clear trend on the optimal dimension of the embedding space, and it is likely to vary with training methods and corpus size.",
"cite_spans": [],
"ref_spans": [
{
"start": 273,
"end": 280,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Embeddings",
"sec_num": "4.2.1"
},
{
"text": "Next, we experimented with the size of the additional hidden layer-for efficiency, using the Turian-50 embeddings with a 5-word context window. Results are shown in Table 3 , and suggest that a hidden layer is not useful for this task-possibly due to over-fitting. In all subsequent experiments we used a context window of 7 words, no additional hidden layer, and the Turian-50 embeddings.",
"cite_spans": [],
"ref_spans": [
{
"start": 165,
"end": 172,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Embeddings",
"sec_num": "4.2.1"
},
{
"text": "We also experimented with the CRF model for supertagging 3 . Training these models took far longer than our neural-network model, due to the need to use the forward-backward algorithm with a 425\u00d7425-dimensional transition matrix during training (rather than considering each word's category independently). Consequently, we only experimented with the Turian-50 embeddings with a 7-word context window, which attained the best performance using the neural network.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CRF Model",
"sec_num": "4.3"
},
{
"text": "We found that using the Turian-50 embeddings gave a surprisingly weak performance of just 90.3% (compared to 91.3% for the neural network model). We hypothesised that one reason for this result could be that the model is unable to modify the embeddings during supervised training (in contrast to the neural-network model). Consequently, we built a new set of embeddings, using the weight matrix learned by our best neural network model. A new CRF model was then trained using the tuned embeddings. Performance then improved dramatically to 91.5%, and slightly outperformed the neural network-showing that while there is a small advantage to using sequence information, it is crucial to allow supervised training to modify the embeddings. These results help explain why Collobert et al. (2011b)'s neural network models outperform Turian et al. (2010)'s sequence models-but greater improvements may be possible with the combined approach we introduce here, which allows the model to both tune the embeddings and exploit sequence information. However, tagging with this model was considerably slower than the neural network (again, due to the cost of decoding), so we used the neural network architecture in the remaining experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CRF Model",
"sec_num": "4.3"
},
{
"text": "The C&C parser takes a set of supertags per word as input, which is used to prune the search space. If no parse is found, the sentence is supertagged again with a wider beam. The effectiveness of the pruning therefore depends on the accuracy of the supertagger at a given level of ambiguity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multitagging Accuracy",
"sec_num": "4.4"
},
{
"text": "We experimented with the accuracy of different supertaggers at different levels of ambiguity. For the C&C supertagger, we vary the number of categories per word using the same back-off beam settings reported in Clark and Curran (2007) . For our supertagger, we vary recall by adjusting a variable-width beam, which removes tags whose probability is less than \u03b2 times that of the most likely tag.",
"cite_spans": [
{
"start": 211,
"end": 234,
"text": "Clark and Curran (2007)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multitagging Accuracy",
"sec_num": "4.4"
},
{
"text": "Results are shown in Figure 2 . The supertaggers based on embeddings consistently match or outperform the C&C supertagger at all levels of recall across all domains. While performance is similar with a small number of tags per word, our supertaggers perform better with a more relaxed beam, perhaps representing cases which are challenging for the C&C model, such as POS-tag errors.",
"cite_spans": [],
"ref_spans": [
{
"start": 21,
"end": 29,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multitagging Accuracy",
"sec_num": "4.4"
},
{
"text": "We investigate whether our supertagger improves the performance of the C&C parser, by replacing the standard C&C supertagger with our model. This evaluation is somewhat indirect, as the parser does not make use of the supertagger probabilities for categories, but instead simply uses the supertagger to prune the search space. However, we show that better pruning leads directly to better parsing accuracies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Accuracy",
"sec_num": "4.5"
},
{
"text": "C&C parser results on CCGBank and Wikipedia are reported using the Clark and Curran (2007) models used in the published results. [Table 4 : Parsing F1-scores for labelled dependencies across a range of domains, using the C&C parser with different supertaggers. Embeddings models used a context window of 7 words, and no additional hidden layer. Following previous CCG parsing work, we report F1-scores on the subset of sentences where the parser is able to produce a parse (F1-cov), and the parser's coverage (COV). Where available we also report overall scores (F1-all), including parser failures, which we believe gives a more realistic assessment.] Biomedical results use the publicly available parsing model, setting the 'parser beam ratio' parameter to 10^-4, which improved results on development data. To achieve full coverage on the Wikipedia corpus, we increased the 'max supercats' parameter to 10^7. C&C accuracies differ very slightly from previously reported results, due to differences in the retrained models.",
"cite_spans": [
{
"start": 63,
"end": 86,
"text": "Clark and Curran (2007)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 87,
"end": 94,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parsing Accuracy",
"sec_num": "4.5"
},
{
"text": "As in Clark and Curran (2007) , we use a variable-width beam \u03b2 that prunes categories whose probability is less than \u03b2 times that of the most likely category. For simplicity, our supertaggers use the same \u03b2 back-off parameters as are used by the C&C parser, though it is possible that further improvements could be gained by carefully tuning these parameters. 5 In contrast to the C&C supertaggers, we do not make use of tag dictionaries.",
"cite_spans": [
{
"start": 6,
"end": 29,
"text": "Clark and Curran (2007)",
"ref_id": "BIBREF7"
},
{
"start": 359,
"end": 360,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Accuracy",
"sec_num": "4.5"
},
{
"text": "Results are shown in Table 4 , and our supertaggers consistently lead to improvements over the baseline parser across all domains, with larger improvements out-of-domain. Our best model also outperforms Honnibal et al. (2009) 's self-training approach to domain adaptation on Wikipedia (which lowers performance on CCGBank).",
"cite_spans": [
{
"start": 203,
"end": 225,
"text": "Honnibal et al. (2009)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 21,
"end": 28,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parsing Accuracy",
"sec_num": "4.5"
},
{
"text": "Our results show that word embeddings are an effective way of adding distributional information into CCG supertagging. A popular alternative approach for semi-supervised learning is to use Brown clusters (Brown et al., 1992) . To ensure a fair comparison with the Turian embeddings, we use clusters trained on the same corpus, and use a comparable feature set (clusters, capitalization, and 2-character suffixes-all implemented as sparse binary features). Brown clusters are hierarchical, and following Koo et al. (2008) , we incorporate Brown cluster features at multiple levels of granularity, using 64 coarse clusters (loosely analogous to POS-tags) and 1000 fine-grained clusters. Results show slightly lower performance than C&C in domain, but higher performance out of domain. However, they are substantially lower than results using the Turian-50 embeddings.",
"cite_spans": [
{
"start": 204,
"end": 224,
"text": "(Brown et al., 1992)",
"ref_id": "BIBREF6"
},
{
"start": 502,
"end": 519,
"text": "Koo et al. (2008)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Accuracy",
"sec_num": "4.5"
},
{
"text": "We also experimented with adding traditional word and POS features, which were implemented as sparse vectors for each word in the context window. We found that including POS features (derived from the C&C POS-tagger) reduced accuracy across all domains. One reason is that POS tags are highly discriminative features, so errors can be hard to recover from. Adding lexical features for the most frequent 250 words had little impact on results, showing that the embeddings already represent this information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Accuracy",
"sec_num": "4.5"
},
{
"text": "For infrequent words, the C&C parser uses a hard constraint that only certain POS-tag/supertag combinations are allowed. This constraint means that the parser may be particularly vulnerable to POS-tag errors, as the model cannot override the hard constraint. We therefore also ran the model allowing any POS-tag/supertag combination. We found that parsing accuracy was 0.02 higher on development data (and much slower), suggesting that the model itself is overly reliant on POS-features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Accuracy",
"sec_num": "4.5"
},
{
"text": "We have demonstrated that word embeddings are highly effective for CCG supertagging. In this section, we investigate several cases in which they are particularly helpful-by measuring supertagger performance when the POS tagger made mistakes, when words were unseen in the labelled data, and when the labelled data only contains the word with a different category. Our supertaggers show substantial improvements over more complex existing models. Figure 3 shows performance when the POS-tagger assigns the wrong tag to a word (3% of words). Both systems show dramatically lower performance on these cases-the embeddings supertagger does not use POS features, but POS errors are likely to represent generally difficult examples. However, the embeddings supertagger is almost 15% more accurate on this subset than the C&C supertagger, and with a relaxed beam reaches 96% accuracy, showing the advantages of avoiding a pipeline approach. In contrast, the C&C tagger is not robust to POS-tagger errors, and asymptotes at just 82% accuracy. An alternative way of mitigating POS errors is to use a distribution over POS tags as features in the supertagger-Curran et al. (2006) show that this technique improves supertagging accuracy by 0.4% over the C&C baseline, but do not report the impact on parsing. Figure 4 shows performance when a word has been seen in the training data, but only with a different category from the instance in the test data (2% of words; for example, European only occurs as an adjective in the training data, but it may occur as a noun in the test data). Performance is even worse on these cases, which appear to be extremely difficult for existing models. The accuracy of the embeddings supertagger converges at just 80%, suggesting that our model has overfit the labelled data. However, it still outperforms the C&C supertagger by 22% with a beam allowing 2 tags per word. The large jump in C&C supertagger performance for the final back-off level is due to a change in the word frequency threshold at which the C&C parser only considers word/category pairs that occur in the labelled data. Figure 5 gives results for cases where the word is unseen in the labelled data (4% of words). The C&C supertagger performance is surprisingly good on such cases, suggesting that the morphological and context features used are normally sufficient for inferring categories for unseen words. However, our supertagger still clearly outperforms the C&C supertagger, suggesting that the large vocabulary of the unsupervised embeddings helps it to generalize from the labelled data.",
"cite_spans": [
{
"start": 1192,
"end": 1198,
"text": "(2006)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 446,
"end": 454,
"text": "Figure 3",
"ref_id": "FIGREF0"
},
{
"start": 1327,
"end": 1335,
"text": "Figure 4",
"ref_id": "FIGREF1"
},
{
"start": 2132,
"end": 2140,
"text": "Figure 5",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.6"
},
{
"text": "We also investigated supertagging accuracy on different types of word- Table 5 shows several interesting cases. While performance on nouns is similar, our supertagger is substantially better on verbs. Verbs can have many different CCG categories, depending on the number of arguments and tense, and not all valid word/category combinations will be seen in the labelled data. Our embeddings allow the supertagger to learn generalizations, such as that transitive verbs can also often have intransitive uses. Similarly, wh-words can have many possible categories in complex constructions like relative clauses and pied-piping-and our embeddings may help the model generalize from having seen a category for which to one for whom. On the other hand, the C&C supertagger performs much better on prepositions. Prepositions have different categories when appearing as arguments or adjuncts, and the distinction in the gold-standard was made using somewhat arbitrary heuristics (Hockenmaier and Steedman, 2007) . It seems our embeddings have failed to capture these subtleties. Future work should explore methods for combining the strengths of each model. 91.2% Table 5 : Supertagging accuracy on different types of words, with an ambiguity of 1.27 tags per word (corresponding to the C&C's initial beam setting). Overall performance with this beam is almost identical. Words types were identified using gold POS tags, using IN for prepositions.",
"cite_spans": [
{
"start": 971,
"end": 1003,
"text": "(Hockenmaier and Steedman, 2007)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 71,
"end": 78,
"text": "Table 5",
"ref_id": null
},
{
"start": 1155,
"end": 1162,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.6"
},
{
"text": "With a narrow supertagger beam, our method gives similar results to the C&C supertagger. However, it gains by being more accurate on difficult cases, due to not relying on lexical or POS features. These improvements lead directly to parser improvements. We identify two cases where our supertagger greatly outperforms the C&C parser: where the POS-tag is incorrect, and where the word-category pair is unseen in the labelled data. Our approach achieves larger improvements out-of-domain than in-domain, suggesting that the large vocabulary of embeddings built by the unsupervised pre-training allows it to better generalize from the labelled data. Interestingly, the best-performing Turian-50 embeddings were trained on just 37M words of text (compared to 100B words for the Skip-gram embeddings), suggesting that further improvements may well be possible using larger unlabelled corpora. Future work should investigate whether the models and embeddings that work well for supertagging generalize to other tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.7"
},
{
"text": "Many methods have recently been proposed for improving supervised parsers with unlabelled data. Most of these are orthogonal to our work, and larger improvements may be possible by combining them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Thomforde and Steedman (2011) extends a CCG lexicon by inferring categories for unseen words, based on the likely categories of surrounding words. Unlike our method, this approach is able to learn categories which were unseen in the labelled data, which is shown to be useful for parsing a corpus of questions. Deoskar et al. (2011) and Deoskar et al. (2014) use Viterbi-EM to learn new lexical entries by running a generative parser over a large unlabelled corpus. They show good improvements in accuracy on unseen words, but not overall parsing improvements in-domain. Their parsing model aims to capture non-local information about word usage, which would not be possible for the local context windows used to learn our embeddings.",
"cite_spans": [
{
"start": 311,
"end": 332,
"text": "Deoskar et al. (2011)",
"ref_id": "BIBREF13"
},
{
"start": 337,
"end": 358,
"text": "Deoskar et al. (2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Self-training is another popular method for domain adaptation, and was used successfully by Honnibal et al. (2009) to improve CCG parser performance on Wikipedia. However, it caused a decrease in performance on the in-domain data, and our method achieves better performance across all domains. McClosky et al. (2006) improve a Penn Treebank parser in-domain using self-training, but other work has failed to improve performance out-ofdomain using self training (Dredze et al., 2007) . In a similar spirit to our work, Koo et al. (2008) improve parsing accuracy using unsupervised word cluster features-we have shown that word-embeddings outperform Brown clusters for CCG supertagging.",
"cite_spans": [
{
"start": 92,
"end": 114,
"text": "Honnibal et al. (2009)",
"ref_id": "BIBREF17"
},
{
"start": 294,
"end": 316,
"text": "McClosky et al. (2006)",
"ref_id": "BIBREF27"
},
{
"start": 461,
"end": 482,
"text": "(Dredze et al., 2007)",
"ref_id": "BIBREF15"
},
{
"start": 518,
"end": 535,
"text": "Koo et al. (2008)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "An alternative approach to domain adaptation is to annotate a small corpus of out-of-domain text. Rimell and Clark (2008) argue that this annotation is simpler for lexicalized formalisms such as CCG, as large improvements can be gained from annotating lexical categories, rather than full syntax trees. They achieve higher parsing accuracies than us on biomedical text, but our unsupervised method requires no annotation. It seems likely that our method could be further improved by incorporating out-ofdomain labelled data (where available).",
"cite_spans": [
{
"start": 98,
"end": 121,
"text": "Rimell and Clark (2008)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "The best reported CCG parsing results have been achieved with a model that integrates supertagging and parsing (Auli and Lopez, 2011a; Auli and Lopez, 2011b) . This work still uses the same feature set as the C&C parser, suggesting further improvements may be possible by using our embeddings features. Auli and Lopez POS-tag the sentence before parsing, but using our features would allow us to fully eliminate the current pipeline approach to CCG parsing.",
"cite_spans": [
{
"start": 111,
"end": 134,
"text": "(Auli and Lopez, 2011a;",
"ref_id": "BIBREF1"
},
{
"start": 135,
"end": 157,
"text": "Auli and Lopez, 2011b)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Our work also builds on approaches to semisupervised NLP using neural embeddings (Turian et al., 2010; Collobert et al., 2011b) . Existing work has mainly focused on 'flat' tagging problems, without hierarchical structure. Collobert (2011) gives a model for parsing using embeddings features, by treating each level of the parse tree as a sequence classification problem. Socher et al. (2013) introduce a model in which context-free grammar parses are reranked based on compositional distributional representations for each node. Andreas and Klein (2014) experiment with a number of approaches to improving the Berkeley parser with word embeddings. Such work has not improved over state-of-the-art existing feature sets for constituency parsing-although Bansal et al. (2014) achieve good results for dependency parsing using embeddings. CCG categories contain much of the hierarchical structure needed for parsing, giving a simpler way to improve a parser using embeddings.",
"cite_spans": [
{
"start": 81,
"end": 102,
"text": "(Turian et al., 2010;",
"ref_id": "BIBREF39"
},
{
"start": 103,
"end": 127,
"text": "Collobert et al., 2011b)",
"ref_id": "BIBREF10"
},
{
"start": 223,
"end": 239,
"text": "Collobert (2011)",
"ref_id": "BIBREF11"
},
{
"start": 372,
"end": 392,
"text": "Socher et al. (2013)",
"ref_id": "BIBREF37"
},
{
"start": 530,
"end": 554,
"text": "Andreas and Klein (2014)",
"ref_id": "BIBREF0"
},
{
"start": 754,
"end": 774,
"text": "Bansal et al. (2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "We have shown that CCG parsing can be significantly improved by predicting lexical categories based on unsupervised word embeddings. The resulting parsing pipeline is simpler, and has improved performance both in and out of domain. We expect further improvements to follow as better word embeddings are developed, without other changes to our model. Our approach reduces the problem of sparsity caused by the large number of CCG categories, suggesting that finer-grained categories could be created for CCGBank (in the spirit of Honnibal et al. (2010)), which lead to improved performance in downstream semantic parsers. Future work should also explore domain-adaptation, either using unsupervised embeddings trained on out-of-domain text, or using supervised training on out-of-domain corpora. Our results also have implications for other NLP tasks-suggesting that using word embeddings features may be particularly useful out-of-domain, in pipelines that currently rely on POS taggers, and in tasks which suffer from sparsity in the labelled data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Code for our supertagger is released as part of the EASYCCG parser (Lewis and Steedman, 2014) , available from: https://github.com/ mikelewis0/easyccg",
"cite_spans": [
{
"start": 67,
"end": 93,
"text": "(Lewis and Steedman, 2014)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Transactions of the Association for Computational Linguistics, 2 (2014) 327-338. Action Editor: Ryan McDonald. Submitted 4/2014; Revised 6/2014; Published 10/2014. c 2014 Association for Computational Linguistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For embeddings that include separate entries for the same word with different capitalization, we take the most frequently occurring version in the unlabelled corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Implemented using the Torch7 library(Collobert et al., 2011a)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Implemented using CRFSuite(Okazaki, )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This model is not publicly available, so we re-trained it following the instructions at http://aclweb.org/aclwiki/ index.php?title=Training_the_C&C_Parser",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We briefly experimented setting the \u03b2 parameters to match the ambiguity of the C&C supertagger on Section 00 of CCGBank, which caused the F1-score using the Turian-50 embeddings to drop slightly from 86.11 to 85.95.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Tejaswini Deoskar, Bharat Ram Ambati, Michael Roth and the anonymous reviewers for helpful feedback on an earlier version of this paper. We also thank Rahul Jha for sharing his re-implementation of Collobert et al. (2011b) 's model, and Stephen Clark, Laura Rimell and Matthew Honnibal for making their out-ofdomain CCG corpora available.",
"cite_spans": [
{
"start": 221,
"end": 245,
"text": "Collobert et al. (2011b)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "How much do word embeddings encode about syntax",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Andreas",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Andreas and Dan Klein. 2014. How much do word embeddings encode about syntax. In Proceedings of ACL.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A comparison of loopy belief propagation and dual decomposition for integrated CCG supertagging and parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "470--480",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Auli and Adam Lopez. 2011a. A comparison of loopy belief propagation and dual decomposition for integrated CCG supertagging and parsing. In Pro- ceedings of the 49th Annual Meeting of the Associa- tion for Computational Linguistics: Human Language Technologies-Volume 1, pages 470-480. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Training a loglinear parser with loss functions via softmax-margin",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "333--343",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Auli and Adam Lopez. 2011b. Training a log- linear parser with loss functions via softmax-margin. In Proceedings of the Conference on Empirical Meth- ods in Natural Language Processing, pages 333-343. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Supertagging: An approach to almost parsing",
"authors": [
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Aravind",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 1999,
"venue": "Computational Linguistics",
"volume": "25",
"issue": "2",
"pages": "237--265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srinivas Bangalore and Aravind K Joshi. 1999. Su- pertagging: An approach to almost parsing. Compu- tational Linguistics, 25(2):237-265.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Tailoring continuous word representations for dependency parsing",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2014. Tailoring continuous word representations for depen- dency parsing. In Proceedings of the Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Wide-coverage semantic analysis with boxer",
"authors": [
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
}
],
"year": 2008,
"venue": "Conference Proceedings, Research in Computational Semantics",
"volume": "",
"issue": "",
"pages": "277--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johan Bos. 2008. Wide-coverage semantic analysis with boxer. In Johan Bos and Rodolfo Delmonte, editors, Semantics in Text Processing. STEP 2008 Conference Proceedings, Research in Computational Semantics, pages 277-286. College Publications.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Class-based n-gram models of natural language",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "P",
"middle": [
"V"
],
"last": "Desouza",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "V",
"middle": [
"J D"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Lai",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational Linguistics",
"volume": "18",
"issue": "4",
"pages": "467--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P.F. Brown, P.V. Desouza, R.L. Mercer, V.J.D. Pietra, and J.C. Lai. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467-479.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Widecoverage efficient statistical parsing with CCG and log-linear models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "James R Curran",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "4",
"pages": "493--552",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Clark and James R Curran. 2007. Wide- coverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4):493-552.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL-02 conference on Empirical methods in natural language processing",
"volume": "10",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden markov models: Theory and experi- ments with perceptron algorithms. In Proceedings of the ACL-02 conference on Empirical methods in natu- ral language processing-Volume 10, pages 1-8. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Torch7: A Matlab-like environment for machine learning",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Cl\u00e9ment",
"middle": [],
"last": "Farabet",
"suffix": ""
}
],
"year": 2011,
"venue": "BigLearn, NIPS Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Koray Kavukcuoglu, and Cl\u00e9ment Farabet. 2011a. Torch7: A Matlab-like environment for machine learning. In BigLearn, NIPS Workshop.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "The Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011b. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493- 2537.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Deep learning for efficient discriminative parsing",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
}
],
"year": 2011,
"venue": "JMLR Proceedings",
"volume": "15",
"issue": "",
"pages": "224--232",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert. 2011. Deep learning for effi- cient discriminative parsing. In Geoffrey J. Gordon, David B. Dunson, and Miroslav Dudk, editors, AIS- TATS, volume 15 of JMLR Proceedings, pages 224- 232. JMLR.org.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Multi-tagging for lexicalized-grammar parsing",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "James R Curran",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vadas",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "697--704",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James R Curran, Stephen Clark, and David Vadas. 2006. Multi-tagging for lexicalized-grammar parsing. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meet- ing of the Association for Computational Linguistics, pages 697-704. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Learning structural dependencies of words in the zipfian tail",
"authors": [
{
"first": "Tejaswini",
"middle": [],
"last": "Deoskar",
"suffix": ""
},
{
"first": "Markos",
"middle": [],
"last": "Mylonakis",
"suffix": ""
},
{
"first": "Khalil",
"middle": [],
"last": "Sima'an",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 12th International Conference on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "80--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tejaswini Deoskar, Markos Mylonakis, and Khalil Sima'an. 2011. Learning structural dependencies of words in the zipfian tail. In Proceedings of the 12th International Conference on Parsing Technolo- gies, pages 80-91. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Generalizing a Strongly Lexicalized Parser using Unlabeled Data",
"authors": [
{
"first": "Tejaswini",
"middle": [],
"last": "Deoskar",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Christodoulopoulos",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tejaswini Deoskar, Christos Christodoulopoulos, Alexandra Birch, and Mark Steedman. 2014. Gener- alizing a Strongly Lexicalized Parser using Unlabeled Data. In Proceedings of the 14th Conference of the European Chapter of the Association for Compu- tational Linguistics. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Frustratingly hard domain adaptation for dependency parsing",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Blitzer",
"suffix": ""
},
{
"first": "Partha",
"middle": [
"Pratim"
],
"last": "Talukdar",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Joao",
"middle": [],
"last": "Graca",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2007,
"venue": "EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "1051--1055",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Dredze, John Blitzer, Partha Pratim Talukdar, Kuz- man Ganchev, Joao Graca, and Fernando Pereira. 2007. Frustratingly hard domain adaptation for depen- dency parsing. In EMNLP-CoNLL, pages 1051-1055.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "CCGbank: a corpus of CCG derivations and dependency structures extracted from the Penn Treebank",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "3",
"pages": "355--396",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Hockenmaier and Mark Steedman. 2007. CCG- bank: a corpus of CCG derivations and dependency structures extracted from the Penn Treebank. Compu- tational Linguistics, 33(3):355-396.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Evaluating a statistical CCG parser on Wikipedia",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Honnibal",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Nothman",
"suffix": ""
},
{
"first": "James R",
"middle": [],
"last": "Curran",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Workshop on The People's Web Meets NLP: Collaboratively Constructed Semantic Resources",
"volume": "",
"issue": "",
"pages": "38--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Honnibal, Joel Nothman, and James R Cur- ran. 2009. Evaluating a statistical CCG parser on Wikipedia. In Proceedings of the 2009 Workshop on The People's Web Meets NLP: Collaboratively Con- structed Semantic Resources, pages 38-41. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Rebanking CCGbank for improved NP interpretation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Honnibal",
"suffix": ""
},
{
"first": "J",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bos",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "207--215",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Honnibal, J.R. Curran, and J. Bos. 2010. Rebanking CCGbank for improved NP interpretation. In Proceed- ings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 207-215. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Simple semi-supervised dependency parsing",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple semi-supervised dependency parsing.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Weakly supervised training of semantic parsers",
"authors": [
{
"first": "Jayant",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "12",
"issue": "",
"pages": "754--765",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jayant Krishnamurthy and Tom M. Mitchell. 2012. Weakly supervised training of semantic parsers. In Proceedings of the 2012 Joint Conference on Empir- ical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP- CoNLL '12, pages 754-765. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Inducing probabilistic CCG grammars from logical form with higher-order unification",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10",
"volume": "",
"issue": "",
"pages": "1223--1233",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwa- ter, and Mark Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order unification. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10, pages 1223-1233. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Scaling semantic parsers with onthe-fly ontology matching",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1545--1556",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on- the-fly ontology matching. In Proceedings of the 2013 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1545-1556, Seattle, Wash- ington, USA, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Combined Distributional and Logical Semantics",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2013,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "179--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Lewis and Mark Steedman. 2013a. Combined Distributional and Logical Semantics. Transactions of the Association for Computational Linguistics, 1:179- 192.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Unsupervised induction of cross-lingual semantic relations",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "681--692",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Lewis and Mark Steedman. 2013b. Unsupervised induction of cross-lingual semantic relations. In Pro- ceedings of the 2013 Conference on Empirical Meth- ods in Natural Language Processing, pages 681-692, Seattle, Washington, USA, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A* CCG Parsing with a Supertag-factored Model",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Lewis and Mark Steedman. 2014. A* CCG Pars- ing with a Supertag-factored Model. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, Octo- ber.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beat- rice Santorini. 1993. Building a large annotated cor- pus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Effective self-training for parsing",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the main conference on human language technology conference of the North American Chapter of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "152--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proceed- ings of the main conference on human language tech- nology conference of the North American Chapter of the Association of Computational Linguistics, pages 152-159. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representa- tions in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Statistical language models based on neural networks",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1\u0161 Mikolov. 2012. Statistical language models based on neural networks. Ph.D. thesis, Ph. D. the- sis, Brno University of Technology.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A scalable hierarchical distributed language model",
"authors": [
{
"first": "Andriy",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2008,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "1081--1088",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andriy Mnih and Geoffrey E Hinton. 2008. A scal- able hierarchical distributed language model. In NIPS, pages 1081-1088.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Evaluation of dependency parsers on unbounded dependencies",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Rimell",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Carlos G\u00f3mez",
"middle": [],
"last": "Rodr\u00edguez",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "833--841",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Laura Rimell, Ryan McDonald, and Carlos G\u00f3mez Rodr\u00edguez. 2010. Evaluation of dependency parsers on unbounded dependencies. In Proceedings of the 23rd International Conference on Computa- tional Linguistics (Coling 2010), pages 833-841, Bei- jing, China, August. Coling 2010 Organizing Commit- tee.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "CRFsuite: a fast implementation of conditional random fields (CRFs)",
"authors": [
{
"first": "Naoaki",
"middle": [],
"last": "Okazaki",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naoaki Okazaki. CRFsuite: a fast implementation of conditional random fields (CRFs).",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Bioinfer: a corpus for information extraction in the biomedical domain",
"authors": [
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Juho",
"middle": [],
"last": "Heimonen",
"suffix": ""
},
{
"first": "Jari",
"middle": [],
"last": "Bj\u00f6rne",
"suffix": ""
},
{
"first": "Jorma",
"middle": [],
"last": "Boberg",
"suffix": ""
},
{
"first": "Jouni",
"middle": [],
"last": "J\u00e4rvinen",
"suffix": ""
},
{
"first": "Tapio",
"middle": [],
"last": "Salakoski",
"suffix": ""
}
],
"year": 2007,
"venue": "BMC bioinformatics",
"volume": "8",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sampo Pyysalo, Filip Ginter, Juho Heimonen, Jari Bj\u00f6rne, Jorma Boberg, Jouni J\u00e4rvinen, and Tapio Salakoski. 2007. Bioinfer: a corpus for information extraction in the biomedical domain. BMC bioinfor- matics, 8(1):50.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Adapting a lexicalized-grammar parser to contrasting domains",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Rimell",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "475--484",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura Rimell and Stephen Clark. 2008. Adapting a lexicalized-grammar parser to contrasting domains. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 475-484. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Porting a lexicalized-grammar parser to the biomedical domain",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Rimell",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of Biomedical Informatics",
"volume": "42",
"issue": "5",
"pages": "852--865",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura Rimell and Stephen Clark. 2009. Porting a lexicalized-grammar parser to the biomedical domain. Journal of Biomedical Informatics, 42(5):852-865.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Unbounded dependency recovery for parser evaluation",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Rimell",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "813--821",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura Rimell, Stephen Clark, and Mark Steedman. 2009. Unbounded dependency recovery for parser evalua- tion. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2-Volume 2, pages 813-821. Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Parsing with compositional vector grammars",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the ACL conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, John Bauer, Christopher D Manning, and Andrew Y Ng. 2013. Parsing with compositional vec- tor grammars. In In Proceedings of the ACL confer- ence. Citeseer.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Semisupervised CCG lexicon extension",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Thomforde",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1246--1256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Thomforde and Mark Steedman. 2011. Semi- supervised CCG lexicon extension. In Proceedings of the Conference on Empirical Methods in Natural Lan- guage Processing, pages 1246-1256. Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Word representations: a simple and general method for semi-supervised learning",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Turian",
"suffix": ""
},
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "384--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384-394. Association for Computa- tional Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Ambiguity vs. Accuracy for different supertaggers, on words with incorrect POS tags."
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Ambiguity vs. Accuracy for different supertaggers, on words that do occur in the training data, but not with the category required in the test data."
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Ambiguity vs. Accuracy for different supertaggers, on words which are unseen in the labelled data."
},
"TABREF1": {
"num": null,
"content": "<table><tr><td>features (2-character suffixes and capitalization) are</td></tr><tr><td>also each represented with lookup tables, which map</td></tr><tr><td>each feature onto a K dimensional vector (as in Col-</td></tr><tr><td>lobert et al. (2011b), we use K = 5). Lookup table</td></tr><tr><td>parameters for non-embeddings features are initial-</td></tr><tr><td>ized with Gaussian noise.</td></tr><tr><td>which maps context words</td></tr><tr><td>onto vectors. The same lookup table parameters are</td></tr><tr><td>used wherever a word appears in the context win-</td></tr><tr><td>dow.</td></tr><tr><td>Word embeddings are implemented with a lookup</td></tr><tr><td>table W \u2208 R V \u00d7D , where V is the size of the vo-cabulary, and D is the dimension of the word em-</td></tr><tr><td>beddings. The parameters of the lookup table are</td></tr><tr><td>initialized using unsupervised embeddings, but are</td></tr><tr><td>modified during supervised training.</td></tr><tr><td>As in Collobert et al. (2011b), non-embedding</td></tr></table>",
"text": "",
"html": null,
"type_str": "table"
},
"TABREF3": {
"num": null,
"content": "<table><tr><td>Embeddings</td><td>Category</td><td>Category</td></tr><tr><td/><td>Accuracy</td><td>Accuracy</td></tr><tr><td/><td>(window=5)</td><td>(window=7)</td></tr><tr><td colspan=\"2\">Collobert&amp;Weston 90.0%</td><td>89.6%</td></tr><tr><td>Skip Gram</td><td>90.9%</td><td>91.0%</td></tr><tr><td>Turian-25</td><td>91.0%</td><td>91.1%</td></tr><tr><td>Turian-50 Turian-100</td><td>91.1% 91.0%</td><td>91.3% 91.1%</td></tr><tr><td>Turian-200</td><td>90.8%</td><td>90.7%</td></tr><tr><td>HLBL-50</td><td>90.9%</td><td>91.2%</td></tr><tr><td>HLBL-100 Mikolov-80</td><td>91.1% 87.2%</td><td>91.3% 88.1%</td></tr><tr><td>Mikolov-640</td><td>87.9%</td><td>88.4%</td></tr></table>",
"text": "For efficiency, we used our simplest architecture, with no additional",
"html": null,
"type_str": "table"
},
"TABREF4": {
"num": null,
"content": "<table/>",
"text": "Comparison of different embeddings and context windows on Section 00 of CCGBank. Abbreviations such as Turian-50 refer to the Turian embeddings with a 50-dimensional embedding space.",
"html": null,
"type_str": "table"
},
"TABREF6": {
"num": null,
"content": "<table/>",
"text": "Comparison of different model architectures, using the Turian embeddings and a 5-word context window. A size of 0 means no additional hidden layer was used.",
"html": null,
"type_str": "table"
},
"TABREF7": {
"num": null,
"content": "<table><tr><td>Supertagger</td><td>CCGBank</td><td/><td colspan=\"2\">Wikipedia</td><td/><td>Bioinfer</td></tr><tr><td>C&amp;C Honnibal et al. (2009) Brown Clusters Turian-50 Embeddings Turian-50 + POS tags Turian-50 +</td><td colspan=\"6\">1 COV F1 (all) 85.30 81.19 99.0 96 97 98 99 100 COV F1 F1 (all) (cov) 85.47 99.6 F1 (cov) 80.64 76.08 97.2 2 3 F1 COV F1 (cov) (all) 74.88 4 85.19 99.8 -81.75 99.4 ----Oracle Category Accuracy (%) Turian-50 C&amp;C 85.27 99.9 85.21 80.89 100.0 80.89 76.06 100.0 76.06 86.11 100.0 86.11 82.30 100.0 82.30 78.41 99.8 78.28 85.62 99.9 85.55 81.77 100.0 81.77 77.05 100.0 77.05</td></tr><tr><td/><td/><td/><td/><td colspan=\"3\">Average Categories Per Word</td></tr><tr><td/><td/><td/><td/><td/><td>Wikipedia</td></tr><tr><td/><td/><td>Oracle Category Accuracy (%)</td><td>94 96 98 100</td><td>1</td><td>2</td><td>3 Turian-50 C&amp;C</td><td>4</td></tr><tr><td/><td/><td/><td/><td colspan=\"3\">Average Categories Per Word</td></tr><tr><td/><td/><td>Oracle Category Accuracy (%)</td><td>92 94 96 98 100</td><td>1</td><td>Biomedical 2 3</td><td>4 C&amp;C Turian-50</td><td>5</td></tr><tr><td colspan=\"2\">'s best performing hybrid model 4 (trained on Sections 02-</td><td/><td/><td colspan=\"3\">Average Categories Per Word</td></tr><tr><td colspan=\"2\">21), with automatic POS-tags, and the parameters</td><td colspan=\"5\">Figure 2: Ambiguity vs. Accuracy for different su-</td></tr><tr><td/><td/><td colspan=\"5\">pertaggers across different domains. Datapoints for</td></tr><tr><td/><td/><td colspan=\"5\">the C&amp;C parser use its standard back-off parameters.</td></tr></table>",
"text": "Frequent words 86.04 100.0 86.04 82.44 100.0 82.44 78.10 100.0 78.10",
"html": null,
"type_str": "table"
}
}
}
}