| { |
| "paper_id": "R13-1046", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T14:56:22.742655Z" |
| }, |
| "title": "Towards Domain Adaptation for Parsing Web Data", |
| "authors": [ |
| { |
| "first": "Mohammad", |
| "middle": [], |
| "last": "Khan", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Indiana University Bloomington", |
| "location": { |
| "region": "IN", |
| "country": "USA" |
| } |
| }, |
| "email": "khanms@indiana.edu" |
| }, |
| { |
| "first": "Markus", |
| "middle": [], |
| "last": "Dickinson", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Indiana University Bloomington", |
| "location": { |
| "region": "IN", |
| "country": "USA" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Sandra", |
| "middle": [], |
| "last": "K\u00fcbler", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Indiana University Bloomington", |
| "location": { |
| "region": "IN", |
| "country": "USA" |
| } |
| }, |
| "email": "skuebler@indiana.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We improve upon a previous line of work for parsing web data, by exploring the impact of different decisions regarding the training data. First, we compare training on automatically POS-tagged data vs. gold POS data. Secondly, we compare the effect of training and testing within sub-genres, i.e., whether a close match of the genre is more important than training set size. Finally, we examine different ways to select out-of-domain parsed data to add to training, attempting to match the in-domain data in different shallow ways (sentence length, perplexity). In general, we find that approximating the in-domain data has a positive impact on parsing.", |
| "pdf_parse": { |
| "paper_id": "R13-1046", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We improve upon a previous line of work for parsing web data, by exploring the impact of different decisions regarding the training data. First, we compare training on automatically POS-tagged data vs. gold POS data. Secondly, we compare the effect of training and testing within sub-genres, i.e., whether a close match of the genre is more important than training set size. Finally, we examine different ways to select out-of-domain parsed data to add to training, attempting to match the in-domain data in different shallow ways (sentence length, perplexity). In general, we find that approximating the in-domain data has a positive impact on parsing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Parsing data from the web is notoriously difficult, as parsers are generally trained on news data (Petrov and McDonald, 2012) . The problem, however, varies greatly depending upon the particular piece of web data: what is often termed web data is generally a combination of different sub-genres, such as Facebook posts, Twitter feeds, YouTube comments, discussion forums, blogs, etc. The language used in such data does not follow standard conventions in various respects (see Herring, 2011): 1) The data is edited to varying degrees, with Twitter on the lower end and professional emails and blogs on the upper end of the scale.", |
| "cite_spans": [ |
| { |
| "start": 98, |
| "end": 125, |
| "text": "(Petrov and McDonald, 2012)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 477, |
| "end": 491, |
| "text": "Herring, 2011)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction and Motivation", |
| "sec_num": "1" |
| }, |
| { |
| "text": "2) The sub-genres often display characteristics of spoken language, including sentence fragments and colloquialisms. 3) Some web data, especially social media data, typically contains a high number of emoticons and acronyms such as LOL.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction and Motivation", |
| "sec_num": "1" |
| }, |
| { |
| "text": "At the same time, there is a clear need to develop basic NLP technology for a variety of types of web data. To perform tasks such as sentiment analysis (Nakagawa et al., 2010) or information extraction (McClosky et al., 2011) , it helps to part-of-speech (POS) tag and parse the data, as a step towards providing a shallow semantic analysis.", |
| "cite_spans": [ |
| { |
| "start": 152, |
| "end": 175, |
| "text": "(Nakagawa et al., 2010)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 202, |
| "end": 225, |
| "text": "(McClosky et al., 2011)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction and Motivation", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We continue our work (Khan et al., 2013) on dependency parsing web data from the English Web Treebank (Bies et al., 2012) . We previously showed that text normalization has a beneficial effect on the quality of a parser on web data, that we can further improve the parser's accuracy by a simple, n-gram-based parse revision method, and that having a balanced training set of out-of-domain and in-domain data provides the best results when parsing web data. The current work extends this previous work by more closely examining the data given as input for training the parser. Specifically, we take the following directions:", |
| "cite_spans": [ |
| { |
| "start": 21, |
| "end": 40, |
| "text": "(Khan et al., 2013)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 102, |
| "end": 121, |
| "text": "(Bies et al., 2012)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction and Motivation", |
| "sec_num": "1" |
| }, |
| { |
| "text": "1. All previous experiments were carried out on gold part of speech (POS) tags. Here, we investigate using a POS tagger trained on out-of-domain data, thus providing a more realistic setting for parsing web data. We specifically test the impact of training the parser on automatic POS tags (section 4).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction and Motivation", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Treebank (EWT) is divided into five different sub-genres: 1) answers to questions, 2) emails, 3) newsgroups, 4) reviews, and 5) weblogs. Figure 1 shows examples from the different sub-genres. So far, we used the whole set across these genres, which raises questions about whether a closer match of the genre is more important than the data size, and we thus investigate parsing results within each sub-genre, and whether adding easy-to-parse data to training improves performance for the difficult sub-genres (section 5).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 137, |
| "end": 145, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The web data provided in the English Web", |
| "sec_num": "2." |
| }, |
| { |
| "text": "3. Finally, from our previous work, we know that combining the EWT training set with sentences from the Penn Treebank is beneficial. However, we do not know how to best select the out-of-domain sentences. Should they be drawn randomly; should they match in size; should the sentences match in terms of parsing difficulty (cf. perplexity)? We explore different ways to match the in-domain data (section 6).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The web data provided in the English Web", |
| "sec_num": "2." |
| }, |
| { |
| "text": "There is a growing body of work on parsing web data, as evidenced by the 2012 Shared Task on Parsing the Web (Petrov and McDonald, 2012) . There have been many techniques employed for improving parsing models, including normalizing the potentially ill-formed text (Foster, 2010; Gadde et al., 2011; \u00d8vrelid and Skjaerholt, 2012) and training parsers on unannotated or reannotated data, e.g., self-training or uptraining, (e.g., Seddah et al., 2012; Roux et al., 2012; Foster et al., 2011b,a) . Less work has gone into investigating the impact of different genres or on specific details of the sentences given to the parser.", |
| "cite_spans": [ |
| { |
| "start": 109, |
| "end": 136, |
| "text": "(Petrov and McDonald, 2012)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 264, |
| "end": 278, |
| "text": "(Foster, 2010;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 279, |
| "end": 298, |
| "text": "Gadde et al., 2011;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 299, |
| "end": 328, |
| "text": "\u00d8vrelid and Skjaerholt, 2012)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 428, |
| "end": 448, |
| "text": "Seddah et al., 2012;", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 449, |
| "end": 467, |
| "text": "Roux et al., 2012;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 468, |
| "end": 491, |
| "text": "Foster et al., 2011b,a)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Indeed, Petrov and McDonald (2012) mention that for the shared task, \"[t]he goal was to build a single system that can robustly parse all domains, rather than to build several domain-specific systems.\" Thus, parsing results were not obtained by genre. However, Roux et al. (2012) demonstrated that using a genre classifier, in order to employ specific sub-grammars, helped improve parsing performance. Indeed, the quality and fit of data has been shown to matter for in-domain parsing (e.g. Hwa, 2001) , as well as for other genres, such as questions (Dima and Hinrichs, 2011) .", |
| "cite_spans": [ |
| { |
| "start": 8, |
| "end": 34, |
| "text": "Petrov and McDonald (2012)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 491, |
| "end": 501, |
| "text": "Hwa, 2001)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 551, |
| "end": 576, |
| "text": "(Dima and Hinrichs, 2011)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "One common, well-documented ailment of web parsers is the effect of erroneous POS tags on parsing accuracy. Foster et al. (2011a,b) , e.g., note that propagation of POS errors is a serious problem, especially for Twitter data. Researchers have thus worked on improving POS tagging for web data, whether by tagger voting (Zhang et al., 2012) or word clustering (Owoputi et al., 2012; Seddah et al., 2012) . There are no reports about the impact of the quality of POS tags for training, i.e., whether worse, automatically derived tags might be an improvement over gold tags, though S\u00f8gaard and Plank (2012) note that training with predicted POS tags improves performance.", |
| "cite_spans": [ |
| { |
| "start": 108, |
| "end": 131, |
| "text": "Foster et al. (2011a,b)", |
| "ref_id": null |
| }, |
| { |
| "start": 320, |
| "end": 340, |
| "text": "(Zhang et al., 2012)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 360, |
| "end": 382, |
| "text": "(Owoputi et al., 2012;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 383, |
| "end": 403, |
| "text": "Seddah et al., 2012)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 580, |
| "end": 604, |
| "text": "S\u00f8gaard and Plank (2012)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Researchers have trained parsers using additional data which generally fits the testing domain, as mentioned above. There has been less work, however, on extracting specific types of sentences which fit the domain well. Bohnet et al. (2012) noticed a problem with parsing fragments and so extracted longer NPs to include in training as standalone sentences. From a different perspective, S\u00f8gaard and Plank (2012) weight sentences in the training data rather than selecting a subset, to better match the distribution of the target domain. In general, identifying sentences which are similar to a particular domain is a concept familiar in active learning (e.g., Mirroshandel and Nasr, 2011; Sassano and Kurohashi, 2010), where dissimilar sentences are selected for hand-annotation to improve parsing.", |
| "cite_spans": [ |
| { |
| "start": 220, |
| "end": 240, |
| "text": "Bohnet et al. (2012)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 388, |
| "end": 412, |
| "text": "S\u00f8gaard and Plank (2012)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "3 Experimental Setup", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "For our experiments, we use two main resources, the Wall Street Journal (WSJ) portion of the Penn Treebank (PTB) (Marcus et al., 1993) and the English Web Treebank (EWT) (Bies et al., 2012) . The EWT is comprised of approx. 16 000 sentences from weblogs, newsgroups, emails, reviews, and question-answers. Note that our data sets are different from the ones in Khan et al. (2013) since in the previous work we had removed sentences with POS labels AFX and GW.", |
| "cite_spans": [ |
| { |
| "start": 113, |
| "end": 134, |
| "text": "(Marcus et al., 1993)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 170, |
| "end": 189, |
| "text": "(Bies et al., 2012)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 361, |
| "end": 379, |
| "text": "Khan et al. (2013)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "To create training and test sets, we broke the data into the following sets:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 WSJ training: sections 02-22 (42 009 sent.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 WSJ testing: section 23 (2 416 sent.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 EWT training: 80% of the data, taking the first four out of every five sentences (13 298 sent.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 EWT testing: 20% of the data, taking every fifth sentence (3 324 sent.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 EWT sub-genre training and test data: here, we create individual training and test sets for the 5 genres: EWT blog, EWT news, EWT email, EWT review, and EWT answer, using the same sampling described above", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The two corpora were converted from PTB constituency trees into dependency trees using the Stanford dependency converter (de Marneffe and Manning, 2008). 1 Since the EWT uses data that shows many of the characteristics of non-standard language, we decided to normalize the spelling of the EWT training and the test set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "For the normalization, we reduce all web URLs to a single token, i.e., each web URL is replaced with the place-holder URL. Similarly, all emoticons are replaced by a single marker EMO. Repeated use of punctuation, e.g., !!!, is reduced to a single punctuation token.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We use TnT (Brants, 2000) , a Markov model POS tagger using a trigram model. It is fast to train and has a state-of-the-art model for unknown words, using a suffix trie of hapax legomena.", |
| "cite_spans": [ |
| { |
| "start": 11, |
| "end": 25, |
| "text": "(Brants, 2000)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "POS Tagger", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We use MSTParser (McDonald and Pereira, 2006) , 2 a freely-available parser that reaches stateof-the-art accuracy in dependency parsing for English. MST is a graph-based parser which optimizes its parse tree globally (McDonald et al., 2005) , using a variety of feature sets, i.e., edge, ", |
| "cite_spans": [ |
| { |
| "start": 17, |
| "end": 45, |
| "text": "(McDonald and Pereira, 2006)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 217, |
| "end": 240, |
| "text": "(McDonald et al., 2005)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parser", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "For parser evaluation, we report unlabeled attachment scores (UAS) and labeled attachment scores (LAS), the percentage of dependencies which are attached correctly or attached and labeled correctly (K\u00fcbler et al., 2009) . Parser evaluation is carried out with MSTParser's evaluation module. For POS tagger evaluation, we report accuracy based on TnT's evaluation script. Significance testing was performed with the CoNLL 2007 shared task evaluation script, Dan Bikel's Randomized Parsing Evaluation Comparator. 3", |
| "cite_spans": [ |
| { |
| "start": 198, |
| "end": 219, |
| "text": "(K\u00fcbler et al., 2009)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "We here explore the effect of POS tagging on parsing web data, to see how closely the conditions for training should match the conditions for testing. However, first we need to gauge the effect of using the TnT POS tagger out of domain. For this reason, we conducted a set of experiments, training and testing TnT in different conditions. The results are shown in table 1. They show that TnT reaches an accuracy of 96.7% when trained and tested on the WSJ. This corroborates findings by Brants (2000) . When we train TnT on EWT training data, running it on the EWT testing data delivers an accuracy of 94.28%, already 2-3% below performance on news data. However, note that the EWT is much smaller than the full WSJ. In contrast, if we train TnT on WSJ and then use it for POS tagging EWT data, we only reach an accuracy of 88.73%. Even if we balance the source and target domain data, which proved beneficial in our previous experiments on parsing (Khan et al., 2013) , we reach an accuracy of 93.48%, well below the in-domain tagging result for the EWT. This means that in contrast to parsing, the POS tagger requires less training data and profits more from the small target domain training set than from a larger training set with out-of-domain data. Given this degree of error in tagging, a parser trained with similar noise in POS tags may outperform one which is trained on gold tags. Thus, we run TnT on the training data, using a 10-fold split of the training set: each tenth of the training corpus is tagged using a POS tagger trained on the other 9 folds. Then we use the combination of all the automatically POS tagged folds and insert those POS tags into the gold standard dependency trees before we train the parser.", |
| "cite_spans": [ |
| { |
| "start": 487, |
| "end": 500, |
| "text": "Brants (2000)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 949, |
| "end": 968, |
| "text": "(Khan et al., 2013)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The effect of POS tagging", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The three conditions for POS tagging are shown in table 2. The first point to note is the impact of switching from gold to automatic POS tags: testing on TnT tags results in a degradation of about 4.5-5.5% in LAS, as compared to gold standard POS tags in the test set, consistent with typical drops in performance (e.g., Rehbein et al., 2012) .", |
| "cite_spans": [ |
| { |
| "start": 321, |
| "end": 342, |
| "text": "Rehbein et al., 2012)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The effect of POS tagging", |
| "sec_num": "4" |
| }, |
| { |
| "text": "More to the point for our purposes, we see in table 2 that training a parser on automatically assigned POS tags outperforms a parser trained on gold POS tags. LAS increases from 77.69% to 78.54%. This supports the notion that training data should match testing closely. However, it also shows that we need to investigate methods for improving POS tagger accuracy.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The effect of POS tagging", |
| "sec_num": "4" |
| }, |
| { |
| "text": "As mentioned, the EWT contains sub-corpora from five different genres, and, while they share many common features (misspellings, unknown words), they have many unique properties, as illustrated in the examples in figure 1. In terms of sentence length, domains such as weblogs lend themselves more easily to longer, more well-edited sentences, matching news data better. Reviews, on the other hand, often have shorter sentences, similar to, e.g., email greetings. Run-ons are common across genres, but we see them here in the answer and news sub-genres. The example for the answer sub-corpus shows some of the difficult challenges faced by a parser, as it contains a declarative sentence embedded within the question, where the final word (please) attaches back to the question.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The effect of domain", |
| "sec_num": "5" |
| }, |
| { |
| "text": "To gauge the effect of different sub-genres, we trained and tested the parser within each sub-genre. In order to concentrate on the differences in parsing, we used gold POS tags for these experiments. Results for the five individual sub-corpora are given in the first five rows of table 3. It is noteworthy that there is nearly a 5% difference in LAS between the best sub-genre (EWT email) and the worst (EWT answer). We also show various properties of the sub-corpora, including the number of tokens (Tokens), the average sentence length (Sen-Len), and the number of finite verbal roots (Fin-Root) 4 in training, as well as the percentage of unknown word tokens in the test corpus, as compared to the training corpus (Unk.).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The effect of domain", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In general, emails and reviews fare the best, likely due to a combination of shorter sentences (11.84 and 14.58, respectively) and text that tends to follow grammatical conventions. Blogs and newsgroups are in the middle, with longer, harder-to-parse sentences (18.17 and 22.07, respectively) and higher levels of unknown words in testing (12.2% and 10.2%), while being consistently fairly well-edited. While it might be surprising that the results for these two sub-genres are lower than those for emails and reviews, note that the amount of training data for both domains is significantly smaller, on the order of 10,000 words less than for the other corpora. It is possible that with more data, these well-edited domains would see improved parser performance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The effect of domain", |
| "sec_num": "5" |
| }, |
| { |
| "text": "On the lower end of the parsing spectrum is the domain of answers, which is curious. There is nearly as much training data as with emails and reviews, and the average sentence length is comparable. If we look at the number of finite sentence roots, as a way to approximate the number of non-fragment sentences, it is nearly identical to the email sub-genre. We suspect that the fragments are not as systematic as greetings and that users may post replies quickly, leading to less well-formed text, but this deserves future consideration.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The effect of domain", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Table 3: The effect of domain on parser performance, using gold POS tags (** = sig. at the 0.01 level, testing all conditions below the line, as compared to the first row Train=EWT answer). Given the poor performance on the answer domain and the higher performance of the parser on emails, we decided to see whether parsing could be improved by adding data to the small answer training set 1) from the domain that is easiest to parse: emails, 2) from the news domain because of its similar average sentence length, and 3) from the blog domain because it has the longest sentences. We compare these configurations with one where we add the same number of sentences, but sampled from all four remaining domains (balanced) and one where we add all the training data from all other genres (rest). We see a clear improvement for all settings, in comparison with using only the answer data for training. The best results are obtained by using all other genres as additional training data, showing that the size of the training set is the most important variable. The results also show that sampling from all remaining sub-genres results in higher parsing accuracy than just using the easiest-to-parse data set, illustrating that we should not look for data which is generally easy to parse, but data which is the best fit for the test data.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 0, |
| "end": 7, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "The effect of domain", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In our previous work (Khan et al., 2013), we showed that we obtain the best results when we use a balanced training corpus with the same number of sentences from the EWT and the WSJ. On the one hand, these results show that in-domain data is critical for the success of the parser; on the other hand, out-of-domain data is important to increase the size of the training set. It is thus important to find a good balance between using more training data and not overpowering the in-domain data. This leads to the question of whether it is possible to choose sentences from an out-of-domain data set that are similar to the sentences in the target domain, rather than just selecting a portion of consecutive sentences. In other words, can we identify sentences from the WSJ that will have the best impact on a parser for web data? Table 4: The effect of selection on parser performance: all experiments on EWT testing data with gold POS tags; WSJ data defined in the text (*/** = sig. at the 0.05/0.01 level, testing the 4 perplexity models as compared to EWT+WSJSent).", |
| "cite_spans": [ |
| { |
| "start": 21, |
| "end": 40, |
| "text": "(Khan et al., 2013)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 828, |
| "end": 835, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "The effect of sentence selection", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In the first set of experiments, we investigate simple heuristics to choose a good set of training sentences from the WSJ. First, we use the full WSJ (EWT+WSJ). Then we restrict the WSJ part to match the number of sentences from the EWT (EWT+WSJSent). However, since WSJ sentences are longer on average than EWT sentences, we repeat the experiment but choose the WSJ subset so that it matches the number of words in the EWT training set (EWT+WSJToken). Finally, we choose the WSJ sentences so that they match the distribution of sentence lengths in EWT (EWT+WSJDist). For example, if EWT has 100 sentences with 10 words, we select 100 sentences of length 10 from the WSJ. All of these experiments are again carried out with gold POS tags.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The effect of sentence selection", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The results of these experiments are shown in the first two parts of table 4. The results for the selection methods show that selecting the WSJ part based on the number of words results in the lowest parsing accuracy. Choosing the WSJ part based on the number of sentences or the distribution of sentence length results in the same unlabeled accuracy (UAS) of 86.34%, as compared to 86.26% for the word-based selection. However, the selection based on the number of sentences results in a higher labeled accuracy of 83.83%, as opposed to 83.73% for the distribution of sentence length. We suspect that the random selection of sentences gives more variety, which is beneficial for training. However, note that the difference in the number of words in the training set across these three methods is minimal: they vary only by 41 words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The effect of sentence selection", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In a second set of experiments, we decided to use a more informed method for choosing similar sentences: perplexity. Thus, we trained a 5-gram word language model on the (stemmed) words of the test set, and then calculated perplexity for each sentence in the WSJ, normalized by the length of the sentence. We used the CMU-Cambridge Statistical Language Modeling Toolkit 5 for calculating perplexity. Perplexity should give an approximation of the distance between sentences in the two corpora. We experimented with different selection strategies:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The effect of sentence selection", |
| "sec_num": "6" |
| }, |
| { |
| "text": "1. Low Perplexity (LowP): We select the sentences with the lowest perplexity, i.e., the most similar ones to the test set; we restricted the number of sentences from the WSJ to match the size of the EWT training set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The effect of sentence selection", |
| "sec_num": "6" |
| }, |
| { |
| "text": "2. All Low Perplexity (AllLowP): Here, we also selected sentences with low perplexity, but this time used all sentences below the median, i.e. half the WSJ sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The effect of sentence selection", |
| "sec_num": "6" |
| }, |
| { |
| "text": "3. Low Perplexity close to the median (Med-LowP): Here, we investigate the effect of choosing sentences that are less similar to the test sentences: we select the same number of sentences as with LowP, but this time from the median down. In other words, the sentences with the lowest perplexity, i.e., the most similar sentences, are excluded. This is based on the assumption that if the chosen sentences are too similar, they will not have much effect on the trained model. 4. Mid-range Perplexity (MidP): In this set, we choose sentences that are even less similar to the test sentences. We again choose the same number of sentences as in the EWT training set, but half of them from the median down and half from the median up.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The effect of sentence selection", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The results are in the final four rows of table 4. Interestingly, the best-performing method adds low-perplexity data to training. Thus, selecting data which is more similar to the domain helps the most. Furthermore, once the data is farther away, it starts to harm parsing performance, as can be seen in the (albeit minimal) difference between the EWT+LowP and EWT+AllLowP models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The effect of sentence selection", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Exploring the parsing of web data, we have investigated the decisions that go into assembling the training data, demonstrating that the better the training data fits the test data, in properties ranging from the nature of the POS tags to the selection of sentences, the better the parser performs. We first compared training on automatically POS-tagged data vs. gold POS data, showing that performance improves when the training data is automatically tagged. Next, we compared the effect of training and testing within sub-genres and saw that features such as sentence length have a strong effect. Finally, we examined ways to select out-of-domain parsed data to add to training, attempting to match the in-domain data in shallow ways, and we found that selecting training sentences via a language model improves parsing. In short, fitting the training data to the in-domain data, even in fairly superficial ways, has a positive impact on parsing results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Summary and Outlook", |
| "sec_num": "7" |
| }, |
| { |
| "text": "There are several directions to take this work. First, the sentence selection methods can be combined with self-training techniques, not only to increase the size of the training data, but also to add only sentences that fit the test domain well. Secondly, the work on understanding the sub-genres of web data deserves more thorough treatment, to tease apart which components are most problematic (e.g., sentence fragments), how they can be automatically identified, and how the parser can be adjusted to accommodate them.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Summary and Outlook", |
| "sec_num": "7" |
| }, |
| { |
| "text": "http://nextens.uvt.nl/depparse-wiki/SoftwarePage", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The Stanford converter treats the predicate of copular sentences, e.g., a noun or adjective, as the head; thus, the number of finite roots does not correspond directly to the number of non-fragmentary sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.speech.cs.cmu.edu/SLM_info.html", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "English Web Treebank. Linguistic Data Consortium", |
| "authors": [ |
| { |
| "first": "Ann", |
| "middle": [], |
| "last": "Bies", |
| "suffix": "" |
| }, |
| { |
| "first": "Justin", |
| "middle": [], |
| "last": "Mott", |
| "suffix": "" |
| }, |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Warner", |
| "suffix": "" |
| }, |
| { |
| "first": "Seth", |
| "middle": [], |
| "last": "Kulick", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ann Bies, Justin Mott, Colin Warner, and Seth Kulick. 2012. English Web Treebank. Linguistic Data Con- sortium, Philadelphia, PA.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "SANCL 2012 shared task: The IMS system description", |
| "authors": [ |
| { |
| "first": "Bernd", |
| "middle": [], |
| "last": "Bohnet", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Farkas", |
| "suffix": "" |
| }, |
| { |
| "first": "Ozlem", |
| "middle": [], |
| "last": "Cetinoglu", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Workshop on the Syntactic Analysis of Non-Canonical Language", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bernd Bohnet, Richard Farkas, and Ozlem Cetinoglu. 2012. SANCL 2012 shared task: The IMS system description. In Workshop on the Syntactic Analysis of Non-Canonical Language (SANCL 2012). Mon- treal, Canada.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "TnT - a statistical part-of-speech tagger", |
| "authors": [ |
| { |
| "first": "Thorsten", |
| "middle": [], |
| "last": "Brants", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of the Sixth Applied Natural Language Processing Conference (ANLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "224--231", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thorsten Brants. 2000. TnT -a statistical part-of- speech tagger. In Proceedings of the Sixth Applied Natural Language Processing Conference (ANLP), pages 224-231. Seattle, WA.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "The Stanford typed dependencies representation", |
| "authors": [ |
| { |
| "first": "Marie-Catherine", |
| "middle": [], |
| "last": "De Marneffe", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "COLING 2008 Workshop on Crossframework and Cross-domain Parser Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marie-Catherine de Marneffe and Christopher D. Man- ning. 2008. The Stanford typed dependencies rep- resentation. In COLING 2008 Workshop on Cross- framework and Cross-domain Parser Evaluation. Manchester, England.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "A semi-automatic, iterative method for creating a domain-specific treebank", |
| "authors": [ |
| { |
| "first": "Corina", |
| "middle": [], |
| "last": "Dima", |
| "suffix": "" |
| }, |
| { |
| "first": "Erhard", |
| "middle": [], |
| "last": "Hinrichs", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing (RANLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "413--419", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Corina Dima and Erhard Hinrichs. 2011. A semi- automatic, iterative method for creating a domain- specific treebank. In Proceedings of the Interna- tional Conference Recent Advances in Natural Lan- guage Processing (RANLP), pages 413-419. Hissar, Bulgaria.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "cba to check the spelling\": Investigating parser performance on discussion forum posts", |
| "authors": [ |
| { |
| "first": "Jennifer", |
| "middle": [], |
| "last": "Foster", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of NAACL-HLT 2010", |
| "volume": "", |
| "issue": "", |
| "pages": "381--384", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jennifer Foster. 2010. \"cba to check the spelling\": In- vestigating parser performance on discussion forum posts. In Proceedings of NAACL-HLT 2010, pages 381-384. Los Angeles, CA.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "#hardtoparse: POS tagging and parsing the twitterverse", |
| "authors": [ |
| { |
| "first": "Jennifer", |
| "middle": [], |
| "last": "Foster", |
| "suffix": "" |
| }, |
| { |
| "first": "Ozlem", |
| "middle": [], |
| "last": "Cetinoglu", |
| "suffix": "" |
| }, |
| { |
| "first": "Joachim", |
| "middle": [], |
| "last": "Wagner", |
| "suffix": "" |
| }, |
| { |
| "first": "Joseph", |
| "middle": [ |
| "Le" |
| ], |
| "last": "Roux", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Hogan", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "Deirdre", |
| "middle": [], |
| "last": "Hogan", |
| "suffix": "" |
| }, |
| { |
| "first": "Josef", |
| "middle": [], |
| "last": "Van Genabith", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "The AAAI-11 Workshop on Analyzing Microtext", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jennifer Foster, Ozlem Cetinoglu, Joachim Wagner, Joseph Le Roux, Stephen Hogan, Joakim Nivre, Deirdre Hogan, and Josef van Genabith. 2011a. #hardtoparse: POS tagging and parsing the twitter- verse. In The AAAI-11 Workshop on Analyzing Mi- crotext. San Francisco.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "From news to comment: Resources and benchmarks for parsing the language of Web 2.0", |
| "authors": [ |
| { |
| "first": "Jennifer", |
| "middle": [], |
| "last": "Foster", |
| "suffix": "" |
| }, |
| { |
| "first": "Ozlem", |
| "middle": [], |
| "last": "Cetinoglu", |
| "suffix": "" |
| }, |
| { |
| "first": "Joachim", |
| "middle": [], |
| "last": "Wagner", |
| "suffix": "" |
| }, |
| { |
| "first": "Joseph", |
| "middle": [ |
| "Le" |
| ], |
| "last": "Roux", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "Deirdre", |
| "middle": [], |
| "last": "Hogan", |
| "suffix": "" |
| }, |
| { |
| "first": "Josef", |
| "middle": [], |
| "last": "Van Genabith", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of IJCNLP-11", |
| "volume": "", |
| "issue": "", |
| "pages": "893--901", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jennifer Foster, Ozlem Cetinoglu, Joachim Wagner, Joseph Le Roux, Joakim Nivre, Deirdre Hogan, and Josef van Genabith. 2011b. From news to comment: Resources and benchmarks for parsing the language of Web 2.0. In Proceedings of IJCNLP-11, pages 893-901. Chiang Mai, Thailand.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Adapting a WSJ trained part-of-speech tagger to noisy text: Preliminary results", |
| "authors": [ |
| { |
| "first": "Phani", |
| "middle": [], |
| "last": "Gadde", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [ |
| "V" |
| ], |
| "last": "Subramaniam", |
| "suffix": "" |
| }, |
| { |
| "first": "Tanveer", |
| "middle": [ |
| "A" |
| ], |
| "last": "Faruquie", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of Joint Workshop on Multilingual OCR and Analytics for Noisy Unstructured Text Data", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Phani Gadde, L. V. Subramaniam, and Tanveer A. Faruquie. 2011. Adapting a WSJ trained part-of- speech tagger to noisy text: Preliminary results. In Proceedings of Joint Workshop on Multilingual OCR and Analytics for Noisy Unstructured Text Data. Beijing, China.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Discourse in Web 2.0: Familiar, reconfigured, and emergent", |
| "authors": [ |
| { |
| "first": "Susan", |
| "middle": [], |
| "last": "Herring", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Georgetown University Round Table on Languages and Linguistics 2011: Discourse 2.0: Language and New Media", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Susan Herring. 2011. Discourse in Web 2.0: Familiar, reconfigured, and emergent. In Georgetown Uni- versity Round Table on Languages and Linguistics 2011: Discourse 2.0: Language and New Media. Washington, DC.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "On minimizing training corpus for parser acquisition", |
| "authors": [ |
| { |
| "first": "Rebecca", |
| "middle": [], |
| "last": "Hwa", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the Workshop on Computational Natural Language Learning (CoNLL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rebecca Hwa. 2001. On minimizing training corpus for parser acquisition. In Proceedings of the Work- shop on Computational Natural Language Learning (CoNLL). Toulouse, France.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Does size matter? Text and grammar revision for parsing social media data", |
| "authors": [ |
| { |
| "first": "Mohammad", |
| "middle": [], |
| "last": "Khan", |
| "suffix": "" |
| }, |
| { |
| "first": "Markus", |
| "middle": [], |
| "last": "Dickinson", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandra", |
| "middle": [], |
| "last": "K\u00fcbler", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the NAACL Workshop on Language Analysis in Social Media", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mohammad Khan, Markus Dickinson, and Sandra K\u00fcbler. 2013. Does size matter? Text and grammar revision for parsing social media data. In Proceed- ings of the NAACL Workshop on Language Analysis in Social Media. Atlanta, GA.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Dependency Parsing", |
| "authors": [ |
| { |
| "first": "Sandra", |
| "middle": [], |
| "last": "K\u00fcbler", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sandra K\u00fcbler, Ryan McDonald, and Joakim Nivre. 2009. Dependency Parsing. Morgan & Claypool Publishers.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Building a large annotated corpus of English: The Penn Treebank", |
| "authors": [ |
| { |
| "first": "Mitchell", |
| "middle": [], |
| "last": "Marcus", |
| "suffix": "" |
| }, |
| { |
| "first": "Beatrice", |
| "middle": [], |
| "last": "Santorini", |
| "suffix": "" |
| }, |
| { |
| "first": "Mary", |
| "middle": [ |
| "Ann" |
| ], |
| "last": "Marcinkiewicz", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "2", |
| "pages": "313--330", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computa- tional Linguistics, 19(2):313-330.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Event extraction as dependency parsing", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Mcclosky", |
| "suffix": "" |
| }, |
| { |
| "first": "Mihai", |
| "middle": [], |
| "last": "Surdeanu", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of ACL-HLT-11", |
| "volume": "", |
| "issue": "", |
| "pages": "1626--1635", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David McClosky, Mihai Surdeanu, and Christopher Manning. 2011. Event extraction as dependency parsing. In Proceedings of ACL-HLT-11, pages 1626-1635. Portland, OR.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Online large-margin training of dependency parsers", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Koby", |
| "middle": [], |
| "last": "Crammer", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of ACL-05", |
| "volume": "", |
| "issue": "", |
| "pages": "91--98", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of de- pendency parsers. In Proceedings of ACL-05, pages 91-98. Ann Arbor, MI.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Active learning for dependency parsing using partially annotated sentences", |
| "authors": [ |
| { |
| "first": "Seyed", |
| "middle": [ |
| "Abolghasem" |
| ], |
| "last": "Mirroshandel", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Nasr", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 12th International Conference on Parsing Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "140--149", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algo- rithms. In Proceedings of EACL-06. Trento, Italy. Seyed Abolghasem Mirroshandel and Alexis Nasr. 2011. Active learning for dependency parsing us- ing partially annotated sentences. In Proceedings of the 12th International Conference on Parsing Tech- nologies, pages 140-149. Dublin, Ireland.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Dependency tree-based sentiment classification using CRFs with hidden variables", |
| "authors": [ |
| { |
| "first": "Tetsuji", |
| "middle": [], |
| "last": "Nakagawa", |
| "suffix": "" |
| }, |
| { |
| "first": "Kentaro", |
| "middle": [], |
| "last": "Inui", |
| "suffix": "" |
| }, |
| { |
| "first": "Sadao", |
| "middle": [], |
| "last": "Kurohashi", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of NAACL-HLT 2010", |
| "volume": "", |
| "issue": "", |
| "pages": "786--794", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tetsuji Nakagawa, Kentaro Inui, and Sadao Kurohashi. 2010. Dependency tree-based sentiment classifica- tion using CRFs with hidden variables. In Proceed- ings of NAACL-HLT 2010, pages 786-794. Los An- geles, CA.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Lexical categories for improved parsing of web data", |
| "authors": [ |
| { |
| "first": "Lilja", |
| "middle": [], |
| "last": "\u00d8vrelid", |
| "suffix": "" |
| }, |
| { |
| "first": "Arne", |
| "middle": [], |
| "last": "Skjaerholt", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 24th International Conference on Computational Linguistics (COLING 2012)", |
| "volume": "", |
| "issue": "", |
| "pages": "903--912", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lilja \u00d8vrelid and Arne Skjaerholt. 2012. Lexical cate- gories for improved parsing of web data. In Proceed- ings of the 24th International Conference on Com- putational Linguistics (COLING 2012), pages 903- 912. Mumbai, India.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Improved part-of-speech tagging for online conversational text with word clusters", |
| "authors": [ |
| { |
| "first": "Olutobi", |
| "middle": [], |
| "last": "Owoputi", |
| "suffix": "" |
| }, |
| { |
| "first": "Brendan", |
| "middle": [], |
| "last": "O'Connor", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Gimpel", |
| "suffix": "" |
| }, |
| { |
| "first": "Nathan", |
| "middle": [], |
| "last": "Schneider", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of NAACL 2013", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A. Smith. 2012. Improved part-of-speech tagging for online conversational text with word clusters. In Proceedings of NAACL 2013. Atlanta, GA.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Overview of the 2012 shared task on parsing the web", |
| "authors": [ |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Workshop on the Syntactic Analysis of Non-Canonical Language", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Slav Petrov and Ryan McDonald. 2012. Overview of the 2012 shared task on parsing the web. In Work- shop on the Syntactic Analysis of Non-Canonical Language (SANCL 2012). Montreal, Canada.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Better tags give better trees - or do they?", |
| "authors": [ |
| { |
| "first": "Ines", |
| "middle": [], |
| "last": "Rehbein", |
| "suffix": "" |
| }, |
| { |
| "first": "Hagen", |
| "middle": [], |
| "last": "Hirschmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Anke", |
| "middle": [], |
| "last": "L\u00fcdeling", |
| "suffix": "" |
| }, |
| { |
| "first": "Marc", |
| "middle": [], |
| "last": "Reznicek", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Linguistic Issues in Language Technology (LiLT)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ines Rehbein, Hagen Hirschmann, Anke L\u00fcdeling, and Marc Reznicek. 2012. Better tags give better trees - or do they? Linguistic Issues in Language Technol- ogy (LiLT), 7(10).", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "DCU-Paris13 systems for the SANCL 2012 shared task", |
| "authors": [ |
| { |
| "first": "Joseph", |
| "middle": [], |
| "last": "Le Roux", |
| "suffix": "" |
| }, |
| { |
| "first": "Jennifer", |
| "middle": [], |
| "last": "Foster", |
| "suffix": "" |
| }, |
| { |
| "first": "Joachim", |
| "middle": [], |
| "last": "Wagner", |
| "suffix": "" |
| }, |
| { |
| "first": "Rasul", |
| "middle": [ |
| "Samad", |
| "Zadeh" |
| ], |
| "last": "Kaljahi", |
| "suffix": "" |
| }, |
| { |
| "first": "Anton", |
| "middle": [], |
| "last": "Bryl", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Workshop on the Syntactic Analysis of Non-Canonical Language", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joseph Le Roux, Jennifer Foster, Joachim Wagner, Ra- sul Samad Zadeh Kaljahi, and Anton Bryl. 2012. DCU-Paris13 systems for the SANCL 2012 shared task. In Workshop on the Syntactic Analysis of Non-Canonical Language (SANCL 2012). Montreal, Canada.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Using smaller constituents rather than sentences in active learning for Japanese dependency parsing", |
| "authors": [ |
| { |
| "first": "Manabu", |
| "middle": [], |
| "last": "Sassano", |
| "suffix": "" |
| }, |
| { |
| "first": "Sadao", |
| "middle": [], |
| "last": "Kurohashi", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "356--365", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Manabu Sassano and Sadao Kurohashi. 2010. Using smaller constituents rather than sentences in active learning for Japanese dependency parsing. In Pro- ceedings of the 48th Annual Meeting of the Associa- tion for Computational Linguistics, pages 356-365. Uppsala, Sweden.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "The Alpage architecture at the SANCL 2012 shared task: Robust pre-processing and lexical bridging for user-generated content parsing", |
| "authors": [ |
| { |
| "first": "Djam\u00e9", |
| "middle": [], |
| "last": "Seddah", |
| "suffix": "" |
| }, |
| { |
| "first": "Benoit", |
| "middle": [], |
| "last": "Sagot", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie", |
| "middle": [], |
| "last": "Candito", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Workshop on the Syntactic Analysis of Non-Canonical Language", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Djam\u00e9 Seddah, Benoit Sagot, and Marie Candito. 2012. The alpage architecture at the sancl 2012 shared task: Robust pre-processing and lexical bridging for user-generated content parsing. In Workshop on the Syntactic Analysis of Non-Canonical Language (SANCL 2012). Montreal, Canada.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Parsing the web as covariate shift", |
| "authors": [ |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "S\u00f8gaard", |
| "suffix": "" |
| }, |
| { |
| "first": "Barbara", |
| "middle": [], |
| "last": "Plank", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Workshop on the Syntactic Analysis of Non-Canonical Language", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anders S\u00f8gaard and Barbara Plank. 2012. Parsing the web as covariate shift. In Workshop on the Syntac- tic Analysis of Non-Canonical Language (SANCL 2012). Montreal, Canada.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "HIT dependency parsing: Bootstrap aggregating heterogeneous parsers", |
| "authors": [ |
| { |
| "first": "Meishan", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Wanxiang", |
| "middle": [], |
| "last": "Che", |
| "suffix": "" |
| }, |
| { |
| "first": "Yijia", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhenghua", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Ting", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Workshop on the Syntactic Analysis of Non-Canonical Language", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Meishan Zhang, Wanxiang Che, Yijia Liu, Zhenghua Li, and Ting Liu. 2012. Hit dependency pars- ing: Bootstrap aggregating heterogeneous parsers. In Workshop on the Syntactic Analysis of Non- Canonical Language (SANCL 2012). Montreal, Canada.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "Example sentences from each sub-genre (<s> = sentence boundary)", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "text": "Answer: where can I get morcillas in tampa bay , I will like the argentinian type , but I will try anothers please ? 2. Email: Michael : <s> Thanks for putting the paperwork together . <s> I would have interest in meeting if you can present unique investment opportunities that I do n't have access to now .", |
| "html": null, |
| "content": "<table><tr><td>3. News: complete with original Magnavox tubes -</td></tr><tr><td>all tubes have been tested they are all good -stereo</td></tr><tr><td>amp</td></tr><tr><td>4. Review: Buyer Beware !! <s> Rusted out and</td></tr><tr><td>unsafe cars sold here !</td></tr><tr><td>5. Blog: The Supreme Court announced its ruling</td></tr><tr><td>today in Hamdan v. Rumsfeld divided along ide-</td></tr><tr><td>logical lines with John Roberts abstaining due to</td></tr><tr><td>his involvement at the D.C. Circuit level and An-</td></tr><tr><td>thony Kennedy joining the liberals in a 5 -3 deci-</td></tr><tr><td>sion that is 185 pages long .</td></tr></table>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF1": { |
| "text": "http://nlp.stanford.edu/software/ stanford-dependencies.shtml 2 http://sourceforge.net/projects/mstparser/", |
| "html": null, |
| "content": "<table><tr><td>Train</td><td>Test</td><td>POS acc.</td></tr><tr><td>WSJ</td><td>WSJ</td><td>96.73%</td></tr><tr><td>EWT</td><td>EWT</td><td>94.28%</td></tr><tr><td>WSJ</td><td>EWT</td><td>88.73%</td></tr><tr><td colspan=\"2\">WSJ+EWT (balanced) EWT</td><td>93.48%</td></tr><tr><td colspan=\"3\">Table 1: Results of using TnT in and out of domain</td></tr><tr><td colspan=\"3\">sibling, context, and non-local features, employ-</td></tr><tr><td colspan=\"3\">ing information from words and POS tags. We use</td></tr><tr><td colspan=\"3\">its default settings for all experiments.</td></tr></table>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "text": "The effect of POS tagging on parser performance, using the base EWT data split (*=sig. at the 0.01 level, as compared to Train=Gold/ Test=TnT)", |
| "html": null, |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |