{
"paper_id": "C08-1015",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:25:12.202460Z"
},
"title": "Learning Reliable Information for Dependency Parsing Adaptation",
"authors": [
{
"first": "Wenliang",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "ATR * Machine Translation Group",
"institution": "National Institute of Information and Communications Technology",
"location": {
"addrLine": "3-5 Hikari-dai, Seika-cho, Soraku-gun",
"postCode": "619-0289",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "chenwl@nict.go.jp"
},
{
"first": "Youzheng",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "ATR * Machine Translation Group",
"institution": "National Institute of Information and Communications Technology",
"location": {
"addrLine": "3-5 Hikari-dai, Seika-cho, Soraku-gun",
"postCode": "619-0289",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "youzheng.wu@nict.go.jp"
},
{
"first": "Hitoshi",
"middle": [],
"last": "Isahara",
"suffix": "",
"affiliation": {
"laboratory": "ATR * Machine Translation Group",
"institution": "National Institute of Information and Communications Technology",
"location": {
"addrLine": "3-5 Hikari-dai, Seika-cho, Soraku-gun",
"postCode": "619-0289",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "isahara@nict.go.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we focus on the adaptation problem that has a large labeled data in the source domain and a large but unlabeled data in the target domain. Our aim is to learn reliable information from unlabeled target domain data for dependency parsing adaptation. Current state-of-the-art statistical parsers perform much better for shorter dependencies than for longer ones. Thus we propose an adaptation approach by learning reliable information on shorter dependencies in an unlabeled target data to help parse longer distance words. The unlabeled data is parsed by a dependency parser trained on labeled source domain data. The experimental results indicate that our proposed approach outperforms the baseline system, and is better than current state-of-the-art adaptation techniques.",
"pdf_parse": {
"paper_id": "C08-1015",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we focus on the adaptation problem that has a large labeled data in the source domain and a large but unlabeled data in the target domain. Our aim is to learn reliable information from unlabeled target domain data for dependency parsing adaptation. Current state-of-the-art statistical parsers perform much better for shorter dependencies than for longer ones. Thus we propose an adaptation approach by learning reliable information on shorter dependencies in an unlabeled target data to help parse longer distance words. The unlabeled data is parsed by a dependency parser trained on labeled source domain data. The experimental results indicate that our proposed approach outperforms the baseline system, and is better than current state-of-the-art adaptation techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Dependency parsing aims to build the dependency relations between words in a sentence. There are many supervised learning methods for training high-performance dependency parsers (Nivre et al., 2007) , if given sufficient labeled data. However, the performance of parsers declines when we are in the situation that a parser is trained in one \"source\" domain but is to parse the sentences in a second \"target\" domain. There are two tasks (Daum\u00e9 III, 2007) for the domain adaptation problem. The first one is that we have a large labeled data in the source domain and a small labeled data in target c 2008.",
"cite_spans": [
{
"start": 179,
"end": 199,
"text": "(Nivre et al., 2007)",
"ref_id": "BIBREF10"
},
{
"start": 437,
"end": 454,
"text": "(Daum\u00e9 III, 2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Licensed under the Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported license (http://creativecommons.org/licenses/by-nc-sa/3.0/). Some rights reserved. domain. The second is similar, but instead of having a small labeled target data, we have a large but unlabeled target data. In this paper, we focus on the latter one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Current statistical dependency parsers perform worse while the distance of two words is becoming longer for domain adaptation. An important characteristic of parsing adaptation is that the parsers perform much better for shorter dependencies than for longer ones (the score at length l is much higher than the scores at length> l ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose an approach by using the information on shorter dependencies in autoparsed target data to help parse longer distance words for adapting a parser. Compared with the adaptation methods of Sagae and Tsujii (2007) and Reichart and Rappoport (2007) , our approach uses the information on word pairs in auto-parsed data instead of using the whole sentences as newly labeled data for training new parsers. It is difficult to detect reliable parsed sentences, but we can find relative reliable parsed word pairs according to dependency length. The experimental results show that our approach significantly outperforms baseline system and current state of the art techniques.",
"cite_spans": [
{
"start": 212,
"end": 235,
"text": "Sagae and Tsujii (2007)",
"ref_id": "BIBREF13"
},
{
"start": 240,
"end": 269,
"text": "Reichart and Rappoport (2007)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In dependency parsing, we assign head-dependent relations between words in a sentence. A simple example is shown in Figure 1 , where the arc between a and hat indicates that hat is the head of a.",
"cite_spans": [],
"ref_spans": [
{
"start": 116,
"end": 124,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Motivation and prior work",
"sec_num": "2"
},
{
"text": "Current statistical dependency parsers perform better if the dependency lengthes are shorter (Mc-Donald and Nivre, 2007) . Here the length of the dependency from word w i to word w j is simply equal to |i \u2212 j|. Figure 2: The scores relative to dependency length. \"SameDomain\" refers to training and testing in the same domain, and \"diffDomain\" refers to training and testing in two domains (domain adaptation). score) 1 on our testing data, provided by a deterministic parser, which is trained on labeled source data. Comparing two curves at the figure, we find that the scores of diffDomain decreases much more sharply than the scores of sameDomain, when dependency length increases. The score decreases from about 92% at length 1 to 50% at 7. When lengthes are larger than 7, the scores are below 50%. We also find that the score at length l is much higher (around 10%) than the score at length l + 1 from length 1 to 7. There is only one exception that the score at length 4 is a little less than the score at length 5. But this does not change so much and the scores at length 4 and 5 are much higher than the one at length 6. Two words (word w i and word w j ) having a dependency relation in one sentence can be adjacent words (word distance = 1), neighboring words (word distance = 2), or the words with distance > 2 in other sentences. Here the distance of word pair (word w i and word w j ) is equal to |i \u2212 j|. For example, \"a\" and \"hat\" has dependency relation in the sentence at Figure 1 . They can also be adjacent words in the sentence \"The boy saw a hat.\" and the words with distance = 3 in \"I see a red beautiful hat.\". This makes it possible for the word pairs with different distances to share the information.",
"cite_spans": [
{
"start": 93,
"end": 120,
"text": "(Mc-Donald and Nivre, 2007)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 1491,
"end": 1499,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Motivation and prior work",
"sec_num": "2"
},
{
"text": "According to the above observations, we present an idea that the information on shorter dependencies in auto-parsed target data is reliable for parsing the words with longer distance for domain adaptation. Here, \"shorter\" is not exactly short. That is to say, the information on dependency length l in auto-parsed data can be used to help parse the words whose distances are longer than l when testing, where l can be any number. We do not use the dependencies whose lengthes are too long because the accuracies of long dependencies are very low.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation and prior work",
"sec_num": "2"
},
{
"text": "In the following content, we demonstrate our idea with an example. The example shows how to use the information on length 1 to help parse two words whose distance is longer than 1. Similarly, the information on length l can also be used to help parse the words whose distance is longer than l. Figure 2 shows that the dependency parser performs best at tagging the relations between adjacent words. Thus, we expect that dependencies of adjacent words in auto-parsed target data can provide useful information for parsing words whose distances are longer than 1. We suppose that our task is Chinese dependency parsing adaptation.",
"cite_spans": [],
"ref_spans": [
{
"start": 294,
"end": 302,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Motivation and prior work",
"sec_num": "2"
},
{
"text": "Here, we have two words \" JJ(large-scale)\" and \" NN(exhibition)\". Figure 3 shows the examples in which word distances of these two words are different. For the sentences in the bottom part, there is a ambiguity of \"JJ + NN1 + NN2\" at \" JJ(large-scale)/ NN(art)/ NN(exhibition)\", \" JJ(largescale)/ NN(culture)/ NN(art)/ NN(exhibition)\" and \" JJ(large-scale)/ NR(China)/ NN(culture)/ NN(art)/ NN(exhibition)\". Both NN1 and NN2 could be the head of JJ. In the examples in the upper part, \"",
"cite_spans": [],
"ref_spans": [
{
"start": 66,
"end": 74,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Motivation and prior work",
"sec_num": "2"
},
{
"text": "JJ(large-scale)\" and \" NN(exhibition)\" are adjacent words, for which current parsers can work well. We use a parser to parse the sentences in the upper part. \" (exhibition)\" is assigned as the head of \" (large-scale)\". Then we expect the information from the upper part can help parse the sentences in the bottom part. Now, we consider what a learning model could do to assign the appropriate relation between \" (large-scale)\" and \" (exhibition)\" in the bottom part. We provide additional information that \" (exhibition)\" is the possible head of \" (large-scale)\" in the auto-parsed data (the upper part). In this way, the learning model may use this information to make correct decision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation and prior work",
"sec_num": "2"
},
{
"text": "A1)\u2026\u040f\u1b8d\u1e9c\u1a82\u1a65\u0944\u080a\u045a\u01c9\u03dc\u0f47\u0b82\u0822/\u0d5f \u0d5f \u0d5f \u0d5f/\u1229\u188c \u1229\u188c \u1229\u188c \u1229\u188c/\u01ca\u1177\u04f4\u186c)\u2026 A2)\u2026\u2138/\u0d5f \u0d5f \u0d5f \u0d5f/\u1229\u188c \u1229\u188c \u1229\u188c \u1229\u188c/\u03f5\u232d\u2233\u202b\u0fa8\u05c5\u202c\u1177\u04f4\u1fac \u2026 B1)\u2026\u0a95\u0739\u1ecd\u202b\u0759\u202c/\u0d5f \u0d5f \u0d5f \u0d5f/\u113e\u1d03/\u1229\u188c \u1229\u188c \u1229\u188c \u1229\u188c/\u02c8\u1235\u127b\u0f2a-\u1e95\u0468\u0915-\u2026 B3)\u2026\u0caf\u13b1\u09b8\u04b7\u1b5b\u08ea\u113e\u1d03\u01c3\u1c34\u059f/\u0d5f \u0d5f \u0d5f \u0d5f/\u0401/\u1b5b\u08ea/\u113e\u1d03/\u1229\u188c \u1229\u188c \u1229\u188c \u1229\u188c/\u01c4 B2)\u2026\u08eb\u0480\u0412\u1710\u0548\u13c8\u1843/\u0d5f \u0d5f \u0d5f \u0d5f/\u1b5b\u08ea/\u113e\u1d03/\u1229 \u1229 \u1229 \u1229\u188c \u188c \u188c \u188c/\u01c4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation and prior work",
"sec_num": "2"
},
{
"text": "Figure 3: Examples for \" (large-scale)\" and \" (exhibition)\". The upper part (A) refers to the sentences from unlabeled data and the bottom part (B) refers to the sentences waiting for parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation and prior work",
"sec_num": "2"
},
{
"text": "Up to now, we demonstrate how to use the information on length 1. Similarly, we can use the information on length 2, 3, . . . . By this way, we propose an approach by exploiting the information from a large-scale unlabeled target data for dependency parsing adaptation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation and prior work",
"sec_num": "2"
},
{
"text": "In this paper, our approach is to use unlabeled data for parsing adaptation. There are several studies relevant to ours as described below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation and prior work",
"sec_num": "2"
},
{
"text": "CoNLL 2007 (Nivre et al., 2007) organized a shared task for domain adaptation without annotated data in new domain. The labeled data was from the Wall Street Journal, the development data was from biomedical abstracts, and the testing data was from chemical abstracts and parent-child dialogues. Additionally, a large unlabeled corpus was provided. The systems by Sagae and Tsujii (2007) , Attardi et al. (2007) , and Dredze et al. (2007) performed top three in the shared task. Sagae and Tsujii (2007) presented a procedure similar to a single iteration of co-training. Firstly, they trained two parsers on labeled source data. Then the two parsers were used to parse the sentences in unlabeled data. They selected only identical parsing results produced by the two parsers. Finally, they retrained a parser on newly parsed sentences and the original labeled data. They performed the highest scores for this track. Attardi et al. (2007) presented a procedure with correcting errors by a revision techniques. Dredze et al. (2007) submitted parsing results without adaptation. They declared that it was difficult to significantly improve performance on any test domain beyond that of a state-of-the-art parser. Their error analysis suggested that the primary cause of loss from adaptation is because of differences in the annotation guidelines. Without specific knowledge of the target domain's annotation standards, significant improvement can not be made. Reichart and Rappoport (2007) studied selftraining method for domain adaptation (The WSJ data and the Brown data) of phrase-based parsers. McClosky et al. (2006) presented a successful instance of parsing with self-training by using a reranker. Both of them used the whole sentences as newly labeled data for adapting the parsers, while our approach uses the information on word pairs. Chen et al. (2008) presented an approach by using the information of adjacent words for indomain parsing. 
As Figure 2 shows, the score curves of sameDomain (in-domain) parsing and diffDomain (out-domain) parsing are quite different. Our work focuses on parsing adaptation and is based on the fact that current parsers perform much better for shorter dependencies than for longer ones. This causes that our work differs in that we use the information on shorter dependencies in auto-parsed target data to help parse the words with longer distance for parsing adaptation. In this paper, \"shorter\" and \"longer\" are relative. Length l is relatively shorter than length l + 1, where l can be any number.",
"cite_spans": [
{
"start": 11,
"end": 31,
"text": "(Nivre et al., 2007)",
"ref_id": "BIBREF10"
},
{
"start": 364,
"end": 387,
"text": "Sagae and Tsujii (2007)",
"ref_id": "BIBREF13"
},
{
"start": 390,
"end": 411,
"text": "Attardi et al. (2007)",
"ref_id": "BIBREF0"
},
{
"start": 418,
"end": 438,
"text": "Dredze et al. (2007)",
"ref_id": "BIBREF5"
},
{
"start": 479,
"end": 502,
"text": "Sagae and Tsujii (2007)",
"ref_id": "BIBREF13"
},
{
"start": 916,
"end": 937,
"text": "Attardi et al. (2007)",
"ref_id": "BIBREF0"
},
{
"start": 1009,
"end": 1029,
"text": "Dredze et al. (2007)",
"ref_id": "BIBREF5"
},
{
"start": 1457,
"end": 1486,
"text": "Reichart and Rappoport (2007)",
"ref_id": "BIBREF12"
},
{
"start": 1596,
"end": 1618,
"text": "McClosky et al. (2006)",
"ref_id": "BIBREF6"
},
{
"start": 1843,
"end": 1861,
"text": "Chen et al. (2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 1952,
"end": 1960,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Motivation and prior work",
"sec_num": "2"
},
{
"text": "In this paper, we choose the model described by Nivre (2003) as our parsing model. It is a deterministic parser and works quite well in the sharedtask of CoNLL2006 (Nivre et al., 2006) .",
"cite_spans": [
{
"start": 48,
"end": 60,
"text": "Nivre (2003)",
"ref_id": "BIBREF11"
},
{
"start": 164,
"end": 184,
"text": "(Nivre et al., 2006)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The parsing approach",
"sec_num": "3"
},
{
"text": "The Nivre (2003) model is a shift-reduce type algorithm, which uses a stack to store processed tokens and a queue to store remaining input tokens. It can perform dependency parsing in O(n) time. The dependency parsing tree is built from atomic actions in a left-to-right pass over the input. The parsing actions are defined by four operations: Shift, Reduce, Left-Arc, and Right-Arc, for the stack and the queue. TOP is the token on top of the stack and NEXT is next token in the queue. The Left-Arc and Right-Arc operations mean that there is a dependency relation between TOP and NEXT.",
"cite_spans": [
{
"start": 4,
"end": 16,
"text": "Nivre (2003)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The parsing model",
"sec_num": "3.1"
},
{
"text": "The model uses a classifier to produce a sequence of actions for a sentence. In this paper, we use the SVM model. And LIBSVM (Chang and Lin, 2001 ) is used in our experiments.",
"cite_spans": [
{
"start": 125,
"end": 145,
"text": "(Chang and Lin, 2001",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The parsing model",
"sec_num": "3.1"
},
{
"text": "Note that the approach (see section 4)we present in this paper can also be applied to other parsers, such as the parser by Yamada and Matsumoto (2003) , or the one by McDonald et al. (2006) .",
"cite_spans": [
{
"start": 123,
"end": 150,
"text": "Yamada and Matsumoto (2003)",
"ref_id": "BIBREF14"
},
{
"start": 167,
"end": 189,
"text": "McDonald et al. (2006)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The parsing model",
"sec_num": "3.1"
},
{
"text": "The parser is a history-based parsing model, which relies on features of the parsed tokens to predict next parsing action. We represent basic features based on words and part-of-speech (POS) tags. The basic features are listed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing with basic features",
"sec_num": "3.2"
},
{
"text": "\u2022 Lexical Features on TOP: the word of TOP, the word of the head of TOP, and the words of leftmost and rightmost dependent of TOP. \u2022 Lexical Features on NEXT: the word of NEXT and the word of the token immediately after NEXT in the original input string. \u2022 POS features on TOP: the POS of TOP, the POS of the token immediately below TOP, and the POS of leftmost and rightmost dependent of TOP. \u2022 POS features on NEXT: the POS of NEXT, the POS of next three tokens after NEXT, and the POS of the token immediately before NEXT in original input string.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing with basic features",
"sec_num": "3.2"
},
{
"text": "Based on the above parsing model and basic features, we train a basic parser on annotated source data. In the following content, we call this parser Basic Parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing with basic features",
"sec_num": "3.2"
},
{
"text": "This section presents our adaptation approach by using the information based on relative shorter dependencies in auto-parsed data to help parse the words whose distances are longer. Firstly, we use the Basic Parser to parse all the sentences in unlabeled target data. Then we explore reliable information based on dependency relations in autoparsed data. Finally, we incorporate the features based on reliable information into the parser to improve performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain adaptation with shorter dependency",
"sec_num": "4"
},
{
"text": "In this section, we collect word pairs from the auto-parsed data. At first, we collect the word pairs with length 1. In a parsed sentence, if two words have dependency relation and their word distance is 1, we will add this word pair into the list L dep and count its frequency. We also consider the direction, LA for left arc and RA for right arc. For example, \" (large-scale)\" and \" (exhibition)\" are adjacent words in the sentence \" (We)/ (held)/ (large-scale)/ (exhibition)/ \" and have a left dependency arc assigned by the Basic Parser. The word pair \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting word pairs from auto-parsed data",
"sec_num": "4.1"
},
{
"text": "(large-scale)-(exhibition)\" with \"LA\" is added into L dep .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting word pairs from auto-parsed data",
"sec_num": "4.1"
},
{
"text": "Similarly, we collect the pairs whose word distances are longer than 1. In L dep , with length l and direction dr(LA or RA), the pair p u has f req l (p u : dr). For example, f req 2 (p u : LA) = 3 refers to the word pair p u with left arc(LA) occurs 3 times in the auto-parsed data when two words' distance is 2. Because figure 2 shows that the accuracies of long dependencies are low, we only collect the pairs whose distances are not larger than a predefined length l max .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting word pairs from auto-parsed data",
"sec_num": "4.1"
},
{
"text": "The word pair p t is the pair < w i , w j >.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The adaptation approach",
"sec_num": "4.2"
},
{
"text": "If the distance of p t is d, we will use the pairs whose lengthes are less than d. It results in the words with different distances using different set of word pairs in L dep . For example, if d is 5, we can use the pairs with dependency lengthes from 1 to 4 in L dep . The information is represented by the equation as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The information on shorter distances",
"sec_num": "4.2.1"
},
{
"text": "I d (p t : dr) = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f3 0 p t L dep f req 1 (p t : dr) d = 1 d\u22121 l=1 f req l (p t : dr) d > 1 (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The information on shorter distances",
"sec_num": "4.2.1"
},
{
"text": "According to I d (p t : dr), word pairs are grouped into different buckets as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifying into buckets",
"sec_num": "4.2.2"
},
{
"text": "Bucket d (p t : dr) = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 B 0 I d (p t : dr) = 0 B 1 0 < I d (p t : dr) \u2264 f 1 . . . B n f n\u22121 < I d (p t : dr) \u2264 f n B a f n < I d (p t : dr) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifying into buckets",
"sec_num": "4.2.2"
},
{
"text": "where, f 1 , f 2 , ..., f n are the thresholds. For example, I 3 ( -:LA) is 20, f 3 = 15 and f 4 = 25. Then it is grouped into the bucket B 4 . We set f 1 = 2, f 2 = 8, and f 3 = 15 in the experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifying into buckets",
"sec_num": "4.2.2"
},
{
"text": "Based on the buckets of word pairs, we represent new features on labeled source data for the parser. We call these new features adapting features. According to different word distances between TOP and NEXT, the features are listed at Table 1 . So we have 8 types of the adapting features, including 2 types for distance=1, 3 types for distance=2, and 3 types for distance\u22653. Each feature is formatted as \"DistanceType:FeatureType:Bucket\", where DistanceType is D1, D2, or D3 corresponding to three distances, FeatureType is FB0, FB1, or FB 1 corresponding to three positions. Here, if a word pair has two dependency directions in L dep , we will choose the direction having higher frequency.",
"cite_spans": [],
"ref_spans": [
{
"start": 234,
"end": 241,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parsing with the adapting Features",
"sec_num": "4.2.3"
},
{
"text": "Then using the parsing model of Nivre (2003) , we train a new parser based on the adapting features and basic features.",
"cite_spans": [
{
"start": 32,
"end": 44,
"text": "Nivre (2003)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing with the adapting Features",
"sec_num": "4.2.3"
},
{
"text": "distance FB 1 FB0 FB1 =1 + + =2 + + + \u22653 + + + Table 1 : Adapting features. FB0 refers to the bucket of the word pair of TOP and NEXT, FB1 refers to the bucket of the word pair of TOP and next token after NEXT, and FB 1 refers to the bucket of the word pair of TOP and the token immediately before NEXT. \"+\" refers to this item having this type of feature.",
"cite_spans": [],
"ref_spans": [
{
"start": 47,
"end": 54,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parsing with the adapting Features",
"sec_num": "4.2.3"
},
{
"text": "We show an example for representing the adapting features. For example, we have the string \" JJ(large-scale)/ NN(culture)/ NN(art)/ NN(exhibition)/ \". And \" (large-scale)\" is TOP and \" (exhibition)\" is NEXT. Because the distance of TOP and NEXT is 3, we have three features. We suppose that (FB0) the bucket of the word pair (\" -\") of TOP and NEXT is bucket B 4 , (FB1) the bucket of the word pair (\" -\") of TOP and next token after NEXT is bucket B 0 , and (FB 1) the bucket of the word pair (\" -\")of TOP and the token immediately before NEXT is bucket B 1 . Then, we have the features: \"D3:FB0:B 4 \", \"D3:FB1:B 0 \", and \"D3:FB 1:B 1 \".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An example",
"sec_num": "4.2.4"
},
{
"text": "The unknown word problem is an important issue for domain adaptation (Dredze et al., 2007) . Our approach can work for improving performance of parsing unknown word pairs in which there is at least one unknown word. We collect word pairs including unknown word pairs at Section 4.1. Then unknown word pairs in testing data are also mapped into one of the buckets via Equation (2). So known word pairs can share the features with unknown word pairs.",
"cite_spans": [
{
"start": 69,
"end": 90,
"text": "(Dredze et al., 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation for unknown word 2",
"sec_num": "4.3"
},
{
"text": "CoNLL 2007 (Nivre et al., 2007) organized the domain adaptation task and provided a data set in English. However, the data set had differences between the annotation guidelines in source and target domains. Without specific knowledge of the target domain's annotation standards, significant improvement can not be made (Dredze et al., 2007) . In this paper, we discussed the situation that the data of source and target domains were annotated under the same annotation guideline. So we used a data set converted from Penn Chinese Treebank (CTB) 3 .",
"cite_spans": [
{
"start": 11,
"end": 31,
"text": "(Nivre et al., 2007)",
"ref_id": "BIBREF10"
},
{
"start": 319,
"end": 340,
"text": "(Dredze et al., 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5"
},
{
"text": "Labeled data: the CTB(V5.0) was used in our experiments. The data set was converted by the same rules for conversion as Chen et al. (2008) did. We used files 1-270, 400-554, and 600-931 as source domain training data (STrain), files 271-300 as source domain testing data (STest) and files 590-596 as target domain testing data (TTest). We used the gold standard segmentation and POS tags in the CTB. The target domain data was from Sinorama magazine, Taiwan and the source domain data was mainly from Xinhua newswire, mainland of China. The genres of these two parts were quite different. Table 2 shows the statistical information of the data sets. Given the words of the STrain data, TTest included 30.79% unknown words. We also checked the distribution of POS tags. The difference was large, too.",
"cite_spans": [
{
"start": 120,
"end": 138,
"text": "Chen et al. (2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 589,
"end": 596,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5"
},
{
"text": "Unlabeled data: three data sets were used in our experiments, including the PFR data (5.44M words), the CKIP data (5.44M words), and the SINO data (25K words). The PFR corpus 4 included the documents from People's Daily at 1998 and we used about 1/3 of all sentences. The CKIP 5 corpus was used for SIGHAN word segmentation bakeoff 2005. To simplify, we used their segmentation. The SINO data was the files 1001-1151 of CTB, also from Sinorama magazine, the same as our testing target data. We removed the annotation tags from the SINO data. Among the three unlabeled data, the SINO data was closest to testing target data because they came from the same resource. Table 2 : The information of the data sets closer to source domain and the CKIP data was closer to target domain. To assign POS tags for the unlabeled data, we used the package TNT (Brants, 2000) to train a POS tagger on training data. Because the PFR data and the CTB used different POS standards, we did not use the POS tags in the PFR data.",
"cite_spans": [
{
"start": 846,
"end": 860,
"text": "(Brants, 2000)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 665,
"end": 672,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5"
},
{
"text": "We measured the quality of the parser by the unlabeled attachment score (UAS), i.e., the percentage of tokens with correct head. We also reported the accuracy of ROOT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5"
},
{
"text": "In the following content, OURS refers to our proposed approach. The baseline system refers to the Basic Parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "In this section, we examined the performance of baseline systems and our proposed approach with different unlabeled data sets. Table 3 shows the experimental results, where \"OURS with SINO(GOLD)\" refers to the parser using gold standard POS tags, and \"OURS with SINO(AUTO)\" refers to the parser using autoassigned POS tags. From the two results of baseline, we found that the parser performed very differently in two domains by 8.24%.",
"cite_spans": [],
"ref_spans": [
{
"start": 127,
"end": 134,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Basic experiments",
"sec_num": "6.1"
},
{
"text": "With the help of SINO(AUTO), OURS provided 1.11% improvement for UAS and 6.16% for ROOT. If we used gold standard POS tags, the score was 78.40% for UAS (1.34% improvement), and 65.40% for ROOT (6.64% improvement). By using the SINO data, our approach achieved significant improvements over baseline system. It was surprised that OURS with CKIP achieved 78.30% score, just a little lower than the one with SINO(GOLD). The reason may be that the size of the CKIP data was much bigger than the SINO data. So we can obtain more word pairs from the CKIP data. The parser achieved 0.30% Table 4 : The effect of different l max improvement with PFR. Even though the size of the SINO data was smaller, the parser performed well with its help.",
"cite_spans": [],
"ref_spans": [
{
"start": 582,
"end": 589,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Basic experiments",
"sec_num": "6.1"
},
{
"text": "These results indicated that we should collect the unlabeled data that is closer to target domain or larger. The improvements of OURS with CKIP and OURS with SINO were significant in one-tail paired t-test (p < 10 \u22125 ). Table 4 shows the experimental results, where l max is described at Section 4.1. With SINO(GOLD), our parser performed best at l max = 7. And with SINO(AUTO), it performed best at l max = 5. These indicated that our approach can incorporate pairs with different lengthes to improve performance. We also found that the long dependencies were not reliable, as the curve (diffDomain) of Figure 2 showed that the scores were less than 50% when lengthes were larger than 8.",
"cite_spans": [],
"ref_spans": [
{
"start": 220,
"end": 227,
"text": "Table 4",
"ref_id": null
},
{
"start": 604,
"end": 612,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Basic experiments",
"sec_num": "6.1"
},
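The role of l max can be illustrated with a short sketch: only word pairs whose dependency length does not exceed l max are harvested from auto-parsed data as reliable information. The data layout and function names below are our assumptions for illustration, not the paper's implementation.

```python
def collect_short_pairs(parsed_sentences, l_max):
    """Collect (dependent_word, head_word) pairs whose dependency
    length |i - head(i)| is at most l_max; ROOT attachments skipped.
    Each sentence is (words, heads) with 1-based heads, 0 = ROOT."""
    pairs = []
    for words, heads in parsed_sentences:
        for i, h in enumerate(heads, start=1):
            if h != 0 and abs(i - h) <= l_max:
                pairs.append((words[i - 1], words[h - 1]))
    return pairs

# Toy auto-parsed sentence: "the" -> "boy", "boy" -> "ran", "ran" = root.
sents = [(["the", "boy", "ran"], [2, 3, 0])]
print(collect_short_pairs(sents, 1))  # [('the', 'boy'), ('boy', 'ran')]
```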
{
"text": "In this section, we turned to compare our approach with other methods. We implemented two systems: SelfTrain and CoTrain. The SelfTrain system was following to the method described by Reichart and Rappoport (2007) and randomly selected new auto-parsed sentences. The CoTrain system was similar to the learning scheme described by Sagae and Tsujii (2007) . However, we did not use the same parsing algorithms as the ones used by Sagae and Tsujii (2007) Table 5 : The results of several adaptation methods with CKIP trained a forward parser (same as our baseline system) and a backward parser. Then the identical parsed sentences by the two parsers were selected as newly labeled data. Finally, we retrained a forward parser with new training data. We selected the sentences having about 200k words from the CKIP data as newly labeled data for the SelfTrain and CoTrain systems. Table 5 shows the experimental results. Both systems provided about 0.4%-0.5% improvement over baseline system. Our approach performed best among all systems. Another problem was that the time for training the SelfTrain and CoTrain systems became much longer because they almost used double size of training data.",
"cite_spans": [
{
"start": 184,
"end": 213,
"text": "Reichart and Rappoport (2007)",
"ref_id": "BIBREF12"
},
{
"start": 330,
"end": 353,
"text": "Sagae and Tsujii (2007)",
"ref_id": "BIBREF13"
},
{
"start": 428,
"end": 451,
"text": "Sagae and Tsujii (2007)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 452,
"end": 459,
"text": "Table 5",
"ref_id": null
},
{
"start": 877,
"end": 884,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison of other systems",
"sec_num": "6.3"
},
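The agreement-based selection step in the CoTrain setup described above can be sketched as follows. The parser objects here are stand-ins (the real systems are trained dependency parsers); only the selection logic is illustrated.

```python
def select_agreed(sentences, forward_parser, backward_parser):
    """Keep sentences for which the forward and backward parsers
    produce identical head lists; return (sentence, parse) pairs
    usable as newly labeled training data."""
    agreed = []
    for sent in sentences:
        f = forward_parser(sent)
        b = backward_parser(sent)
        if f == b:
            agreed.append((sent, f))
    return agreed

# Toy stand-in parsers returning fixed head lists.
forward = lambda s: [2, 0, 2] if s == "a b c" else [0, 1]
backward = lambda s: [2, 0, 2] if s == "a b c" else [0, 2]
pairs = select_agreed(["a b c", "d e"], forward, backward)
print([s for s, _ in pairs])  # ['a b c']
```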
{
"text": "In this section, we try to understand the benefit in our proposed adaptation methods. Here, we compare OURS's results with baseline's.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "7"
},
{
"text": "We presented an idea that using the information on shorter dependencies in auto-parsed target data to help parse the words with longer distance for domain adaptation. In this section, we investigated how our approach performed for parsing longer distance words. Figure 4 shows the improvement relative to dependency length. From the figure, we found that our approach always performed better than baseline when dependency lengthes were 1-7. Especially, our approach achieved improvements by 2.58% at length 3, 5.38% at 6, and 3.67% at 7. For longer ones, the improvement was not stable. One reason may be that the numbers of longer ones were small. Another reason was that parsing long distance words was very difficult. However, we still found that our approach did improve the performance for longer ones, by performing better at 8 points and worse at 5 points when length was not less than 8. The unknown word problem is an important issue for adaptation. Our approach can partially release the unknown word problem. We listed the data of the numbers of unknown words from 0 to 8 because the number of sentences was very small for others. We grouped each sentence into one of three classes: (Better) those where our approach's score increased relative to the baseline's score, (NoChange) those where the score remained the same, and (Worse) those where the score had a relative decrease. We added another class (NoWorse) by merging Better and NoChange. Figure 5 shows the experimental results, where x axis refers to the number of unknown words in one sentence and y axis refers to how many percent the class has. For example, for the sentences having 5 unknown words, about 45.45% improved, 22.73% became worse, 31.82% kept unchanged, and 77.27% did not become worse. The NoWorse curve showed that regardless of the number of unknown words in a sentence, there was more than 60% chance that our approach did not harm the result. 
The Better curve and Worse curve showed that our approach always provided better results. Our approach achieved most improvement for the middle ones. The reason was that parsing the sentence having too many unknown words was very difficult.",
"cite_spans": [],
"ref_spans": [
{
"start": 262,
"end": 270,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 1456,
"end": 1464,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Improvement relative to dependency length",
"sec_num": "7.1"
},
{
"text": "In this section, we listed the improvements relative to POS tags of paired words having a dependency relation. Table 6 shows the accuracies of baseline and OURS on TOP 20 POS pairs (ordered by the frequencies of their occurrences in testing data), where \"A1\" refers to the accuracy of baseline, \"A2\" refers to the accuracy of OURS, and \"Pairs\" is the POS pairs of dependent-head. : Improvement relative to POS pairs For example, \"NN-VV\" means that \"NN\" is the POS of the dependent and \"VV\" is the POS of the head. And baseline yielded 79.61% accuracy and OURS yielded 81.90% (2.29% higher) on \"NN-VV\". From the table, we found that our approach worked well for most POS pairs (better for eleven pairs, no change for six, and worse for three).",
"cite_spans": [],
"ref_spans": [
{
"start": 111,
"end": 118,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Improvement relative to POS pairs",
"sec_num": "7.3"
},
{
"text": "This paper presents a simple but effective approach to adapt dependency parser by using unlabeled target data. We extract the information on shorter dependencies in an unlabeled data parsed by a basic parser to help parse longer distance words. The experimental results show that our approach significantly outperforms baseline system and current state of the art adaptation techniques. There are a lot of ways in which this research could be continued. First, we can apply our approach to other languages because our approach is independent on language. Second, we can enlarge the unlabeled data set to obtain more word pairs to provide more information for the parsers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "F 1 = 2 \u00d7 precision \u00d7 recall/(precision + recall)where precision is the percentage of predicted arcs of length d that are correct and recall is the percentage of gold standard arcs of length d that are correctly predicted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
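The length-binned F1 defined in this footnote can be sketched directly. Arcs are (dependent, head) index pairs grouped by absolute dependency length; the representation and helper names are illustrative assumptions.

```python
from collections import defaultdict

def arcs_by_length(heads):
    """Group arcs (dep, head) by |dep - head|, skipping ROOT (head == 0).
    Heads are 1-based over tokens."""
    bins = defaultdict(set)
    for dep, head in enumerate(heads, start=1):
        if head != 0:
            bins[abs(dep - head)].add((dep, head))
    return bins

def f1_at_length(gold_heads, pred_heads, d):
    """F1 = 2PR/(P+R) over arcs of length d: P = correct predicted arcs /
    predicted arcs, R = correct predicted arcs / gold arcs."""
    gold = arcs_by_length(gold_heads)[d]
    pred = arcs_by_length(pred_heads)[d]
    if not gold or not pred:
        return 0.0
    correct = len(gold & pred)
    precision = correct / len(pred)
    recall = correct / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = [2, 0, 2, 6, 6, 3]
pred = [2, 0, 2, 6, 3, 3]
print(round(f1_at_length(gold, pred, 1), 2))  # 0.8
```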
{
"text": "An unknown word is a word that is not included in training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "More detailed information can be found at http://www.cis.upenn.edu/\u02dcchinese/.4 More detailed information can be found at http://www.icl.pku.edu.5 More detailed information can be found at http://rocling.iis.sinica.edu.tw/CKIP/index.htm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Multilingual dependency parsing and domain adaptation using DeSR",
"authors": [
{
"first": "Giuseppe",
"middle": [],
"last": "Attardi",
"suffix": ""
},
{
"first": "Felice",
"middle": [],
"last": "Dell'orletta",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Simi",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "1112--1118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Attardi, Giuseppe, Felice Dell'Orletta, Maria Simi, Atanas Chanev, and Massimiliano Ciaramita. 2007. Multilingual dependency parsing and domain adap- tation using DeSR. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, pages 1112-1118.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "TnT-a statistical part-of-speech tagger",
"authors": [
{
"first": "T",
"middle": [],
"last": "Brants",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 6th Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "224--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brants, T. 2000. TnT-a statistical part-of-speech tag- ger. Proceedings of the 6th Conference on Applied Natural Language Processing, pages 224-231.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "LIBSVM: a library for support vector machines",
"authors": [
{
"first": "C",
"middle": [
"C"
],
"last": "Chang",
"suffix": ""
},
{
"first": "C",
"middle": [
"J"
],
"last": "Lin",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chang, C.C. and C.J. Lin. 2001. LIBSVM: a library for support vector machines. Software available at http://www. csie. ntu. edu. tw/cjlin/libsvm.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Dependency parsing with short dependency relations in unlabeled data",
"authors": [
{
"first": "W",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Uchimoto",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, W., D. Kawahara, K. Uchimoto, Y. Zhang, and H. Isahara. 2008. Dependency parsing with short dependency relations in unlabeled data. In Proceed- ings of IJCNLP 2008.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Frustratingly easy domain adaptation",
"authors": [
{
"first": "Iii",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL 2007",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daum\u00e9 III, Hal. 2007. Frustratingly easy domain adap- tation. In Proceedings of ACL 2007, Prague, Czech Republic.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Frustratingly hard domain adaptation for dependency parsing",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Blitzer",
"suffix": ""
},
{
"first": "Partha",
"middle": [
"Pratim"
],
"last": "Talukdar",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Graca",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "1051--1055",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dredze, Mark, John Blitzer, Partha Pratim Taluk- dar, Kuzman Ganchev, Jo\u00e3o Graca, and Fernando Pereira. 2007. Frustratingly hard domain adap- tation for dependency parsing. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, pages 1051-1055.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Reranking and self-training for parser adaptation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Mcclosky",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of Coling-ACL",
"volume": "",
"issue": "",
"pages": "337--344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McClosky, D., E. Charniak, and M. Johnson. 2006. Reranking and self-training for parser adaptation. In Proceedings of Coling-ACL, pages 337-344.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Characterizing the errors of data-driven dependency parsing models",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "122--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McDonald, Ryan and Joakim Nivre. 2007. Character- izing the errors of data-driven dependency parsing models. In Proceedings of EMNLP-CoNLL, pages 122-131.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Multilingual dependency analysis with a two-stage discriminative parser",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Lerman",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of CoNLL-X",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McDonald, Ryan, Kevin Lerman, and Fernando Pereira. 2006. Multilingual dependency analysis with a two-stage discriminative parser. In Proceed- ings of CoNLL-X, New York City, June.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Labeled pseudo-projective dependency parsing with support vector machines",
"authors": [
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nilsson",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Eryigit",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Marinov",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nivre, J., J. Hall, J. Nilsson, G. Eryigit, and S Mari- nov. 2006. Labeled pseudo-projective dependency parsing with support vector machines. In CoNLL-X.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The CoNLL 2007 shared task on dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mc-Donald",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Nilsson",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "915--932",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nivre, Joakim, Johan Hall, Sandra K\u00fcbler, Ryan Mc- Donald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007. The CoNLL 2007 shared task on de- pendency parsing. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, pages 915-932.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "An efficient algorithm for projective dependency parsing",
"authors": [
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of IWPT2003",
"volume": "",
"issue": "",
"pages": "149--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nivre, J. 2003. An efficient algorithm for projective dependency parsing. In Proceedings of IWPT2003, pages 149-160.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Self-training for enhancement and domain adaptation of statistical parsers trained on small datasets",
"authors": [
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reichart, Roi and Ari Rappoport. 2007. Self-training for enhancement and domain adaptation of statistical parsers trained on small datasets. In Proceedings of ACL, Prague, Czech Republic, June.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Dependency parsing and domain adaptation with LR models and parser ensembles",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "1044--1050",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sagae, Kenji and Jun'ichi Tsujii. 2007. Dependency parsing and domain adaptation with LR models and parser ensembles. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, pages 1044-1050.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Statistical dependency analysis with support vector machines",
"authors": [
{
"first": "H",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of IWPT2003",
"volume": "",
"issue": "",
"pages": "195--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yamada, H. and Y. Matsumoto. 2003. Statistical de- pendency analysis with support vector machines. In Proceedings of IWPT2003, pages 195-206.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Figure 2shows the results(F 1 The boy saw a redhat .",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "An example for dependency relations.",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "Performance as a function of dependency length 7.2 Improvement relative to unknown words",
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"text": "Performance as a function of number of unknown words",
"uris": null
},
"TABREF0": {
"html": null,
"num": null,
"text": "lists the information of data sets. From the table, we found that the PFR data was",
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\">Num Of Words Unknown Word Rate</td></tr><tr><td>STrain</td><td>17983</td><td>-</td></tr><tr><td>STest</td><td>1829</td><td>9.73</td></tr><tr><td>TTest</td><td>1783</td><td>30.79</td></tr><tr><td>CKIP</td><td>140k</td><td>-</td></tr><tr><td>STest</td><td>1829</td><td>11.42</td></tr><tr><td>TTest</td><td>1783</td><td>8.63</td></tr><tr><td>PFR</td><td>123k</td><td>-</td></tr><tr><td>STest</td><td>1829</td><td>8.58</td></tr><tr><td>TTest</td><td>1783</td><td>15.64</td></tr></table>"
},
"TABREF2": {
"html": null,
"num": null,
"text": "",
"type_str": "table",
"content": "<table><tr><td/><td/><td>: Basic results</td></tr><tr><td colspan=\"3\">l max SINO(GOLD) SINO(AUTO)</td></tr><tr><td>1</td><td>77.84</td><td>77.80</td></tr><tr><td>3</td><td>78.03</td><td>77.95</td></tr><tr><td>5</td><td>78.22</td><td>78.17</td></tr><tr><td>7</td><td>78.40</td><td>78.11</td></tr><tr><td>9</td><td>78.38</td><td>78.13</td></tr><tr><td>\u221e</td><td>78.35</td><td>78.09</td></tr></table>"
},
"TABREF5": {
"html": null,
"num": null,
"text": "",
"type_str": "table",
"content": "<table/>"
}
}
}
}