{
"paper_id": "C16-1029",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:02:51.970929Z"
},
"title": "Consistent Word Segmentation, Part-of-Speech Tagging and Dependency Labelling Annotation for Chinese Language",
"authors": [
{
"first": "Mo",
"middle": [],
"last": "Shen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Google Inc",
"location": {
"region": "California",
"country": "USA"
}
},
"email": ""
},
{
"first": "Wingmui",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Chinese University of Hong Kong",
"location": {
"settlement": "Shatin, Hong Kong"
}
},
"email": ""
},
{
"first": "Hyunjeong",
"middle": [],
"last": "Choe",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Google Inc",
"location": {
"region": "California",
"country": "USA"
}
},
"email": ""
},
{
"first": "Chenhui",
"middle": [],
"last": "Chu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Japan Science and Technology Agency",
"location": {}
},
"email": "chu@pa.jst.jp"
},
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Kyoto University",
"location": {
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Kyoto University",
"location": {
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we propose a new annotation approach to Chinese word segmentation, part-of-speech (POS) tagging and dependency labelling that aims to overcome the two major issues in traditional morphology-based annotation: inconsistency and data sparsity. We re-annotate the Penn Chinese Treebank 5.0 (CTB5) and demonstrate the advantages of this approach compared to the original CTB5 annotation through word segmentation, POS tagging and machine translation experiments.",
"pdf_parse": {
"paper_id": "C16-1029",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we propose a new annotation approach to Chinese word segmentation, part-of-speech (POS) tagging and dependency labelling that aims to overcome the two major issues in traditional morphology-based annotation: inconsistency and data sparsity. We re-annotate the Penn Chinese Treebank 5.0 (CTB5) and demonstrate the advantages of this approach compared to the original CTB5 annotation through word segmentation, POS tagging and machine translation experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The definition of \"word\" is an open problem in Chinese linguistics. In previous studies of Chinese corpus annotation (Duan et al., 2003; Huang et al., 1997; Xia, 2000), the judgement of word-hood of a meaningful string is based on the analysis of morphology: a morpheme in Chinese is defined as the smallest combination of meaning and phonetic sound in the Chinese language, and can be classified into two major types: (1) free morphemes, which can either be words by themselves or form words with other morphemes; and (2) bound morphemes, which can only form words by attaching to other morphemes.",
"cite_spans": [
{
"start": 117,
"end": 136,
"text": "(Duan et al., 2003;",
"ref_id": "BIBREF3"
},
{
"start": 137,
"end": 156,
"text": "Huang et al., 1997;",
"ref_id": "BIBREF4"
},
{
"start": 157,
"end": 167,
"text": "Xia, 2000)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An issue with word definition using morpheme classification is that it potentially undermines the consistency of the representation of words. For example, \"\u8bba\" (theory) is a bound morpheme, therefore the string \"\u8fdb\u5316\u8bba\" (theory of evolution) is treated as a word; on the other hand the string \"\u8fdb\u5316 | \u7406\u8bba\" (theory of evolution) is treated as two words, despite the fact that the two strings have the same meaning and structure. In another example, \"\u8005\" (person) is considered a bound morpheme, therefore \"\u53cd\u5bf9\u81ea\u7531\u8d38\u6613\u8005\" (people who are against free trade) is treated as one word, while the string without the bound morpheme, i.e. \"\u53cd\u5bf9 | \u81ea\u7531 | \u8d38\u6613\" (be against free trade), can only be treated as a phrase of three words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The morphology-based word definition can also make the data sparsity problem worse in corpus annotation. As evidence, in the Penn Chinese Treebank 5.0 (CTB5), an annotated corpus widely used to train Chinese morphological analysis systems, we found that one of the major sources of the out-of-vocabulary (OOV) words is the compounds that end with a monosyllabic bound morpheme. For example, the compounds \u5229\u7528\u7387 (utility rate) and \u6b21\u54c1\u7387 (rate of defective product) end with the bound morpheme \u7387 (rate); \u5b8c\u6210\u5ea6 (degree of completion) and \u6d3b\u8dc3\u5ea6 (degree of activity) end with the bound morpheme \u5ea6 (degree); \u6301\u7eed\u6027 (sustainability) and \u6325\u53d1\u6027 (volatility) end with the bound morpheme \u6027 (property). While these compounds are sparse in the corpus, the morphemes which they ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Re-annotation \u526f\u4e3b\u5e2d/NN (vice president) \u526f/JJ (vice) \u4e3b\u5e2d/NN (president) \u900f\u660e\u5ea6/NN (transparency) \u900f\u660e/JJ (transparent) \u5ea6/SFN (degree) \u975e\u751f\u4ea7\u6027/NN (unproductiveness) \u975e/JJ (none) \u751f\u4ea7/VV (produce) \u6027/SFN (property) \u4e2d\u592e\u96c6\u6743\u5f0f/JJ (politically centralized) \u4e2d\u592e/NN (center) \u96c6\u6743/NN (centralization) \u5f0f/SFA (type) Table 2 . Some examples of the word and POS annotation in the original CTB5 and our re-annotation. consist of can be frequently observed; this means these OOV words can be observed and learnt by a word segmenter if we split the morphemes as individual words in the annotation.",
"cite_spans": [],
"ref_spans": [
{
"start": 283,
"end": 290,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "CTB5 Example",
"sec_num": null
},
{
"text": "In this paper, we propose a simple annotation approach for Chinese word segmentation that overcomes the two issues found in the traditional morphology-based annotation approach: inconsistency and data sparsity. We further propose a tagset for part-of-speech tagging and a label set for dependency labelling, which are consistent with our word segmentation strategy and capture more Chinese-specific syntactic structures. We re-annotate the entire CTB5 using this approach, and through word segmentation, POS tagging and machine translation experiments we demonstrate the advantages of our annotation approach compared to the original approach adopted in CTB5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CTB5 Example",
"sec_num": null
},
{
"text": "The remainder of this paper is organized as follows: in section 2 we will describe our proposed annotation approach to word segmentation; in section 3 we will present a POS tagset which is consistent with our word segmentation strategy and a new dependency label set; in section 4 we will demonstrate the effectiveness of our approach compared to the original CTB5 through experiments; we will conclude our work in the last section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CTB5 Example",
"sec_num": null
},
{
"text": "We categorize the words in CTB5 into three categories: Common words, names, and idioms. For names and idioms, we keep them as individual words since their word boundaries are relatively easy to recognize and the consistency in manual annotation can be achieved with less effort. We will mainly focus on describing the treatment of common words in this section. Table 3 . Proposed tagset for part-of-speech tagging. The underlined characters in the examples correspond to the tags on the left-most column. The CTB POS are also shown.",
"cite_spans": [],
"ref_spans": [
{
"start": 361,
"end": 368,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Segmentation Annotation",
"sec_num": "2"
},
{
"text": "The key in our method to define the boundaries of common words is the character-level POS pattern. Character-level POS has been introduced in previous studies (Zhang et al., 2013; Shen et al., 2014) which captures the grammatical roles of Chinese characters inside words; we further develop this idea and use it as a criterion in word definition.",
"cite_spans": [
{
"start": 159,
"end": 179,
"text": "(Zhang et al., 2013;",
"ref_id": "BIBREF18"
},
{
"start": 180,
"end": 198,
"text": "Shen et al., 2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Segmentation Annotation",
"sec_num": "2"
},
{
"text": "We treat a meaningful disyllabic string as a word if it falls into one of the character-level POS patterns listed in Table 1 . The reason we focus on disyllabic patterns instead of other polysyllabic ones is that, based on our observation, meaningful strings with 3 or more syllables (other than names and idioms) are always compounds in Chinese, and therefore can be segmented into a sequence of monosyllabic and disyllabic tokens based on their internal structures. On the other hand, the internal structure of a disyllabic token, though it still exists, is more implicit and harder to describe with syntactic relations; we believe that further segmenting these disyllabic strings would increase the difficulty of subsequent tasks, such as dependency parsing.",
"cite_spans": [],
"ref_spans": [
{
"start": 117,
"end": 124,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Word Segmentation Annotation",
"sec_num": "2"
},
{
"text": "Following this strategy, a polysyllabic word can then be segmented based on its structure. This is illustrated with examples in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 128,
"end": 135,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Segmentation Annotation",
"sec_num": "2"
},
{
"text": "To perform POS tagging re-annotation on CTB5 together with our proposed word segmentation approach, we use a POS tagset which is derived from the one used in the original CTB5 annotation. We show the tagset in Table 3 with a comparison of the number of occurrences of each tag in the original CTB5 and the re-annotated version, respectively. The tagset introduces several changes: First, we eliminate the use of the \"LC\" tag for locative words. This tag is assigned to all words that indicate locations and directions, such as \u4e0a (up), \u4e0b (down), \u5de6 (left), \u53f3 (right), \u5185 (inside), \u5916 (outside) etc. We instead tag these words based on their real syntactic roles in sentences, such as \"NN\" (noun), \"AD\" (adverb) or \"VV\" (verb). Second, we add three new tags into the tagset for suffixes: \"SFN\" (nominal suffix), \"SFA\" (adjectival suffix), and \"SFV\" (verbal suffix). These tags are given to monosyllabic tokens appearing at the end of compounds, which are the bound morphemes in the traditional view. Based on our observation, these tokens have the ability to determine the syntactic role of the entire compound. For example, any compound that ends with a nominal suffix \"\u5ea6\" (degree) always acts as a noun in a sentence. It should be noted that because of this characteristic of suffixes, we can tag the children of suffixes in compounds based on their meaning but not their syntactic roles. We show some examples in Table 2 to illustrate our POS tagging strategy for compounds.",
"cite_spans": [],
"ref_spans": [
{
"start": 210,
"end": 217,
"text": "Table 3",
"ref_id": null
},
{
"start": 1411,
"end": 1418,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Part-of-Speech and Dependency Label Set",
"sec_num": "3"
},
{
"text": "In Table 4 we present a dependency label set developed based on the Stanford Dependencies (De Marneffe et al., 2006) and its Chinese version (Chang et al., 2009), which defines 45 dependency relations for Chinese sentences. This label set is also closely related to the Universal Dependencies 1 , with many of the labels compatible with each other. We explain the major characteristics of our label set in the following subsection.",
"cite_spans": [
{
"start": 90,
"end": 116,
"text": "(De Marneffe et al., 2006)",
"ref_id": "BIBREF11"
},
{
"start": 141,
"end": 161,
"text": "(Chang et al., 2009)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Part-of-Speech and Dependency Label Set",
"sec_num": "3"
},
{
"text": "The label \"dislocated\" is originally defined in the Universal Dependencies for languages such as Japanese to describe the syntactic relation of words in a topic-comment structure, but is not defined for Chinese. However, the topic-comment structure appears frequently in Chinese sentences, for example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Specific Labels dislocated",
"sec_num": "3.1"
},
{
"text": "1. \u9019/this \u672c/-measure-\u66f8/book \u4ed6/he \u8cb7/buy \u7684/-particle-(This book, he bought it)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Specific Labels dislocated",
"sec_num": "3.1"
},
{
"text": "In this sentence, \u8fd9\u672c\u4e66 (this book) is the topic and \u4ed6\u4e70\u7684 (he bought) is the comment. One common view of the syntactic structure of this sentence is that \u4ed6 (he) is the subject of the predicate \u4e70 (buy), and \u4e66 (book) is the direct object. This treatment sees a topic-comment structure as having an OSV (object-subject-verb) word order, which is acceptable; however, it has some problems in certain cases, for example: 2. \u9019/this \u672c/-measure-\u66f8/book \u4ed6/he \u8cb7/buy \u7684/-particle-\u6628\u5929/yesterday \u4e0d\u898b/disappear \u4e86/-particle-(This book that he bought disappeared yesterday)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Specific Labels dislocated",
"sec_num": "3.1"
},
{
"text": "In this sentence, \u4e66 (book) is still the direct object of \u4e70 (buy), while it is also the subject of \u4e0d\u89c1 (disappear). Because of the nature of the dependency grammar we adopted, for such a structure we would have to choose one relation for \u4e66 (book), either \"nsubj\" or \"dobj\", and discard the other relation, which causes a loss of the syntactic information encoded in the parse tree. Moreover, the OSV word order cannot explain all topic-comment structures, such as the following example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Specific Labels dislocated",
"sec_num": "3.1"
},
{
"text": "3. \u9019/this \u5834/-measure-\u706b/fire \u5e78\u8667/fortunately \u6d88\u9632/firefighting \u968a/team \u4f86/come \u5f97/-particle-\u65e9/early (This fire, fortunately the firefighters came in time) Table 4 . Proposed dependency label set.",
"cite_spans": [],
"ref_spans": [
{
"start": 148,
"end": 155,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Chinese Specific Labels dislocated",
"sec_num": "3.1"
},
{
"text": "Unlike in the other two examples, the topic here, \u9019\u5834\u706b (this fire), is not the direct object of the verb in the comment, \u5e78\u8667\u6d88\u9632\u968a\u4f86\u5f97\u65e9 (fortunately the firefighters came in time).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Specific Labels dislocated",
"sec_num": "3.1"
},
{
"text": "To overcome these difficulties, we employ a different view which treats the topic-comment structure as having double subjects in an SSV word order. We define the first subject, \u8fd9\u672c\u4e66 (this book) in example 2, as the head in a \"dislocated\" relation, and the subject-verb phrase, \u4ed6\u4e70\u7684 (he bought) in example 2, as the modifier. The head in this dislocated relation can then form an \"nsubj\" (nominal subject) relation with the main predicate of the sentence, \u4e0d\u89c1 (disappear). Similarly, in example 3, the topic and the comment still form a dislocated relation even though the topic is not a direct object of the verb in the comment. prt and prep We define the \"prt\" relation in two ways: i. A relation between a verb and a particle. For example, \u60f3\u50cf (imagine) is the head in a \"prt\" relation of \u6240 (particle) in the sentence \u9019\u662f\u4ed6\u5011\u6240\u4e0d\u80fd\u60f3\u50cf\u7684 (this is what they can't imagine). ii. A relation between a verb and its succeeding complement. For example, \u6253\u6383 (clean) is the head in a \"prt\" relation of \u5b8c (finish) in the sentence \u623f\u9593\u6253\u6383\u5b8c\u4e86 (the room has been cleaned).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Specific Labels dislocated",
"sec_num": "3.1"
},
{
"text": "We use the \"prt\" relation in the second case to capture the predicate-complement structure in Chinese. The verb \u5b8c (finish) in the second example above functions to complement the meaning of the main verb, \u6253\u6383 (clean), and the sentence is still grammatical when the complement verb is removed: \u623f\u9593 \u6253\u6383\u4e86 (the room is cleaned).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Specific Labels dislocated",
"sec_num": "3.1"
},
{
"text": "The complement verb sometimes also functions as a coverb in a serial verb construction, which takes its own direct object. For example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Specific Labels dislocated",
"sec_num": "3.1"
},
{
"text": "4. \u628a/-auxiliary-\u6578\u64da/data \u6574\u7406/summarize \u6210/become \u5831\u544a/report (summarize the data into a report)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Specific Labels dislocated",
"sec_num": "3.1"
},
{
"text": "Here the two verbs \u6574\u7406 (summarize) and \u6210(become) form a \"prt\" relation, while they are the heads of \u6578\u64da (data) and \u5831\u544a (report) in the \"dobj\" relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Specific Labels dislocated",
"sec_num": "3.1"
},
{
"text": "A difficulty with labelling \"prt\" is that it can be easily confused with the \"prep\" (prepositional modifier) relation. For example, one can argue that \u6210 (become) is a preposition instead of a verb and should be tagged as IN, so that the relation between \u6574\u7406 (summarize) and \u6210 (become) would be \"prep\". To overcome this ambiguity, we apply a simple test: If the phrase headed by the word with a VV vs. IN ambiguity can be moved to a position before the main verb, then this word is a preposition and a prepositional modifier of the main verb; otherwise it is a verb. Here, since the phrase \"\u6210 \u5831\u544a\" (into report) cannot be moved to the position before \u6574\u7406 (summarize), it should in fact be a verb phrase, not a prepositional phrase. suff We define the suffix relation in a compound which has a \"stem-suffix\" structure. The suffix word with a POS tag SFN, SFA, or SFV is the root of the subtree formed by the words in the compound. It has one and only one child in this subtree, which is the head of the \"stem\", and the dependency relation between them is labelled as \"suff\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Specific Labels dislocated",
"sec_num": "3.1"
},
{
"text": "The motivation for employing the \"suff\" label is to relieve the data sparseness problem of word forms in annotated corpora. Compounds, especially those with a \"stem-suffix\" structure, are a major source of new words in the Chinese language. These compounds, however, often share a limited set of suffix words. We think it is more effective for a parser to learn from features with word forms by treating the suffix words as the heads of compounds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Specific Labels dislocated",
"sec_num": "3.1"
},
{
"text": "We re-annotated the entire CTB5 with our proposed word segmentation and POS tagging annotation strategies. We further re-annotated 3,000 sentences which are randomly sampled from the training set of CTB5 using our proposed dependency label set. This re-annotated set is compared with the same sentences under the original annotation in a machine translation experiment in section 4.3. Table 6 . Experimental results for morphological analysis on CTB5.",
"cite_spans": [],
"ref_spans": [
{
"start": 385,
"end": 392,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Re-annotated Corpus",
"sec_num": "4.1"
},
{
"text": "To evaluate the consistency of our annotation, 4 trained annotators were divided into two equal groups to perform 2-way annotation on a small subset (first 100 sentences in files 301-325), and each pair of annotators was assigned 50 sentences. The inter-annotator agreement is 99.10% for segmentation, 98.37% for POS tagging, and 95.62% for dependency labeling. Table 5 shows some of the statistics of the original and the re-annotated CTB5. We split CTB5 in the same data division as in previous studies (Jiang et al., 2008a; Jiang et al., 2008b; Kruengkrai et al., 2009; Zhang and Clark, 2010; Sun, 2011) . The training, development and test sets have 18,089, 350 and 348 sentences, respectively. Compared to the original CTB5, the re-annotated training set has a lower percentage of unknown words and unknown word-POS pairs found in the corresponding test set. This is consistent with our observation that compounds with internal structures are one of the major sources of OOV words.",
"cite_spans": [
{
"start": 505,
"end": 526,
"text": "(Jiang et al., 2008a;",
"ref_id": "BIBREF5"
},
{
"start": 527,
"end": 547,
"text": "Jiang et al., 2008b;",
"ref_id": "BIBREF6"
},
{
"start": 548,
"end": 572,
"text": "Kruengkrai et al., 2009;",
"ref_id": "BIBREF9"
},
{
"start": 573,
"end": 595,
"text": "Zhang and Clark, 2010;",
"ref_id": "BIBREF19"
},
{
"start": 596,
"end": 606,
"text": "Sun, 2011)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 362,
"end": 369,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Re-annotated Corpus",
"sec_num": "4.1"
},
{
"text": "We compared the performance of a state-of-the-art joint word segmentation and part-of-speech tagging system (Kruengkrai et al., 2009) on the original and our re-annotated CTB5. We used the position-of-character (POC) tagset and the baseline feature set described in (Shen et al., 2014). We trained all models using the averaged perceptron (Collins, 2002), which is an efficient and stable online learning algorithm. The models applied to all test sets are those that result in the best performance on the development sets. To learn the characteristics of unknown words, we built the system's lexicon using only the words in the training data that appear at least twice.",
"cite_spans": [
{
"start": 108,
"end": 133,
"text": "(Kruengkrai et al., 2009)",
"ref_id": "BIBREF9"
},
{
"start": 266,
"end": 285,
"text": "(Shen et al., 2014)",
"ref_id": "BIBREF15"
},
{
"start": 339,
"end": 354,
"text": "(Collins, 2002)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis Experiments",
"sec_num": "4.2"
},
{
"text": "We use precision, recall and the F-score to measure the performance of the systems. Precision (P) is defined as the percentage of output tokens that are consistent with the gold standard test data, and recall (R) is the percentage of tokens in the gold standard test data that are recognized in the output. The balanced F-score (F) is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis Experiments",
"sec_num": "4.2"
},
{
"text": "2\u2022P\u2022R / (P+R).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis Experiments",
"sec_num": "4.2"
},
{
"text": "We compared the performance of the morphological analyzer on the original and the re-annotated CTB5. The results of the word segmentation experiment and the joint experiment of segmentation and POS tagging are shown in Table 6(a) and Table 6(b), respectively. Each row in these tables shows the performance of the same system trained on the corresponding corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis Experiments",
"sec_num": "4.2"
},
{
"text": "For \"Re-annotated-partial\" in Table 6 (a), we applied a different setting in order to directly compare the annotation consistency and data sparsity between the two corpora: We used the training set from the re-annotated corpus to train the system but the test set from the original corpus in the evaluation. To make the evaluation meaningful, we added an extra criterion when calculating the precision and the Table 8 . Experimental results for Chinese-Japanese machine translation on ASPEC corpus using KyotoEBMT system.",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 37,
"text": "Table 6",
"ref_id": null
},
{
"start": 410,
"end": 417,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Morphological Analysis Experiments",
"sec_num": "4.2"
},
{
"text": "recall: If the outermost boundaries of a sequence (two or more) of output tokens are consistent with a token in the test set, we consider that the output correctly identifies this token in the test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis Experiments",
"sec_num": "4.2"
},
{
"text": "The results show that the morphological analyzer obtains higher accuracy on the re-annotated corpus in both the word segmentation (0.48 points absolute in F-score) and the joint (0.86 points absolute in F-score) experiments. Furthermore, in the word segmentation experiment \"Re-annotated-partial\", where we mapped the output of the system trained on the re-annotated training data to the original CTB5 test set, the accuracy is significantly higher 2 than that of \"Original\", which demonstrates the better consistency of our re-annotated corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological Analysis Experiments",
"sec_num": "4.2"
},
{
"text": "To show that a morphological analysis system and a dependency parsing system can both benefit from our re-annotation, we conducted two sets of Chinese-to-Japanese machine translation experiments where a morphological analyzer and a dependency parser are used respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation Experiments",
"sec_num": "4.3"
},
{
"text": "The parallel corpus we used is the Chinese-Japanese part of the Asian Scientific Paper Excerpt Corpus (ASPEC) 3 , containing 672k sentence pairs. We used 2,090 and 2,107 additional sentence pairs for tuning and testing, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation Experiments",
"sec_num": "4.3"
},
{
"text": "In the first set of experiments, we segmented the Japanese sentences using JUMAN (Kurohashi et al., 1994) , and the Chinese sentences using the same morphological analyzer described in the last subsection. For decoding, we used the state-of-the-art phrase based statistical machine translation toolkit Moses (Koehn et al., 2007) with default options. We trained the 5-gram language models on the target side of the parallel corpora using the SRILM toolkit 4 with interpolated Kneser-Ney discounting. Tuning was performed by minimum error rate training (MERT) (Och, 2003) , and it was re-run for every experiment.",
"cite_spans": [
{
"start": 81,
"end": 105,
"text": "(Kurohashi et al., 1994)",
"ref_id": "BIBREF10"
},
{
"start": 308,
"end": 328,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF8"
},
{
"start": 559,
"end": 570,
"text": "(Och, 2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation Experiments",
"sec_num": "4.3"
},
{
"text": "In the second set of experiments, we used the same morphological analyzers to segment and tag the POS of Japanese and Chinese sentences as in the first set. We further parsed the dependency structures of the Japanese sentences using KNP (Kawahara and Kurohashi, 2006) , a lexicalized probabilistic dependency parser, and for the Chinese sentences we used a second-order graph-based parser proposed in (Shen et al., 2012) . For decoding, we used the tree-to-tree example-based machine translation framework KyotoEBMT 5 (Richardson et al., 2015) with default options.",
"cite_spans": [
{
"start": 237,
"end": 267,
"text": "(Kawahara and Kurohashi, 2006)",
"ref_id": "BIBREF2"
},
{
"start": 401,
"end": 420,
"text": "(Shen et al., 2012)",
"ref_id": "BIBREF14"
},
{
"start": 518,
"end": 543,
"text": "(Richardson et al., 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation Experiments",
"sec_num": "4.3"
},
{
"text": "We report results on the test set using BLEU-4 score, which was evaluated using the multi-bleu.perl script in Moses based on Juman segmentations. The significance test was performed using the bootstrap resampling method proposed by Koehn (2004) .",
"cite_spans": [
{
"start": 232,
"end": 244,
"text": "Koehn (2004)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation Experiments",
"sec_num": "4.3"
},
{
"text": "In Table 7 we compare the performance of three Moses models: In \"Character\" we used a simple segmentation strategy for the Chinese sentences where we treated each character as a token; in \"Original\" and \"Re-annotated\" we segmented the Chinese sentences using the corresponding models described in the last subsection. The results show that, with the same underlying machine translation system, the segmenter trained on the original CTB5 failed to help the system outperform the simple character-based segmentation, while the system using the segmenter trained on our re-annotated CTB5 significantly outperformed both \"Character\" 6 and \"Original\" 7 . (Footnotes: 2 < 0.05 in McNemar's test. 3 http://lotus.kuee.kyoto-u.ac.jp/ASPEC/ 4 http://www.speech.sri.com/projects/srilm 5 http://nlp.ist.i.kyoto-u.ac.jp/EN/index.php?KyotoEBMT)",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Machine Translation Experiments",
"sec_num": "4.3"
},
{
"text": "In Table 8 we show the result of the experiment with KyotoEBMT, a tree-to-tree machine translation system which requires unlabeled dependency annotation in model training. 3,000 sentences with the original and the re-annotated dependency labels were used for training the parsers in the \"Original\" and \"Re-annotated\" settings, respectively. The result shows that the model \"Re-annotated\", which used the training set with the proposed annotation, significantly outperformed 8 the baseline model \"Original\" by 0.97 points in BLEU-4 score.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Machine Translation Experiments",
"sec_num": "4.3"
},
{
"text": "We have proposed a new annotation approach for Chinese word segmentation, part-of-speech tagging, and dependency labelling. By re-annotating the CTB5 and conducting word segmentation, POS tagging and machine translation experiments, we have demonstrated that this approach achieves higher annotation consistency and lower data sparsity compared to the original annotation of CTB5. We could not show a comparison in dependency parsing experiments as we currently have only 3,000 annotated sentences; this experiment will be included in our future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "http://universaldependencies.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Discriminative Reordering with Chinese Grammatical Relations Features",
"authors": [
{
"first": "Pi-Chuan",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Huihsin",
"middle": [],
"last": "Tseng",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Third Workshop on Syntax and Structure in Statistical Translation",
"volume": "",
"issue": "",
"pages": "51--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pi-Chuan Chang, Huihsin Tseng, Dan Jurafsky, and Christopher D. Manning. 2009. Discriminative Reordering with Chinese Grammatical Relations Features. In Proceedings of the Third Workshop on Syntax and Structure in Statistical Translation, pages 51-59.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2002. Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms. In Proceedings of EMNLP, pages 1-8.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Fully-Lexicalized Probabilistic Model for Japanese Syntactic and Case Structure Analysis",
"authors": [
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL",
"volume": "",
"issue": "",
"pages": "176--183",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daisuke Kawahara and Sadao Kurohashi. 2006. A Fully-Lexicalized Probabilistic Model for Japanese Syntactic and Case Structure Analysis. In Proceedings of the Human Language Technology Conference of the NAACL, pages 176-183.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Chinese word segmentation at Peking University",
"authors": [
{
"first": "Huiming",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Xiaojing",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Shiwen",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the second SIGHAN workshop on Chinese language processing",
"volume": "",
"issue": "",
"pages": "152--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "HuiMing Duan, XiaoJing Bai, BaoBao Chang, and ShiWen Yu. 2003. Chinese word segmentation at Peking University. In Proceedings of the second SIGHAN workshop on Chinese language processing, pages 152-155.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Segmentation Standard for Chinese Natural Language Processing. Computational Linguistics and Chinese Language Processing",
"authors": [
{
"first": "Churen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kehjiann",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Fengyi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Lili",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "2",
"issue": "",
"pages": "47--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ChuRen Huang, KehJiann Chen, FengYi Chen, and LiLi Chang. 1997. Segmentation Standard for Chinese Natural Language Processing. Computational Linguistics and Chinese Language Processing vol. 2, no. 2, August 1997, pages 47-62.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Cascaded Linear Model for Joint Chinese Word Segmentation and Part-of-speech Tagging",
"authors": [
{
"first": "Wenbin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yajuan",
"middle": [],
"last": "L\u00fc",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenbin Jiang, Liang Huang, Qun Liu, and Yajuan L\u00fc. 2008a. A Cascaded Linear Model for Joint Chinese Word Segmentation and Part-of-speech Tagging. In Proceedings of ACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Word Lattice Reranking for Chinese Word Segmentation and Partof-speech Tagging",
"authors": [
{
"first": "Wenbin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenbin Jiang, Haitao Mi, and Qun Liu. 2008b. Word Lattice Reranking for Chinese Word Segmentation and Part- of-speech Tagging. In Proceedings of COLING.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Statistical Significance Tests for Machine Translation Evaluation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of EMNLP 2004",
"volume": "",
"issue": "",
"pages": "388--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2004. Statistical Significance Tests for Machine Translation Evaluation. In Proceedings of EMNLP 2004, pages 388-395.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Moses: Open Source Toolkit for Statistical Machine Translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion, Demo and Poster Session",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion, Demo and Poster Session, pages 177-180.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "An Error-Driven Word-Character Hybrid Model for Joint Chinese Word Segmentation and POS Tagging",
"authors": [
{
"first": "Canasai",
"middle": [],
"last": "Kruengkrai",
"suffix": ""
},
{
"first": "Kiyotaka",
"middle": [],
"last": "Uchimoto",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Kazama",
"suffix": ""
},
{
"first": "Yiou",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Torisawa",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "513--521",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Canasai Kruengkrai, Kiyotaka Uchimoto, Jun'ichi Kazama, Yiou Wang, Kentaro Torisawa, and Hitoshi Isahara. 2009. An Error-Driven Word-Character Hybrid Model for Joint Chinese Word Segmentation and POS Tagging. In Proceedings of ACL-IJCNLP, pages 513-521.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Improvements of Japanese Morphological Analyzer JUMAN",
"authors": [
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "Toshihisa",
"middle": [],
"last": "Nakamura",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
},
{
"first": "Nagao",
"middle": [],
"last": "Makoto",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the International Workshop on Sharable Natural Language",
"volume": "",
"issue": "",
"pages": "22--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sadao Kurohashi, Toshihisa Nakamura, Yuji Matsumoto, and Nagao Makoto. 1994. Improvements of Japanese Morphological Analyzer JUMAN. In Proceedings of the International Workshop on Sharable Natural Language, pages 22-28.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Generating Typed Dependency Parses from Phrase Structure Parses",
"authors": [
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of LREC 2006",
"volume": "",
"issue": "",
"pages": "449--454",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating Typed Dependency Parses from Phrase Structure Parses. In Proceedings of LREC 2006, pages 449-454.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Minimum Error Rate Training in Statistical Machine Translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160-167.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "KyotoEBMT System Description for the 2nd Workshop on Asian Translation",
"authors": [
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
},
{
"first": "Raj",
"middle": [],
"last": "Dabre",
"suffix": ""
},
{
"first": "Chenhui",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "Fabien",
"middle": [],
"last": "Cromi\u00e8res",
"suffix": ""
},
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2nd Workshop on Asian Translation",
"volume": "",
"issue": "",
"pages": "54--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Richardson, Raj Dabre, Chenhui Chu, Fabien Cromi\u00e8res, Toshiaki Nakazawa, and Sadao Kurohashi. 2015. KyotoEBMT System Description for the 2nd Workshop on Asian Translation. In Proceedings of the 2nd Workshop on Asian Translation, pages 54-60.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A Reranking Approach for Dependency Parsing with Variable-sized Subtree Features",
"authors": [
{
"first": "Mo",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of 26th Pacific Asia Conference on Language Information and Computing",
"volume": "",
"issue": "",
"pages": "308--317",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mo Shen, Daisuke Kawahara, and Sadao Kurohashi. 2012. A Reranking Approach for Dependency Parsing with Variable-sized Subtree Features. In Proceedings of 26th Pacific Asia Conference on Language Information and Computing, pages 308-317.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Chinese Morphological Analysis with Character-level POS Tagging",
"authors": [
{
"first": "Mo",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Hongxiao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "253--258",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mo Shen, Hongxiao Liu, Daisuke Kawahara, and Sadao Kurohashi. 2014. Chinese Morphological Analysis with Character-level POS Tagging. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Short Papers), pages 253-258.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A Stacked Sub-word Model for Joint Chinese Word Segmentation and Part-of-speech Tagging",
"authors": [
{
"first": "Weiwei",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL-HLT",
"volume": "",
"issue": "",
"pages": "1385--1394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weiwei Sun. 2011. A Stacked Sub-word Model for Joint Chinese Word Segmentation and Part-of-speech Tagging. In Proceedings of ACL-HLT, pages 1385-1394.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The Segmentation Guidelines for the Penn Chinese Treebank",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Xia. 2000. The Segmentation Guidelines for the Penn Chinese Treebank (3.0). http://www.cis.upenn.edu/~chinese/segguide.3rd.ch.pdf.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Chinese Parsing Exploiting Characters",
"authors": [
{
"first": "Meishan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "125--134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meishan Zhang, Yue Zhang, Wanxiang Che, and Ting Liu. 2013. Chinese Parsing Exploiting Characters. In Proceedings of ACL, pages 125-134.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A Fast Decoder for Joint Word Segmentation and POS-tagging Using a Single Discriminative Model",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "843--852",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang and Stephen Clark. 2010. A Fast Decoder for Joint Word Segmentation and POS-tagging Using a Single Discriminative Model. In Proceedings of EMNLP, pages 843-852.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "",
"type_str": "figure"
},
"TABREF0": {
"text": "Disyllabic character-level POS patterns.",
"html": null,
"content": "<table><tr><td>POS Pattern</td><td>Example</td></tr><tr><td>pronoun + noun</td><td>\u6211\u6821 (this university)</td></tr><tr><td>locative + noun</td><td>\u540e\u95e8 (back door)</td></tr><tr><td>locative + verb</td><td>\u524d\u8ff0 (described above)</td></tr><tr><td>noun + locative</td><td>\u5ba4\u5185 (indoor)</td></tr><tr><td>pronoun + locative</td><td>\u6b64\u5916 (besides)</td></tr><tr><td>adverb + verb</td><td>\u731d\u6b7b (sudden death)</td></tr><tr><td>noun + noun</td><td>\u5382\u623f (factory plant)</td></tr><tr><td>noun + measure</td><td>\u8f66\u8f86 (vehicles)</td></tr><tr><td>adjective + noun</td><td>\u4f73\u917f (wines)</td></tr><tr><td>adjective + measure</td><td>\u9ad8\u5c42 (high level)</td></tr><tr><td>verb + verb</td><td>\u62bd\u53d6 (extract)</td></tr><tr><td>verb + particle</td><td>\u5199\u5b8c (finish writing)</td></tr><tr><td>verb + adjective</td><td>\u6253\u788e (break)</td></tr><tr><td>verb + locative</td><td>\u7efc\u4e0a (accordingly)</td></tr><tr><td>verb + noun</td><td>\u8f9e\u804c (resign)</td></tr><tr><td>adjective + adjective</td><td>\u4f18\u96c5 (elegant)</td></tr><tr><td>adverb + adjective</td><td>\u6700\u65b0 (latest)</td></tr><tr><td>determiner + noun</td><td>\u5404\u754c (all walks of life)</td></tr><tr><td colspan=\"2\">determiner + temporal \u7fcc\u65e5 (the next day)</td></tr></table>",
"type_str": "table",
"num": null
}
}
}
}