{
"paper_id": "O11-1007",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:05:35.583823Z"
},
"title": "Unsupervised Overlapping Feature Selection for Conditional Random Fields Learning in Chinese Word Segmentation",
"authors": [
{
"first": "Ting-Hao",
"middle": [],
"last": "Yang",
"suffix": "",
"affiliation": {},
"email": "tinghaoyang@iis.sinica.edu.tw"
},
{
"first": "Tian-Jian",
"middle": [],
"last": "Jiang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing-Hua University",
"location": {}
},
"email": "tmjiang@iis.sinica.edu.tw"
},
{
"first": "Chan-Hung",
"middle": [],
"last": "Kuo",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Richard Tzong-Han",
"middle": [],
"last": "Tsai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Yuan Ze University",
"location": {}
},
"email": "thtsai@saturn.yzu.edu.tw"
},
{
"first": "Wen-Lian",
"middle": [],
"last": "Hsu",
"suffix": "",
"affiliation": {},
"email": "hsu@iis.sinica.edu.tw"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This work represents several unsupervised feature selections based on frequent strings that help improve conditional random fields (CRF) model for Chinese word segmentation (CWS). These features include character-based N-gram (CNG), Accessor Variety based string (AVS), and Term Contributed Frequency (TCF) with a specific manner of boundary overlapping. For the experiment, the baseline is the 6-tag, a state-of-the-art labeling scheme of CRF-based CWS; and the data set is acquired from SIGHAN CWS bakeoff 2005. The experiment results show that all of those features improve our system's F 1 measure (F) and Recall of Out-of-Vocabulary (R OOV). In particular, the feature collections which contain AVS feature outperform other types of features in terms of F, whereas the feature collections containing TCB/TCF information has better R OOV .",
"pdf_parse": {
"paper_id": "O11-1007",
"_pdf_hash": "",
"abstract": [
{
"text": "This work represents several unsupervised feature selections based on frequent strings that help improve conditional random fields (CRF) model for Chinese word segmentation (CWS). These features include character-based N-gram (CNG), Accessor Variety based string (AVS), and Term Contributed Frequency (TCF) with a specific manner of boundary overlapping. For the experiment, the baseline is the 6-tag, a state-of-the-art labeling scheme of CRF-based CWS; and the data set is acquired from SIGHAN CWS bakeoff 2005. The experiment results show that all of those features improve our system's F 1 measure (F) and Recall of Out-of-Vocabulary (R OOV). In particular, the feature collections which contain AVS feature outperform other types of features in terms of F, whereas the feature collections containing TCB/TCF information has better R OOV .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Many intelligent text processing tasks such as information retrieval, text-to-speech and machine translation assume the ready availability of a tokenization into words, which is relatively straightforward in languages with word delimiters (e.g. space), while a little difficult for Asian languages such as Chinese and Japanese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Chinese word segmentation (CWS) is an essential pre-work for Chinese text processing applications and it has been an active area of research in computational linguistics for two decades. SIGHAN, the Special Interest Group for Chinese Language Processing of the Association for Computational Linguistics, conducted five word segmentation bakeoffs (Sproat and Emerson, 2003; Emerson, 2005; Levow, 2006; Jin and Chen, 2007; Zhao and Liu, 2010) . After years of intensive researches, CWS has achieved high precision, but the issue of out-of-vocabulary word handling still remains.",
"cite_spans": [
{
"start": 346,
"end": 372,
"text": "(Sproat and Emerson, 2003;",
"ref_id": "BIBREF9"
},
{
"start": 373,
"end": 387,
"text": "Emerson, 2005;",
"ref_id": "BIBREF14"
},
{
"start": 388,
"end": 400,
"text": "Levow, 2006;",
"ref_id": "BIBREF16"
},
{
"start": 401,
"end": 420,
"text": "Jin and Chen, 2007;",
"ref_id": "BIBREF18"
},
{
"start": 421,
"end": 440,
"text": "Zhao and Liu, 2010)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1.1"
},
{
"text": "Traditional approaches for CWS adopted dictionary and rules to segment unlabeled texts (c.f. Ma and Chen, 2003) . In recent years, the mainstream is to use statistical machine learning models, especially the Conditional Random Fields (CRF) (Lafferty et al, 2001 ), which shows a moderate performance for sequential labeling problem and achieves competitive results with character position based methods ).",
"cite_spans": [
{
"start": 93,
"end": 111,
"text": "Ma and Chen, 2003)",
"ref_id": "BIBREF10"
},
{
"start": 240,
"end": 261,
"text": "(Lafferty et al, 2001",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The State of the Art of CWS",
"sec_num": "1.2"
},
{
"text": "For incorporating unsupervised feature selections into character position based CRF for CWS, Zhao and Kit (2006; 2007) tried strings based on Accessor Variety (AV), which was developed by Feng et al. (2004) , and co-occurrence strings (COS). Jiang et al. (2010) applied a feature similar to COS, called Term Contributed Boundary (TCB). Tsai (2010) employ statistical association measures non-parametrically through a natural but novel feature representation scheme. Those unsupervised feature selection are based on frequent strings extracted automatically from unlabeled corpora. They are suitable for closed training evaluation that any external resource or extra information is not allowed. Without proper knowledge, the closed training evaluation of word segmentation can be difficult with Out-of-Vocabulary (OOV) words, where frequent strings collected from the test data may help.",
"cite_spans": [
{
"start": 93,
"end": 112,
"text": "Zhao and Kit (2006;",
"ref_id": null
},
{
"start": 113,
"end": 118,
"text": "2007)",
"ref_id": "BIBREF18"
},
{
"start": 188,
"end": 206,
"text": "Feng et al. (2004)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised CRF Feature Selections for CWS",
"sec_num": "1.3"
},
{
"text": "According to Zhao and Kit (2008) , AV-based string (AVS) is one of the most effective unsupervised feature selection for CWS by character position based CRF. This motivates us to seek for explanations for AVS's success. We suspect that AVS is designed to keep overlapping strings but COS/TCB is usually selected with its longest-first nature before integrated into CRF. Hence, we conduct a series of experiments to examine this hypothesis.",
"cite_spans": [
{
"start": 13,
"end": 32,
"text": "Zhao and Kit (2008)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised CRF Feature Selections for CWS",
"sec_num": "1.3"
},
{
"text": "The remainder of the article is organized as follows. Section 2 briefly introduces CRF.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised CRF Feature Selections for CWS",
"sec_num": "1.3"
},
{
"text": "Common unsupervised feature selections based on the concept of frequent strings are explained in Section 3. Section 4 discusses related works. Section 5 describes the design of labeling scheme, feature templates and a framework that is able to encode those overlapping features in a unified way. Details about the experiment are reported in Section 6. Finally, the conclusion is in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised CRF Feature Selections for CWS",
"sec_num": "1.3"
},
{
"text": "Conditional random fields (CRF) are undirected graphical models trained to maximize a conditional probability of random variables X and Y, and the concept is well established for sequential labeling problem (Lafferty et al., 2001) . Given an input sequence (or observation sequence) ",
"cite_spans": [
{
"start": 207,
"end": 230,
"text": "(Lafferty et al., 2001)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2."
},
{
"text": "can be defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2."
},
{
"text": "\uf8f7 \uf8f8 \uf8f6 \uf8ec \uf8ed \uf8eb = \u2211\u2211 = \u2212 T t k t t k k t X y y f Z X Y P 1 1 X ) , , , ( exp 1 ) | ( \u03bb \u03bb (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2."
},
{
"text": "where Z X is the normalization constant that makes probability of all label sequences sum to one, ( )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2."
},
{
"text": "t X y y f t t k , , , 1 \u2212",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2."
},
{
"text": "is a feature function which is often binary valued, but can be real valued, and k \u03bb is a learned weight associated with feature .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2."
},
{
"text": "The feature functions can measure any aspect of state transition Given such a model as defined in Equation (1), the most probable labeling sequence for an input sequence X is as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2."
},
{
"text": ") | ( argmax * X Y P y Y \u039b = .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2."
},
{
"text": "(2) Equation (2) can be efficiently calculated by dynamic programming using Viterbi algorithm. The more details about concepts of CRF and learning parameters could found in (Wallach, 2004) . For sequential labeling tasks like CWS, a linear-chain CRF is currently one of the most popular choices. where many character-based N-grams can be extracted, but some of them are out of context, such as \"\u7136\u79d1\" (so; discipline) and \"\u5b78\u7684\" (study; of), even when they are relatively frequent,. For the purpose of interpreting overlapping behavior of frequent strings, however, character-based N-grams could still be useful for baseline analysis and implementation.",
"cite_spans": [
{
"start": 173,
"end": 188,
"text": "(Wallach, 2004)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2."
},
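The decoding step of Equation (2) can be made concrete with a minimal sketch of the Viterbi algorithm (our illustration, not the authors' code): the hypothetical `score` callback stands in for the weighted feature sum \u2211_k \u03bb_k f_k(y_{t-1}, y_t, X, t), and the function returns the highest-scoring label sequence.

```python
def viterbi(obs, labels, score):
    """Find argmax_Y sum_t score(y_prev, y_t, obs, t), the log-space
    form of Equation (2) for a linear-chain model. `score` plays the
    role of sum_k lambda_k * f_k(y_{t-1}, y_t, X, t)."""
    # best[y]: score of the best partial path ending in label y
    best = {y: score(None, y, obs, 0) for y in labels}
    back = []  # backpointers, one dict per position t >= 1
    for t in range(1, len(obs)):
        ptr, nxt = {}, {}
        for y in labels:
            prev = max(labels, key=lambda yp: best[yp] + score(yp, y, obs, t))
            ptr[y] = prev
            nxt[y] = best[prev] + score(prev, y, obs, t)
        back.append(ptr)
        best = nxt
    # follow backpointers from the best final label
    y = max(labels, key=lambda yl: best[yl])
    path = [y]
    for ptr in reversed(back):
        y = ptr[y]
        path.append(y)
    return list(reversed(path))
```

With binary features, each `score` call is just the sum of the weights of the features that fire, so the complexity is O(T \u00d7 |labels|\u00b2) score evaluations.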
{
"text": "The lack of correct information about the actual boundary and frequency of a multi-character/word expression has been researched in different languages. The distortion of phrase boundaries and frequencies was first observed in the Vodis Corpus when the word-based bigram \"RAIL ENQUIRIES\" and word-based trigram \"BRITISH RAIL ENQUIRIES\" were estimated and reported (O'Boyle, 1993; Ha et al., 2005) . Both of them occur 73 times, which is a large number for such a small corpus. \"ENQUIRIES\" follows \"RAIL\" with a very high probability when \"BRITISH\" precede it. However, when \"RAIL\" is preceded by words other than \"BRITISH,\" \"ENQUIRIES\" does not occur, but words like \"TICKET\" or \"JOURNEY\" may. Thus, the bigram \"RAIL ENQUIRIES\" gives a misleading probability that \"RAIL\" is followed by \"ENQUIRIES\" irrespective of what precedes it.",
"cite_spans": [
{
"start": 380,
"end": 396,
"text": "Ha et al., 2005)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reduced N-gram",
"sec_num": "3.2"
},
{
"text": "A common solution to this problem is that if some N-grams consist of others, then the frequencies of the shorter ones have to be discounted with the frequencies of the longer ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reduced N-gram",
"sec_num": "3.2"
},
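The discounting idea can be illustrated with a rough sketch (our own simplified version, not the exact algorithm of Lin and Yu or Sung et al.): an occurrence of a repeated n-gram is ignored whenever it lies inside an occurrence of a repeated (n+1)-gram, so longer strings absorb the counts of the shorter strings they explain.

```python
from collections import Counter

def reduced_ngram_counts(tokens, max_n=3):
    """Rough sketch of reduced n-gram counting: an occurrence of a
    repeated n-gram contributes to its count only when it is not
    contained in an occurrence of a repeated (n+1)-gram."""
    # raw counts for all lengths up to max_n + 1 (needed for coverage checks)
    raw = {n: Counter(tuple(tokens[i:i + n])
                      for i in range(len(tokens) - n + 1))
           for n in range(1, max_n + 2)}
    reduced = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            gram = tuple(tokens[i:i + n])
            if raw[n][gram] < 2:
                continue  # only repeated strings are candidates
            covered = False
            # a containing (n+1)-gram extends this occurrence left or right
            for j in (i - 1, i):
                if j < 0 or j + n + 1 > len(tokens):
                    continue
                if raw[n + 1][tuple(tokens[j:j + n + 1])] >= 2:
                    covered = True
            if not covered:
                reduced[gram] += 1
    return reduced
```

This mirrors the Vodis example: where a bigram like "RAIL ENQUIRIES" always appears inside the repeated trigram "BRITISH RAIL ENQUIRIES," its reduced count drops toward zero.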
{
"text": "For Chinese, Lin and Yu (2001) reported a similar problem and its corresponding solution in the sense of reduced N-gram of Chinese character. By excluding N-grams with their numbers of appearance that fully depend on other super-sequences, \"\u7136\u79d1\" and \"\u5b78\u7684\" from the sample texts in the previous sub-section are not candidates of string anymore. Zhao and Kit (2007) described the same concept briefly as co-occurrence string (COS). Sung et al. (2008) invented a specific data structure for suffix array algorithm to calculate exact boundaries of phrase-alike string and their frequencies called term-contributed boundaries (TCB) and term-contributed frequencies (TCF), respectively, to analogize similarities and differences with the term frequencies. Since we use the program of TCB/TCF for experiment within this study, the family of reduced N-gram will be referred as TCB hereafter for convenience. According to Zhao and Kit (2007) , AV and BE both assume that the border of a potential Chinese word is located where the uncertainty of successive character increases. They believe that AV and BE are the discrete and continuous version, respectively, of a fundamental work of Harris (1970), and then decided to adopt AVS as unsupervised feature selection for CRF-based CWS. We follow their choice in hope of producing a comparable study. AV of a string s is defined as",
"cite_spans": [
{
"start": 342,
"end": 361,
"text": "Zhao and Kit (2007)",
"ref_id": null
},
{
"start": 428,
"end": 446,
"text": "Sung et al. (2008)",
"ref_id": "BIBREF20"
},
{
"start": 911,
"end": 930,
"text": "Zhao and Kit (2007)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reduced N-gram",
"sec_num": "3.2"
},
{
"text": ")} ( ), ( min{ ) ( s R s L s AV av av = .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uncertainty of Succeeding Character",
"sec_num": "3.3"
},
{
"text": "(3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uncertainty of Succeeding Character",
"sec_num": "3.3"
},
{
"text": "In Equation 3, L av (s) and R av (s) are defined as the number of distinct preceding and succeeding characters, respectively, except if the adjacent character has been absent because of sentence boundary, then the pseudo-character of sentence beginning or sentence ending will be accumulated indistinctly. Feng et al. (2004) also developed more heuristic rules to remove strings that contain known words or adhesive characters. For the strict meaning of unsupervised feature selection and for the sake of simplicity, those additional rules are dropped in this study.",
"cite_spans": [
{
"start": 306,
"end": 324,
"text": "Feng et al. (2004)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Uncertainty of Succeeding Character",
"sec_num": "3.3"
},
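A small sketch of Equation (3) over a toy corpus (our illustration; the `<` and `>` boundary pseudo-characters and the length cap are assumptions, not the paper's exact implementation):

```python
from collections import defaultdict

def accessor_variety(sentences, max_len=5):
    """Compute AV(s) = min(L_av(s), R_av(s)) for every substring up to
    max_len characters. Distinct left/right neighbours are counted, with
    pseudo-characters '<' and '>' standing in for sentence boundaries."""
    left = defaultdict(set)
    right = defaultdict(set)
    for sent in sentences:
        padded = "<" + sent + ">"
        # i..j-1 spans a substring of the original sentence
        for i in range(1, len(padded) - 1):
            for j in range(i + 1, min(i + 1 + max_len, len(padded))):
                s = padded[i:j]
                left[s].add(padded[i - 1])   # distinct preceding characters
                right[s].add(padded[j])      # distinct succeeding characters
    return {s: min(len(left[s]), len(right[s])) for s in left}
```

A substring flanked by many distinct neighbours on both sides gets a high AV and is thus a plausible word candidate.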
{
"text": "This section briefly describes the following three related works.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other Related Works",
"sec_num": "4."
},
{
"text": "Besides papers of TCB/TCF extraction (Sung et al., 2008) , Chinese frequent strings (Lin et al., 2001) ",
"cite_spans": [
{
"start": 37,
"end": 56,
"text": "(Sung et al., 2008)",
"ref_id": "BIBREF20"
},
{
"start": 84,
"end": 102,
"text": "(Lin et al., 2001)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Frequent String Extraction Algorithm",
"sec_num": "4.1"
},
{
"text": "In this study, the 6-tag approach (Zhao et al., 2010) is adopted as our formulation, which This configuration of CRF without any additional unsupervised feature selection is also the control group of the experiment. Table 1 provides a sample of labeled training data. Table 1 . A Sample of the 6-tag Labels",
"cite_spans": [],
"ref_spans": [
{
"start": 216,
"end": 223,
"text": "Table 1",
"ref_id": null
},
{
"start": 268,
"end": 275,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Character Position Based Labels",
"sec_num": "5.1"
},
{
"text": "Character Label \u53cd B 1 \u800c E \u6703 S \u6b32 B 1 \u901f B 2 \u5247 B 3 \u4e0d M \u9054 E",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Character Position Based Labels",
"sec_num": "5.1"
},
{
"text": "For the sample text \"\u53cd\u800c (contrarily) / \u6703 (make) / \u6b32\u901f\u5247\u4e0d\u9054 (more haste, less speed)\" (on the contrary, haste makes waste), the tag B 1 stands for the beginning character of a word, while B 2 and B 3 represent for the second character and the third character of a word, respectively. The ending character of a word is tagged as E. Once a word consists of more than four characters, the tag for all the middle characters between B 3 and E is M. Finally, the tag S is reserved for single-character words specifically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Character Position Based Labels",
"sec_num": "5.1"
},
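The labeling rules above can be sketched as a small function (ours, for illustration), which reproduces the labels of Table 1 for the sample text:

```python
def six_tag_labels(words):
    """Map each character of a segmented sentence to the 6-tag scheme:
    B1/B2/B3 for the first three characters of a multi-character word,
    M for any further middle characters, E for the last character,
    and S for single-character words."""
    labels = []
    for word in words:
        n = len(word)
        if n == 1:
            labels.append("S")
            continue
        for i in range(n - 1):
            labels.append("B%d" % (i + 1) if i < 3 else "M")
        labels.append("E")
    return labels
```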
{
"text": "Feature instances are generated from templates based on the work of Ratnaparkhi (1996). Table 1 , if the current position is at the label M, features generated by C -1 , C 0 and C 1 are \"\u5247,\" \"\u4e0d\" and \"\u9054,\" respectively. Meanwhile, for window size 2, C -1 C 0 , C 0 C 1 and C -1 C 1 expands features of the label M to \"\u5247\u4e0d,\" \"\u4e0d\u9054\" and \"\u5247\u9054,\"",
"cite_spans": [],
"ref_spans": [
{
"start": 88,
"end": 95,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature Templates",
"sec_num": "5.2"
},
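These unigram and bigram templates can be sketched as follows (our illustration; the out-of-range placeholder `_` is an assumption):

```python
def unigram_bigram_features(chars, t):
    """Feature templates with a context window of two, as in the text:
    the unigrams C-1, C0, C1 plus the pairs C-1C0, C0C1 and C-1C1."""
    def c(offset):
        # character at the given offset, or a placeholder off the ends
        i = t + offset
        return chars[i] if 0 <= i < len(chars) else "_"
    return {
        "C-1": c(-1), "C0": c(0), "C1": c(1),
        "C-1C0": c(-1) + c(0), "C0C1": c(0) + c(1), "C-1C1": c(-1) + c(1),
    }
```

At the position labeled M in the sample text, this yields exactly the features listed above: "\u5247," "\u4e0d," "\u9054," "\u5247\u4e0d," "\u4e0d\u9054" and "\u5247\u9054."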
{
"text": "respectively. According to , the context window size in three tokens is effective to catch parameters of 6-tag approach for most strings not longer than five characters. Our pilot test for this case, however, shows that context window size in two tokens would be sufficient without significant performance decreasing. We also intentionally avoid using feature templates that determine character types like alphabet, digit, punctuation, date/time and other non-Chinese characters, to stay with the strict protocol of closed training and unsupervised learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Templates",
"sec_num": "5.2"
},
{
"text": "Unsupervised feature selections that will be introduced in the next sub-section are of course generated by the same template, except the binding target moves column by column as listed in tables of the next sub-section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Templates",
"sec_num": "5.2"
},
{
"text": "By default, CRF++ generates features not only for the prediction label at the current position, but also for combinations of the prediction label at both the previous and the current position, which should not be confused with the context window size mentioned above. The logarithm ranking mechanism in Equation (4) is inspired by Zipf's law with the intention to alleviate the potential data sparseness problem of infrequent strings. The rank r and the corresponding character positions of a string are then concatenated as feature tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Templates",
"sec_num": "5.2"
},
{
"text": "To give the reader a clearer picture about what feature tokens look like, a sample representation for CNG, AVS or TCB is demonstrated and explained by Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 151,
"end": 158,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "A Unified Feature Representation for CNG, AVS and TCB",
"sec_num": "5.3"
},
{
"text": "For example, judging by strings with two characters, one of the strings \"\u53cd\u800c\" gets rank r = 3 , therefore the column of two-character feature tokens has \"\u53cd\" denoted as 3B 1 and \"\u800c\" denoted as 3E. If another two-character string \"\u800c\u6703\" competes with \"\u53cd\u800c\" at the position of \"\u800c\" with a lower rank r = 0, then 3E is selected for feature representation of the token at a certain position. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Unified Feature Representation for CNG, AVS and TCB",
"sec_num": "5.3"
},
{
"text": "\u53cd 5S 3B 1 4B 1 0B 1 0B 1 B 1 \u800c 6S 3E 4B 2 0B 2 0B 2 E \u6703 6S 0E 4E 0B 3 0B 3 S \u6b32 4S 0E 0E 0E 0M B 1 \u901f 4S 0E 0E 0E 0E B 2 \u5247 6S 3B 1 0E 0E 0E B 3 \u4e0d 7S 3E 0E 0E 0E M \u9054 5S 3E 0E 0E 0E E",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Unified Feature Representation for CNG, AVS and TCB",
"sec_num": "5.3"
},
{
"text": "Note that when the string \"\u5247\u4e0d\" conflicts with the string \"\u4e0d\u9054\" at the position of \"\u4e0d\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Unified Feature Representation for CNG, AVS and TCB",
"sec_num": "5.3"
},
{
"text": "with the same rank r = 3, the corresponding character position with rank of the leftmost string, which is 3E in this case, is applied arbitrarily.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Unified Feature Representation for CNG, AVS and TCB",
"sec_num": "5.3"
},
{
"text": "Although those are indeed common situations of overlapping strings, we simply inherit the above rules by Zhao and Kit (2008) for the sake of compatibility. In fact, we have done a pilot test with a more complicated representation like 3E-0B 1 for \"\u800c\" and 3E-3B 1 for \"\u4e0d\" to keep the overlapping information within each column, but the test result shows no significant differences in terms of performance. Since the statistics of the pilot test could be considerably redundant, they are omitted in this paper.",
"cite_spans": [
{
"start": 105,
"end": 124,
"text": "Zhao and Kit (2008)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Unified Feature Representation for CNG, AVS and TCB",
"sec_num": "5.3"
},
{
"text": "To make an informative comparison, we also apply the original version of non-overlapping COS/TCB feature that is selected by forward maximum matching algorithm and without ranks (Zhao and Kit, 2007; Jiang et al., 2010) . The following table illustrates a sample representation of features for this case. ",
"cite_spans": [
{
"start": 178,
"end": 198,
"text": "(Zhao and Kit, 2007;",
"ref_id": null
},
{
"start": 199,
"end": 218,
"text": "Jiang et al., 2010)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Unified Feature Representation for CNG, AVS and TCB",
"sec_num": "5.3"
},
{
"text": "B 1 B 1 \u800c B 2 E \u6703 E S \u6b32 -1 B 1 \u901f -1 B 2 \u5247 -1 B 3 \u4e0d -1 M \u9054 -1 E",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Unified Feature Representation for CNG, AVS and TCB",
"sec_num": "5.3"
},
{
"text": "Note that there are several features encoded as -1 individually to represent that the desired string is unseen. For the family of reduced N-grams, such as COS or TCB, it means that either the string is always occupied by other super-strings or simply does not appear more than once.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Unified Feature Representation for CNG, AVS and TCB",
"sec_num": "5.3"
},
{
"text": "The length of a string is limited to five characters for the sake of efficiency and consistency with the 6-tag approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Unified Feature Representation for CNG, AVS and TCB",
"sec_num": "5.3"
},
{
"text": "The version 0.54 of the CRF++ employs L-BFGS optimization and the tunable hyper-parameter, i.e. the Gaussian prior, set to 100 throughout the whole experiment. To estimate the differences of performance between configurations of CWS experiment, this work uses the confidence level, which has been applied since SIGHAN CWS bakeoff 2003 (Sproat et al., 2003) , that assume the recall (or precision) X of accuracy (or OOV recognition) represents the probability that a word (or OOV word) will be identified from N words in total, and that a binomial distribution is appropriate for the experiment. Confidence levels of P, R, and R OOV appear in Table 5 under the column C P , C R , and C Roov , respectively, are calculated at the 95% confidence interval with the formula \u00b12\u221a([X(1-X)]/N). Two configurations of CWS experiment are then considered to be statistically different at a 95% confidence level if one of their C P , C R , or C Roov is different.",
"cite_spans": [
{
"start": 335,
"end": 356,
"text": "(Sproat et al., 2003)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 642,
"end": 649,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experiment",
"sec_num": "6."
},
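The interval half-width from the formula above can be computed directly (a minimal sketch):

```python
import math

def confidence_halfwidth(x, n):
    """Half-width of the ~95% binomial confidence interval used in the
    SIGHAN bake-offs: 2 * sqrt(X * (1 - X) / N)."""
    return 2 * math.sqrt(x * (1 - x) / n)
```

For example, a recall of 0.5 measured over 10,000 words gives a half-width of 0.01, so two systems whose recalls differ by more than that are taken to be statistically different at this confidence level.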
{
"text": "The most significant type of error is unintentionally segmented alphanumeric sequences, such as English words or factoids in Arabic numerals. Rather than developing another set of feature templates for those non-Chinese characters that may violate rules of closed training evaluation, a post-processing, which is mentioned in the official report of SIGHAN CWS bakeoff 2005 (Emerson, 2005) , has been applied to remove spaces between non-Chinese characters in the gold standard data manually, since there are no urgent expectations of correct segmentation on non-Chinese text. Table 5 lists the statistics after the post-processing.",
"cite_spans": [
{
"start": 373,
"end": 388,
"text": "(Emerson, 2005)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 576,
"end": 583,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "6.4"
},
{
"text": "Further discussions are mainly based on this post-processed result without loss of generality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "6.4"
},
{
"text": "Numbers in bold face and bold-italic style indicate the best and the second-best results of a certain evaluation metric, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "6.4"
},
{
"text": "Statistics show clear trends that the feature collections which contain AVS outperforms other types of unsupervised feature selections on F, and the feature collections containing TCB/TCF information usually has better R OOV . It has been observed that using any of the unsupervised feature selections could create short patterns for CRF learner, which might break more English words than using the 6-tag approach solely. AVS, TCF and TCB, however, resolve more overlapping ambiguities of Chinese words than the 6-tag approach and CNG. Interestingly, even for the unsupervised feature selection without rank and overlapping information, TCB successfully recognizes \"\u4f9d \u9760 / \u5355\u4f4d / \u7684 / \u7ebd\u5e26 / \u6765 / \u7ef4\u6301,\" while the 6-tag approach see this phrase incorrectly as \"\u4f9d\u9760 / \u5355\u4f4d / \u7684 / \u7ebd / \u5e26\u6765 / \u7ef4\u6301.\" TCB also saves more factoids, such as \"\u4e00\u4e8c\u4e5d\uff0e\u4e5d / \u5de6\u53f3\" (around 129.9) from scattered tokens, such as \"\u4e00\u4e8c\u4e5d / \uff0e / \u4e5d \u5de6\u53f3\" (129 point 9 around).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "6.4"
},
{
"text": "The above observations suggest that the quality of a string as a word-alike candidate should be an important factor for unsupervised feature selection injected CRF learner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "6.4"
},
{
"text": "Relatively speaking, CNG probably brings in too much noise. Non-overlapping COS/TCB seems to be a moderate choice with a lower training cost of CRF than those of other overlapping features. This confirms our hypothesis at the end of Section 1.3 that, including overlapping information as an unsupervised feature selection may help improving CWS performance of supervised labeling scheme of CRF.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "6.4"
},
{
"text": "This paper provides a study about CRF-based CWS integrated with unsupervised and overlapping feature selections. The experiment results show that the feature collections which contain AVS obtains better performance in terms of F 1 measure score, and TCB/TCF enhances the 6-tag approach on the Recall of Out-of-Vocabulary. In the future, we will search for a hybrid method that utilizes information both inside and outside Chinese words simultaneously.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Works",
"sec_num": "7."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A Study of an N-Gram Language Model for Speech Recognition",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "O'Boyle",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter O'Boyle, \"A Study of an N-Gram Language Model for Speech Recognition\", PhD Thesis, Queen's University Belfast , 1993.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Identification of Unknown Words from Corpus",
"authors": [
{
"first": "Cheng-Huang",
"middle": [],
"last": "Tung",
"suffix": ""
},
{
"first": "His-Jian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Proceedings of Chinese and Oriental Languages",
"volume": "8",
"issue": "",
"pages": "131--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cheng-Huang Tung and His-Jian Lee, \"Identification of Unknown Words from Corpus\", Computational Proceedings of Chinese and Oriental Languages, vol.8, pp.131-145, 1994.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An Unsupervised Iterative Method for Chinese New Lexicon Extraction",
"authors": [
{
"first": "Jing-Shin",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Keh-Yih",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics and Chinese Language Processing",
"volume": "2",
"issue": "",
"pages": "97--148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing-Shin Chang and Keh-Yih Su, \"An Unsupervised Iterative Method for Chinese New Lexicon Extraction\", Computational Linguistics and Chinese Language Processing, vol.2, no.2, pp.97-148, 1997.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Using Character Bigram for Ambiguity Resolution In Chinese Word Segmentation (In Chinese)",
"authors": [
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Changning",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [
"K"
],
"last": "Tsou",
"suffix": ""
}
],
"year": 1997,
"venue": "Computer Research and Development",
"volume": "34",
"issue": "5",
"pages": "332--339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maosong Sun, Changning Huang, Benjamin K.Tsou, \"Using Character Bigram for Ambiguity Resolution In Chinese Word Segmentation (In Chinese)\", Computer Research and Development, vol.34, no.5, pp.332-339, 1997.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "PAT-tree-based Keyword Extraction for Chinese Information Retrieval",
"authors": [
{
"first": "Lee-Feng",
"middle": [],
"last": "Chien",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 20th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "50--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee-Feng Chien, \"PAT-tree-based Keyword Extraction for Chinese Information Retrieval\", in Proceedings of the 20th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp.50-58, 1997.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Conditional Random Fields Probabilistic Models for Segmenting and Labeling Sequence Data",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "591--598",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lafferty, Andrew McCallum, Fernando Pereira, \"Conditional Random Fields Probabilistic Models for Segmenting and Labeling Sequence Data\", in Proceedings of International Conference on Machine Learning, pp.591-598, 2001.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Extracting Chinese Frequent Strings without a Dictionary from a Chinese Corpus and its Applications",
"authors": [
{
"first": "Yih-Jeng",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Ming-Shing",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2001,
"venue": "Journal of Information Science and Engineering",
"volume": "17",
"issue": "",
"pages": "805--824",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yih-Jeng Lin, Ming-Shing Yu, \"Extracting Chinese Frequent Strings without a Dictionary from a Chinese Corpus and its Applications\", Journal of Information Science and Engineering 17, pp.805-824, 2001.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "SRILM -An Extensible Language Modeling Toolkit",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "the Proceedings of Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "901--904",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke, \"SRILM -An Extensible Language Modeling Toolkit\", in the Proceedings of Spoken Language Processing, pp.901-904, 2002.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Unsupervised Training for Overlapping Ambiguity Resolution in Chinese Word Segmentation",
"authors": [
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Chang-Ning",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2003,
"venue": "the Proceedings of The Second SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mu Li, Jianfeng Gao, Chang-Ning Huang, \"Unsupervised Training for Overlapping Ambiguity Resolution in Chinese Word Segmentation\", in the Proceedings of The Second SIGHAN Workshop on Chinese Language Processing, pp.1-7, 2003.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The First International Chinese Word Segmentation Bakeoff",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Sproat",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Emerson",
"suffix": ""
}
],
"year": 2003,
"venue": "the Proceedings of The Second SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "113--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Sproat, Thomas Emerson, \"The First International Chinese Word Segmentation Bakeoff\", in the Proceedings of The Second SIGHAN Workshop on Chinese Language Processing, Sapporo, July 11-12, 2003, pp.113-143.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Introduction to CKIP Chinese Word Segmentation System for the First International Chinese Word Segmentation Bakeoff",
"authors": [
{
"first": "Wei-Yun",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Keh-Jiann",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2003,
"venue": "the Proceedings of Second SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "168--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei-Yun Ma, Keh-Jiann Chen, \"Introduction to CKIP Chinese Word Segmentation System for the First International Chinese Word Segmentation Bakeoff\", in the Proceedings of Second SIGHAN Workshop on Chinese Language Processing, pp.168-171, 2003.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Accessor Variety Criteria for Chinese Word Extraction",
"authors": [
{
"first": "Haodi",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaotie",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Wiemin",
"middle": [],
"last": "Zheng",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics",
"volume": "30",
"issue": "1",
"pages": "75--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haodi Feng, Kang Chen, Xiaotie Deng, and Wiemin Zheng, \"Accessor Variety Criteria for Chinese Word Extraction\", Computational Linguistics, vol.30, no.1, pp.75-93, 2004.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Conditional Random Fields An Introduction",
"authors": [
{
"first": "Hanna",
"middle": [
"M"
],
"last": "Wallach",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanna M. Wallach, \"Conditional Random Fields An Introduction\", Department of Computer and Information Science, University of Pennsylvania, Tech. Rep. MS-CIS-04-21, 2004.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Reduced N-Grams for Chinese Evaluation",
"authors": [
{
"first": "Le Quan",
"middle": [],
"last": "Ha",
"suffix": ""
},
{
"first": "Rowan",
"middle": [],
"last": "Seymour",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Hanna",
"suffix": ""
},
{
"first": "Francis",
"middle": [
"J"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational Linguistics and Chinese Language Processing",
"volume": "10",
"issue": "",
"pages": "19--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Le Quan Ha, Rowan Seymour, Philip Hanna and Francis J. Smith, \"Reduced N-Grams for Chinese Evaluation\", Computational Linguistics and Chinese Language Processing, vol.10, no.1, pp.19-34, 2005.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The Second International Chinese Word Segmentation Bakeoff",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Emerson",
"suffix": ""
}
],
"year": 2005,
"venue": "the Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "123--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Emerson, \"The Second International Chinese Word Segmentation Bakeoff\", in the Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, pp.123-133, 2005.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Statistical Substring Reduction in Linear Time",
"authors": [
{
"first": "Xueqiang",
"middle": [],
"last": "L\u00fc",
"suffix": ""
},
{
"first": "Le",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2005,
"venue": "the Proceedings of the 1st International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "320--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xueqiang L\u00fc, Le Zhang, \"Statistical Substring Reduction in Linear Time\", in the Proceedings of the 1st International Joint Conference on Natural Language Processing, pp.320-327, 2005.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The Third International Chinese Language Processing Bakeoff Word Segmentation and Named Entity Recognition",
"authors": [
{
"first": "Gina-Anne",
"middle": [],
"last": "Levow",
"suffix": ""
}
],
"year": 2006,
"venue": "the Proceedings of The Fifth SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "108--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gina-Anne Levow, \"The Third International Chinese Language Processing Bakeoff Word Segmentation and Named Entity Recognition\", in the Proceedings of The Fifth SIGHAN Workshop on Chinese Language Processing, pp.108-117, 2006.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Subword-based Tagging for Confidence-dependent Chinese Word Segmentation",
"authors": [
{
"first": "Ruiqiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Genichiro",
"middle": [],
"last": "Kikui",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2006,
"venue": "the Proceedings of COLING/ACL",
"volume": "",
"issue": "",
"pages": "961--968",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruiqiang Zhang, Genichiro Kikui, Eiichiro Sumita, \"Subword-based Tagging for Confidence-dependent Chinese Word Segmentation\", in the Proceedings of COLING/ACL, pp.961-968, 2006.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The Fourth International Chinese Language Processing Bakeoff : Chinese Word Segmentation, Named Entity Recognition and Chinese POS Tagging",
"authors": [
{
"first": "Guangjin",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2007,
"venue": "the Proceedings of The Sixth SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "69--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guangjin Jin, Xiao Chen, \"The Fourth International Chinese Language Processing Bakeoff : Chinese Word Segmentation, Named Entity Recognition and Chinese POS Tagging\", in the Proceedings of The Sixth SIGHAN Workshop on Chinese Language Processing, pp.69-81, 2007.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Statistical Properties of Overlapping Ambiguities in Chinese Word Segmentation and a Strategy for Their Disambiguation",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Qiao",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Menzel",
"suffix": ""
}
],
"year": 2008,
"venue": "the Proceedings of Text, Speech and Dialogue",
"volume": "",
"issue": "",
"pages": "177--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Qiao, Maosong Sun, Wolfgang Menzel, \"Statistical Properties of Overlapping Ambiguities in Chinese Word Segmentation and a Strategy for Their Disambiguation\", in the Proceedings of Text, Speech and Dialogue, pp.177-186, 2008.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Compute the Term Contributed Frequency",
"authors": [
{
"first": "Cheng-Lung",
"middle": [],
"last": "Sung",
"suffix": ""
},
{
"first": "Hsu-Chun",
"middle": [],
"last": "Yen",
"suffix": ""
},
{
"first": "Wen-Lian",
"middle": [],
"last": "Hsu",
"suffix": ""
}
],
"year": 2008,
"venue": "the Proceedings of The Eighth International Conference on Intelligent Systems Design and Applications",
"volume": "",
"issue": "",
"pages": "325--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cheng-Lung Sung, Hsu-Chun Yen, Wen-Lian Hsu, \"Compute the Term Contributed Frequency\", in the Proceedings of The Eighth International Conference on Intelligent Systems Design and Applications, pp.325-328, 2008.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Exploiting Unlabeled Text with Different Unsupervised Segmentation Criteria for Chinese Word Segmentation",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chunyu",
"middle": [],
"last": "Kit",
"suffix": ""
}
],
"year": 2008,
"venue": "the Proceedings of The Ninth International Conference on Intelligent Text Processing and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "17--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao, Chunyu Kit, \"Exploiting Unlabeled Text with Different Unsupervised Segmentation Criteria for Chinese Word Segmentation\", in the Proceedings of The Ninth International Conference on Intelligent Text Processing and Computational Linguistics, pp.17-23, 2008.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A Unified Character-Based Tagging Framework for Chinese Word Segmentation",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chang-Ning",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Bao-Liang",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2010,
"venue": "ACM Transactions on Asian Language Information Processing",
"volume": "9",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao, Chang-Ning Huang, Mu Li, Lu, Bao-Liang Lu, \"A Unified Character-Based Tagging Framework for Chinese Word Segmentation\", ACM Transactions on Asian Language Information Processing, vol.9, no.2, 2010.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "the Proceedings of The First CIPS-SIGHAN Joint Conference on Chinese Language Processing",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "199--209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao, Qun Liu, \"The CIPS-SIGHAN CLP2010 Chinese Word Segmentation Backoff\", in the Proceedings of The First CIPS-SIGHAN Joint Conference on Chinese Language Processing, pp.199-209, 2010.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Chinese text segmentation: A hybrid approach using transductive learning and statistical association measures",
"authors": [
{
"first": "Richard Tzong-Han",
"middle": [],
"last": "Tsai",
"suffix": ""
}
],
"year": 2010,
"venue": "Expert Systems with Applications",
"volume": "37",
"issue": "5",
"pages": "3553--3560",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Tzong-Han Tsai, \"Chinese text segmentation: A hybrid approach using transductive learning and statistical association measures\", Expert Systems with Applications, vol.37, no.5, pp.3553-3560, 2010.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Term Contributed Boundary Tagging by Conditional Random Fields for SIGHAN",
"authors": [
{
"first": "Tian-Jian",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Shih-Hung",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Cheng-Lung",
"middle": [],
"last": "Sung",
"suffix": ""
},
{
"first": "Wen-Lian",
"middle": [],
"last": "Hsu",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tian-Jian Jiang, Shih-Hung Liu, Cheng-Lung Sung and Wen-Lian Hsu, \"Term Contributed Boundary Tagging by Conditional Random Fields for SIGHAN 2010",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "the Proceedings of the CIPS-SIGHAN Joint Conference on Chinese Language Processing",
"authors": [
{
"first": "Chinese",
"middle": [],
"last": "Word Segmentation",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bakeoff",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "266--269",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chinese Word Segmentation Bakeoff\", in the Proceedings of the CIPS-SIGHAN Joint Conference on Chinese Language Processing, Beijing, China, pp.266-269, August 28-29, 2010.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": ", a conditional probability of linear-chain CRF with parameters { } n",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "and the entire observation sequence X centered at the current position t.",
"uris": null
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"text": "Character-based N-gram The word boundary and the word frequency are the standard notions of frequency in corpus-based natural language processing. Word-based N-gram is an intuitive and effective solution of language modeling. For languages without explicit word boundary such as Chinese, character-based N-gram (CNG) is usually insufficient. For example, consider the following sample texts in Chinese \uf06c \"\u81ea\u7136\u79d1\u5b78\u7684\u91cd\u8981\u6027\" (the importance of natural science); \uf06c \"\u81ea\u7136\u79d1\u5b78\u7684\u7814\u7a76\u662f\u552f\u4e00\u7684\u9014\u5f91\" (natural science research is the only way).",
"uris": null
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"text": "et al. (2004) proposed Accessor Variety (AV) to measure how likely a string is a Chinese word. Another measurement called Boundary Entropy or Branching Entropy (BE) exists in some works (Tung and Lee, 1994; Chang and Su, 1997; Cohen and Adams, 2001; Cohen et al., 2002; Huang and Powers, 2003; Tanaka-Ishii, 2005; Jin and Tanaka-Ishii, 2006;Cohen et al., 2006). The basic idea behind those measurements is closely related to one particular perspective of N-gram and information theory as cross-entropy or Perplexity.",
"uris": null
},
"FIGREF4": {
"num": null,
"type_str": "figure",
"text": "achieves a very competitive performance recently, and is one of the most fine-grained character-position-based labeling schemes. According to Zhao et al. (2010), since less than 1% Chinese words are longer than five characters in most corpora from SIGHAN CWS bakeoffs 2003, 2005, 2006 and 2007, the coverage of 6-tag approach should be good enough.",
"uris": null
},
"FIGREF5": {
"num": null,
"type_str": "figure",
"text": "To compare different types of overlapping strings as unsupervised feature selection systematically, we extend the work of Zhao and Kit (2008) into a unified representation of features. The representation accommodates both character position of a string and this string's likelihood ranked in logarithm. Formally, the ranking function for a string s with a score x counted by either CNG, AVS or TCB is defined as",
"uris": null
},
"TABREF0": {
"content": "<table/>",
"num": null,
"text": "and reduced N-gram (Ha et al., 2005) that are mentioned earlier, the article about a linear algorithm for Frequency of Substring Reduction (L\u00fc and Zhang, 2005) also falls into this category. Most of them focused on the computational complexity of algorithms. More general algorithms for frequent string extraction are usually suffix array (Manber and Myers, 1993) and PAT-tree (Chien, 1997).4.2 Unsupervised Word Segmentation MethodZhao and Kit (2008) have explored several unsupervised strategies with their unified goodness measurement of logarithm ranking, including Frequency of Substring with for incorporating unsupervised feature selections into supervised CRF learning, those methods usually filter out word-alike candidates by their own scoring mechanism directly.",
"html": null,
"type_str": "table"
},
"TABREF1": {
"content": "<table><tr><td>Feature</td><td>Function</td></tr><tr><td>C -1 , C 0 , C 1</td><td>Previous, current, or next token</td></tr><tr><td>C -1 C 0</td><td>Previous and current tokens</td></tr><tr><td>C 0 C 1</td><td>Current and next tokens</td></tr><tr><td>C -1 C 1</td><td>Previous and next tokens</td></tr></table>",
"num": null,
"text": "explains their abilities. Feature Template",
"html": null,
"type_str": "table"
},
"TABREF2": {
"content": "<table><tr><td>Input</td><td colspan=\"3\">Unsupervised Feature Selection</td><td/><td>Label</td></tr><tr><td/><td>1 char</td><td>2 char</td><td>3 char</td><td>4 char</td><td>5 char</td></tr></table>",
"num": null,
"text": "A Sample of the Unified Feature Representation for Overlapping String",
"html": null,
"type_str": "table"
},
"TABREF3": {
"content": "<table><tr><td>Input Original COS/TCB Feature</td><td>Label</td></tr><tr><td>\u53cd</td><td/></tr></table>",
"num": null,
"text": "A Sample of the Representation for Non-overlapping COS/TCB Strings",
"html": null,
"type_str": "table"
},
"TABREF4": {
"content": "<table><tr><td>The corpora used for experiment are from SIGHAN CWS bakeoff 2005. It comes with four different standards including Academia Sinica (AS), City University of Hong Kong (CityU), Microsoft Research (MSR) and Peking University (PKU). 100 segmented are that words of number the segmented correctly thatare words of number the \u00d7 = P . (5) % 100 standard gold in the words of number the segmented correctly are that words of number the \u00d7 = R . (6) R P R P F + \u00d7 \u00d7 = 2 . (7) 6.2 % % 100 standard gold in the words OOV of number . the segmented correctly are that words OOV of number the \u00d7 = OOV R (8)</td></tr></table>",
"num": null,
"text": "Unsupervised Feature CollectionUnsupervised feature selections are collected according to pairs of corresponding training/test corpus. CNG and AVS are arranged with the help from SRILM(Stolcke, 2002). TCB strings and their ranks converted from TCF are calculated by YASA. To distinguish the ranked and overlapping feature of TCB/TCF from those of the original version of COS/TCB based features, the former are denoted as TCF to indicate the score source for ranking, and the abbreviation of the later remains as TCB.6.3 Evaluation MetricThe evaluation metric of CWS task is adopted from SIGHAN bakeoffs, including test Precision (P), test Recall (R), F1 measure score (F) and test Recall of Out-of-Vocabulary (R OOV ). Their formulae are list as follows.",
"html": null,
"type_str": "table"
},
"TABREF5": {
"content": "<table><tr><td colspan=\"3\">Corpus Feature C P</td><td>C R</td><td>F</td><td>R OOV C Roov</td></tr><tr><td>AS</td><td>6-tag</td><td colspan=\"3\">\u00b10.00125 \u00b10.00114 .955</td><td>.726 \u00b10.01164</td></tr><tr><td/><td>CNG</td><td colspan=\"3\">\u00b10.00124 \u00b10.00113 .955</td><td>.730 \u00b10.01159</td></tr><tr><td/><td>AVS</td><td colspan=\"3\">\u00b10.00120 \u00b10.00109 .958</td><td>.738 \u00b10.01147</td></tr><tr><td/><td>TCF</td><td colspan=\"3\">\u00b10.00126 \u00b10.00117 .953</td><td>.760 \u00b10.01114</td></tr><tr><td/><td>TCB</td><td colspan=\"3\">\u00b10.00123 \u00b10.00113 .956</td><td>.740 \u00b10.01145</td></tr><tr><td/><td colspan=\"4\">AVS+TCF \u00b10.00123 \u00b10.00113 .956</td><td>.751 \u00b10.01128</td></tr><tr><td/><td colspan=\"4\">AVS+TCB \u00b10.00120 \u00b10.00109 .958</td><td>.739 \u00b10.01147</td></tr><tr><td colspan=\"2\">CityU 6-tag</td><td colspan=\"3\">\u00b10.00219 \u00b10.00221 .948</td><td>.738 \u00b10.01536</td></tr><tr><td/><td>CNG</td><td colspan=\"3\">\u00b10.00207 \u00b10.00215 .953</td><td>.760 \u00b10.01493</td></tr><tr><td/><td>AVS</td><td colspan=\"3\">\u00b10.00199 \u00b10.00203 .957</td><td>.766 \u00b10.01480</td></tr><tr><td/><td>TCF</td><td colspan=\"3\">\u00b10.00208 \u00b10.00214 .953</td><td>.767 \u00b10.01478</td></tr><tr><td/><td>TCB</td><td colspan=\"3\">\u00b10.00209 \u00b10.00214 .953</td><td>.770 \u00b10.01470</td></tr><tr><td/><td colspan=\"4\">AVS+TCF \u00b10.00197 \u00b10.00200 .959</td><td>.777 \u00b10.01455</td></tr><tr><td/><td colspan=\"4\">AVS+TCB \u00b10.00207 \u00b10.00213 .953</td><td>.771 \u00b10.01469</td></tr><tr><td colspan=\"2\">MSR 6-tag</td><td colspan=\"3\">\u00b10.00100 \u00b10.00105 .971</td><td>.776 \u00b10.01405</td></tr><tr><td/><td>CNG</td><td colspan=\"3\">\u00b10.00100 \u00b10.00104 .972</td><td>.784 \u00b10.01387</td></tr><tr><td/><td>AVS</td><td colspan=\"3\">\u00b10.00099 \u00b10.00099 .973</td><td>.764 \u00b10.01432</td></tr><tr><td/><td>TCF</td><td 
colspan=\"3\">\u00b10.00099 \u00b10.00104 .972</td><td>.786 \u00b10.01384</td></tr><tr><td/><td>TCB</td><td colspan=\"3\">\u00b10.00099 \u00b10.00104 .972</td><td>.787 \u00b10.01381</td></tr><tr><td/><td colspan=\"4\">AVS+TCF \u00b10.00107 \u00b10.00114 .967</td><td>.793 \u00b10.01367</td></tr><tr><td/><td colspan=\"4\">AVS+TCB \u00b10.00101 \u00b10.00102 .972</td><td>.769 \u00b10.01422</td></tr><tr><td colspan=\"2\">PKU 6-tag</td><td colspan=\"3\">\u00b10.00139 \u00b10.00159 .939</td><td>.680 \u00b10.01140</td></tr><tr><td/><td>CNG</td><td colspan=\"3\">\u00b10.00139 \u00b10.00160 .938</td><td>.671 \u00b10.01149</td></tr><tr><td/><td>AVS</td><td colspan=\"3\">\u00b10.00132 \u00b10.00146 .947</td><td>.740 \u00b10.01072</td></tr><tr><td/><td>TCF</td><td colspan=\"3\">\u00b10.00138 \u00b10.00155 .941</td><td>.701 \u00b10.01119</td></tr><tr><td/><td>TCB</td><td colspan=\"3\">\u00b10.00139 \u00b10.00159 .939</td><td>.688 \u00b10.01133</td></tr><tr><td/><td colspan=\"4\">AVS+TCF \u00b10.00137 \u00b10.00155 .941</td><td>.709 \u00b10.01110</td></tr><tr><td/><td colspan=\"4\">AVS+TCB \u00b10.00132 \u00b10.00147 .947</td><td>.743 \u00b10.01067</td></tr></table>",
"num": null,
"text": "Performance Comparison After Post-processing",
"html": null,
"type_str": "table"
}
}
}
}