{
"paper_id": "O10-1010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:06:46.576176Z"
},
"title": "Term Contributed Boundary Feature using Conditional Random Fields for Chinese Word Segmentation Task",
"authors": [
{
"first": "Tian-Jian",
"middle": [],
"last": "Jiang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing-Hua University",
"location": {}
},
"email": "tmjiang@iis.sinica.edu.tw"
},
{
"first": "Shih-Hung",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Taiwan University",
"location": {}
},
"email": ""
},
{
"first": "",
"middle": [],
"last": "Cheng-Lung",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Taiwan University",
"location": {}
},
"email": ""
},
{
"first": "Wen-Lian",
"middle": [],
"last": "Hsu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing-Hua University",
"location": {}
},
"email": "hsu@iis.sinica.edu.tw"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper proposes a novel feature for the conditional random field (CRF) model in a Chinese word segmentation system. The system uses a conditional random field as its machine learning model, with one simple feature called term contributed boundaries (TCB) in addition to the \"BIEO\" character-based label scheme. TCB can be extracted from unlabeled corpora automatically, and segmentation variations of different domains are expected to be reflected implicitly. The dataset used in this paper is from the closed training task of the CIPS-SIGHAN-2010 bakeoff, including simplified and traditional Chinese texts. The experimental results show that TCB improves \"BIEO\" tagging domain-independently by about 1% in F1 measure score.",
"pdf_parse": {
"paper_id": "O10-1010",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper proposes a novel feature for the conditional random field (CRF) model in a Chinese word segmentation system. The system uses a conditional random field as its machine learning model, with one simple feature called term contributed boundaries (TCB) in addition to the \"BIEO\" character-based label scheme. TCB can be extracted from unlabeled corpora automatically, and segmentation variations of different domains are expected to be reflected implicitly. The dataset used in this paper is from the closed training task of the CIPS-SIGHAN-2010 bakeoff, including simplified and traditional Chinese texts. The experimental results show that TCB improves \"BIEO\" tagging domain-independently by about 1% in F1 measure score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word segmentation is a trivial problem for most Western languages, since there are clear delimiters (e.g., spaces) between individual words. However, some Asian languages such as Chinese and Japanese do not have word delimiters, so the word segmentation problem must be solved before further language processing, e.g., information retrieval, summarization, and so on. Thus, Chinese word segmentation can be viewed as a fundamental problem for natural language processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Chinese word segmentation is still a challenging issue, and contests are held in the SIGHAN community [1] . The CIPS-SIGHAN-2010 bakeoff task of Chinese word segmentation focuses on cross-domain texts [2] . The design of the data set is particularly challenging: the domain-specific training corpora remain unlabeled, and two of the test corpora keep their domains unknown before release; therefore it is not easy to apply ordinary machine learning approaches, especially for the closed training evaluations.",
"cite_spans": [
{
"start": 102,
"end": 105,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 204,
"end": 207,
"text": "[2]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The traditional approach to Chinese word segmentation adopts a dictionary along with many rules to segment unlabeled texts [3] . In recent years, statistical machine learning models such as the Hidden Markov Model (HMM) [4] , the Maximum Entropy Markov Model (MEMM) [5] and the Conditional Random Field (CRF) [6] have shown good performance on sequential labeling problems, with CRF in particular achieving better outcomes.",
"cite_spans": [
{
"start": 138,
"end": 141,
"text": "[3]",
"ref_id": "BIBREF2"
},
{
"start": 233,
"end": 236,
"text": "[4]",
"ref_id": "BIBREF3"
},
{
"start": 275,
"end": 278,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 314,
"end": 317,
"text": "[6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In this paper we propose a novel feature named term contributed boundary (TCB) for CRF model training. Since term contributed boundary extraction [10] is unsupervised, it is suitable for the closed training task, in which no external resource or extra knowledge is allowed.",
"cite_spans": [
{
"start": 146,
"end": 150,
"text": "[10]",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Without proper knowledge, the closed task of word segmentation can be hard when out-of-vocabulary (OOV) sequences occur; in such cases, TCB extracted directly from the test data may help.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "We also compare the different character-based label schemes \"BI\", \"BIO\" and \"BIEO\" for model training. \"B,\" \"I,\" \"E\" and \"O\" denote the beginning of a word, the inside of a word, the end of a word and a single-character word, respectively. The character-based \"BIO\" tagging of Conditional Random Fields has been widely used in Chinese word segmentation recently [11, 12, 13] . Our experiments show that the \"BIEO\" scheme performs better than \"BI\" and \"BIO\".",
"cite_spans": [
{
"start": 352,
"end": 356,
"text": "[11,",
"ref_id": "BIBREF10"
},
{
"start": 357,
"end": 360,
"text": "12,",
"ref_id": "BIBREF11"
},
{
"start": 361,
"end": 364,
"text": "13]",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The layout of this paper is as follows. We briefly introduce CRF in Section 2. The novel feature, term contributed boundary, is presented in Section 3. Section 4 describes the data set and experimental results with error analysis. The conclusion is given in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Conditional random fields (CRF) are undirected graphical models trained to maximize a conditional probability of random variables X and Y, and the concept is well established for sequential labeling problems [6] . Given an input sequence (or observation sequence) X and a label sequence Y, a conditional probability of a linear-chain CRF with parameters",
"cite_spans": [
{
"start": 206,
"end": 209,
"text": "[6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2."
},
{
"text": "\\Lambda = \\{\\lambda_1, \\ldots, \\lambda_n\\}, where X = x_1 \\cdots x_T and Y = y_1 \\cdots y_T,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2."
},
{
"text": "can be defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2."
},
{
"text": "P(Y|X) = \\frac{1}{Z_X} \\exp \\left( \\sum_{t=1}^{T} \\sum_{k} \\lambda_k f_k(y_{t-1}, y_t, X, t) \\right) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2."
},
{
"text": "where Z_X is the normalization constant that makes the probabilities of all label sequences sum to one, and f_k is a feature function, which is often binary valued but can be real valued. Given such a model as defined in Eq. 1, the most probable label sequence for an input sequence X is as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2."
},
{
"text": "y^* = \\arg\\max_{Y} P_{\\Lambda}(Y|X) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2."
},
{
"text": "Eq. 2 can be efficiently calculated by dynamic programming using the Viterbi algorithm. More details about the concepts of CRF and parameter learning can be found in [7] . Figure 1 shows CRF tagging based on \"BIEO\" label training in the test phase, given unsegmented input.",
"cite_spans": [
{
"start": 165,
"end": 168,
"text": "[7]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 171,
"end": 179,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2."
},
{
"text": "Word boundaries and word frequencies are the standard notions of frequency in corpus-based natural language processing, but they lack correct information about the actual boundary and frequency of a phrase's occurrence. The distortion of phrase boundaries and frequencies was first observed in the Vodis Corpus when the bigram \"RAIL ENQUIRIES\" and the trigram \"BRITISH RAIL ENQUIRIES\" were examined and reported by O'Boyle [8] . Both occur 73 times, which is a large number for such a small corpus.",
"cite_spans": [
{
"start": 427,
"end": 430,
"text": "[8]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Term Contributed Boundary",
"sec_num": "3."
},
{
"text": "\"ENQUIRIES\" follows \"RAIL\" with a very high probability when it is preceded by \"BRITISH.\" However, when \"RAIL\" is preceded by words other than \"BRITISH,\" \"ENQUIRIES\" does not occur, but words like \"TICKET\" or \"JOURNEY\" may. Thus, the bigram \"RAIL ENQUIRIES\" gives a misleading probability that \"RAIL\" is followed by \"ENQUIRIES\" irrespective of what precedes it. This problem happens not only with word-token corpora but also with corpora in which all the compounds are tagged as units, since overlapping N-grams still appear; therefore, corresponding solutions such as that of Zhang et al.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Contributed Boundary",
"sec_num": "3."
},
{
"text": "were proposed [9] .",
"cite_spans": [
{
"start": 14,
"end": 17,
"text": "[9]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Term Contributed Boundary",
"sec_num": "3."
},
{
"text": "We use a suffix array algorithm to calculate the exact boundaries of phrases and their frequencies [10] , called term contributed boundaries (TCB) and term contributed frequencies (TCF), respectively, named by analogy with term frequencies (TF).",
"cite_spans": [
{
"start": 93,
"end": 97,
"text": "[10]",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Term Contributed Boundary",
"sec_num": "3."
},
{
"text": "For example, in Vodis Corpus, the original TF of the term \"RAIL ENQUIRIES\" is 73.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Contributed Boundary",
"sec_num": "3."
},
{
"text": "However, the actual TCF of \"RAIL ENQUIRIES\" is 0, since all of its frequency counts are contributed by the term \"BRITISH RAIL ENQUIRIES\". In this case, we can see that \"BRITISH RAIL ENQUIRIES\" is really the more frequent term in the corpus, whereas \"RAIL ENQUIRIES\" is not. Hence the TCB of \"BRITISH RAIL ENQUIRIES\" is ready for CRF tagging as \"BRITISH/TB RAIL/TI ENQUIRIES/TI,\" where \"TB\" marks the beginning of the term contributed boundary and \"TI\" marks the remaining positions within it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Contributed Boundary",
"sec_num": "3."
},
{
"text": "Similar problems occur in Chinese, as Lin and Yu reported [14, 15] . Consider the following Chinese text: \"\u81ea\u7136\u79d1\u5b78\u7684\u91cd\u8981\" (the importance of natural science) and",
"cite_spans": [
{
"start": 61,
"end": 65,
"text": "[14,",
"ref_id": "BIBREF13"
},
{
"start": 66,
"end": 69,
"text": "15]",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Term Contributed Boundary",
"sec_num": "3."
},
{
"text": "\"\u81ea\u7136\u79d1\u5b78\u7684\u7814\u7a76\u662f\u552f\u4e00\u7684\u9014\u5f91\" (the research on natural science is the only way).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Contributed Boundary",
"sec_num": "3."
},
{
"text": "In the above text, there are many string patterns that appear more than once. Some patterns are listed as follows: \"\u81ea\u7136\u79d1\u5b78\" (natural science) and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Contributed Boundary",
"sec_num": "3."
},
{
"text": "\"\u81ea\u7136\u79d1\u5b78\u7684\" (of natural science).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Contributed Boundary",
"sec_num": "3."
},
{
"text": "They suggested that it is very unlikely that a random meaningless string will appear more than once in a corpus. The main idea is that if a Chinese string pattern appears two or more times in the text, then it may be useful. However, not all patterns that appear two or more times are useful; in the above text, the pattern \"\u7136\u79d1\" has no meaning. Therefore they proposed a method that is divided into two steps. The first step is to search through all the characters in the corpus to find patterns that appear more than once. Such patterns are gathered into a database called MayBe, which means these patterns may be \"Chinese Frequent Strings\" as they defined. The entries in MayBe consist of strings and their numbers of occurrences. The second step is to find the net frequency of occurrence of each entry in the above database. The net frequency of occurrence of an entry is the number of appearances that do not depend on other super-strings. For example, if the content of a text is \"\u81ea\u7136\u79d1\u5b78\uff0c\u81ea\u7136\u79d1\u5b78\" (natural science, natural science), then the net frequency of occurrence of \"\u81ea\u7136\u79d1\u5b78\" is 2, and the net frequency of occurrence of \"\u81ea\u7136\u79d1\" is zero, since the string \"\u81ea\u7136\u79d1\" is brought about by the string \"\u81ea\u7136\u79d1\u5b78.\" They exclude the appearances of patterns that are brought about by others; hence their method is actually equivalent to the suffix array algorithm we apply for calculating TCB and TCF, and the annotated input string for CRF will be \"\u81ea/TB \u7136/TI \u79d1/TI \u5b78/TI\". Figure 2 demonstrates one labeled phrase from the training data.",
"cite_spans": [],
"ref_spans": [
{
"start": 1496,
"end": 1504,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Term Contributed Boundary",
"sec_num": "3."
},
{
"text": "The evaluation metrics of the word segmentation task are precision (P), recall (R), F1 measure (F) and OOV recall (OOV), which are defined below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.2"
},
{
"text": "Precision is the percentage of segmented words that are correctly segmented; recall is the percentage of words in the reference that are correctly segmented; F1 measure = 2 \u00d7 P \u00d7 R / (P + R); OOV recall is the percentage of OOV words in the reference that are correctly segmented. In this section, we evaluate the performance of the term contributed boundary as a feature in CRF model training. The label scheme \"BI\" on the ground truth is treated as the baseline for comparison with TCB features, whose label scheme is also \"BI\". The experiments we have done are shown in Table 2, Table 3a and Table 3b. Table 2 indicates that F1 measure scores can be improved by TCB by about 1%, domain-independently. Table 3a and Table 3b suggest that the major contribution to performance comes from TCB extracted from each test corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 336,
"end": 356,
"text": "Table 2 and Table 3a",
"ref_id": null
},
{
"start": 361,
"end": 368,
"text": "Table 2",
"ref_id": null
},
{
"start": 457,
"end": 478,
"text": "Table 3a and Table 3b",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.2"
},
{
"text": "In order to deal with English words, we apply post-processing to the segmented data. It simply recovers alphanumeric sequences according to their original segments in the training data. Table 4 shows the experiment results after post-processing. In this section we combine the TCB feature with \"BIEO\" to compare with \"BIO\". Table 5a and Table 5b show the experimental results. We find that our TCB feature is robust and is not affected by different label schemes. This meets our conjecture, and the experiments fit our expectations. For the sake of consistency, we do extra experiments using the label schemes \"BIEO\" or \"BIO\" to label TCB features, and denote them as TE-TCB and TO-TCB. In these schemes, \"TB,\" \"TI,\" \"TE\" and \"TO\" are tags for the head of a TCB, the middle of a TCB, the tail of a TCB, and a single-character TCB, respectively. TE-TCB uses all tags but TO-TCB excludes the tag \"TE.\" Table 6a and Table 6b compare TCB, TO-TCB and TE-TCB for the Simplified Chinese test set. For example, the phrase \"\u670d\u7528/simvastatin/ (/statins \u985e/\u7684/\u4e00/\u7a2e/),\" where '/' represents the word boundary, from domain C of the test data cannot be recognized by the \"BIEO\" and/or TCB tagging approaches, nor recovered by post-processing. This is the reason why Table 4 ",
"cite_spans": [],
"ref_spans": [
{
"start": 186,
"end": 193,
"text": "Table 4",
"ref_id": null
},
{
"start": 323,
"end": 331,
"text": "Table 5a",
"ref_id": null
},
{
"start": 336,
"end": 344,
"text": "Table 5b",
"ref_id": null
},
{
"start": 919,
"end": 927,
"text": "Table 6a",
"ref_id": null
},
{
"start": 932,
"end": 940,
"text": "Table 6a",
"ref_id": null
},
{
"start": 1265,
"end": 1272,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.2"
},
{
"text": "This paper introduces a simple CRF feature called term contributed boundaries (TCB) for",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5."
},
{
"text": "Chinese word segmentation. The experimental results show that it can improve the basic \"BIEO\" tagging scheme by about 1% in F1 measure score, domain-independently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5."
},
{
"text": "Further tagging schemes for non-Chinese characters are desired for recognizing more sophisticated gold standards of Chinese word segmentation that concatenate alphanumeric characters to Chinese characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5."
}
],
"back_matter": [
{
"text": "The most significant type of error in our results is unintentionally segmented English words. Rather than developing another set of tags for English alphabets, we apply post-processing to fix this problem under the restriction of closed training, by using only alphanumeric character information. Table 4 compares the F1 measure scores of the Simplified Chinese experiment results before and after the post-processing. The major difference between the gold standards of the Simplified Chinese corpora and the Traditional Chinese corpora concerns non-Chinese characters. All of the alphanumeric and punctuation sequences are separated from Chinese sequences in the Simplified Chinese corpora, but can be part of Chinese word segments in the Traditional Chinese corpora.",
"cite_spans": [],
"ref_spans": [
{
"start": 296,
"end": 303,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.3"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "SIGHAN",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "SIGHAN, http://sighan.cs.uchicago.edu/",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Introduction to CKIP Chinese Word Segmentation System for the First International Chinese Word Segmentation Bakeoff",
"authors": [
{
"first": "Wei-Yun",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Keh-Jiann",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACL, Second SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "168--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ma, Wei-Yun and Keh-Jiann Chen, \"Introduction to CKIP Chinese Word Segmentation System for the First International Chinese Word Segmentation Bakeoff,\" in Proceedings of ACL, Second SIGHAN Workshop on Chinese Language Processing, pp. 168-171, 2003.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition",
"authors": [
{
"first": "Lawrence",
"middle": [
"R"
],
"last": "Rabiner",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the IEEE",
"volume": "77",
"issue": "",
"pages": "257--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence R. Rabiner, \"A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition,\" in Proceedings of the IEEE, Vol. 77, No. 2, pp. 257-286, 1989.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Maximum Entropy Markov Models for Information Extraction and Segmentation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McCallum, A., Freitag, D. & Pereira, F., \"Maximum Entropy Markov Models for Information Extraction and Segmentation,\" in Proceedings of ICML, 2000.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "591--598",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lafferty, Andrew McCallum, and Fernando Pereira, \"Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data,\" in Proceedings of ICML, pp. 591-598, 2001.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Conditional Random Fields: An Introduction",
"authors": [
{
"first": "M",
"middle": [],
"last": "Hanna",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wallach",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanna M. Wallach, \"Conditional Random Fields: An Introduction,\" Technical Report MS-CIS-04-21, Department of Computer and Information Science, University of Pennsylvania, 2004.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A Study of an N-Gram Language Model for Speech Recognition",
"authors": [
{
"first": "O'",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Boyle",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter O'Boyle, A Study of an N-Gram Language Model for Speech Recognition, PhD thesis, Queen's University Belfast, 1993",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Subword-Based Tagging by Conditional Random Fields for Chinese Word Segmentation",
"authors": [
{
"first": "Ruiqiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Genichiro",
"middle": [],
"last": "Kikui",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL",
"volume": "",
"issue": "",
"pages": "193--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruiqiang Zhang, Genichiro Kikui, and Eiichiro Sumita, \"Subword-Based Tagging by Conditional Random Fields for Chinese Word Segmentation,\" in Proceedings of the Human Language Technology Conference of the NAACL, pp. 193-196, New York, USA, 2006.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Compute the Term Contributed Frequency",
"authors": [
{
"first": "Cheng-Lung",
"middle": [],
"last": "Sung",
"suffix": ""
},
{
"first": "Wen-Lian",
"middle": [],
"last": "Hsu-Chun Yen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hsu",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Eighth International Conference on Intelligent Systems Design and Applications",
"volume": "",
"issue": "",
"pages": "325--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cheng-Lung Sung, Hsu-Chun Yen, and Wen-Lian Hsu, \"Compute the Term Contributed Frequency,\" in Proceedings of the 2008 Eighth International Conference on Intelligent Systems Design and Applications, pp. 325-328, Washington, D.C., USA, 2008.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Chinese segmentation and new word detection using conditional random fields",
"authors": [
{
"first": "Fuchun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of Coling-2004",
"volume": "",
"issue": "",
"pages": "562--568",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fuchun Peng and Andrew McCallum, \"Chinese segmentation and new word detection using conditional random fields,\" in Proceedings of Coling-2004, pp. 562-568, Geneva, Switzerland, 2004",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A Conditional Random Field Word Segmenter for SIGHAN Bakeoff",
"authors": [
{
"first": "Huihsin",
"middle": [],
"last": "Tseng",
"suffix": ""
},
{
"first": "Pichuan",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Galen",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning, \"A Conditional Random Field Word Segmenter for SIGHAN Bakeoff 2005,\" in Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, Jeju, Korea, 2005.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Chinese Word Segmentation as LMR Tagging",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Libin",
"middle": [],
"last": "Shen",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Second SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianwen Xue and Libin Shen, \"Chinese Word Segmentation as LMR Tagging,\" in Proceedings of the Second SIGHAN Workshop on Chinese Language Processing, 2003.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Extracting Chinese Frequent Strings without a Dictionary from a Chinese Corpus and its Applications",
"authors": [
{
"first": "Yih-Jeng",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Ming-Shing",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2001,
"venue": "Journal of Information Science and Engineering",
"volume": "17",
"issue": "",
"pages": "805--824",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yih-Jeng Lin and Ming-Shing Yu, \"Extracting Chinese Frequent Strings without a Dictionary from a Chinese Corpus and its Applications,\" Journal of Information Science and Engineering, Vol. 17, pp. 805-824, 2001.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The Properties and Further Applications of Chinese Frequent Strings",
"authors": [
{
"first": "Yih-Jeng",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Ming-Shing",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics and Chinese Language Processing",
"volume": "9",
"issue": "",
"pages": "113--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yih-Jeng Lin and Ming-Shing Yu, \"The Properties and Further Applications of Chinese Frequent Strings,\" Computational Linguistics and Chinese Language Processing, Vol. 9, No. 1, pp. 113-128, February 2004.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "\\lambda_k is a learned weight associated with feature f_k. The feature functions can measure any aspect of a state transition from y_{t-1} to y_t and the entire observation sequence X, centered at the current position t.",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "Illustration of CRF prediction",
"uris": null
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"text": "Example of training data for CRF with \"BI\" and TCB. 4. Experiments. 4.1 Dataset. The corpora used in this paper are from the CIPS-SIGHAN-2010 bakeoff dataset, which contains simplified Chinese (SC) and traditional Chinese (TC) texts. There are two types of training corpus for each language: labeled training data (Chinese text that has been segmented into words) and unlabeled training data. The unlabeled corpus used in this bakeoff task covers two domains: literature and computer science. The corpus for each domain is a pure text file containing about 100,000 Chinese characters. The test corpus covers four domains, two of which are literature (denoted as Test-A) and computer science (denoted as Test-B); the other two domains are medicine (denoted as Test-C) and finance (denoted as Test-D).",
"uris": null
},
"TABREF0": {
"type_str": "table",
"text": "Experiments of Comparison with BI, BIO and BIEO. Experiments here evaluate the performance of three different label schemes, \"BI\", \"BIO\" and \"BIEO\", for two types (SC and TC) in four domains (Test-A, Test-B, Test-C and Test-D). The results are shown in Table 1. The scheme \"BIEO\" outperforms \"BI\" and \"BIO\" on F1 measure, except at SC-Test-B. Domain B is computer science, and its test data contains many English words. At the end of Section 4.2.2, we will deal with this problem using post-processing. Table 1. Comparison of BI, BIO and BIEO. 4.2.2 Term Contributed Boundary Experiments with BI as Baseline",
"num": null,
"html": null,
"content": "<table><tr><td/><td/><td colspan=\"2\">the</td><td colspan=\"4\">words that words of number of number the</td><td>segmented segmented correctly are that are</td><td>\u00d7</td><td>% 100</td></tr><tr><td>\u2022 Recall =</td><td>the</td><td colspan=\"4\">of words number of number the</td><td colspan=\"2\">reference segmented correctly in the are words that</td><td>\u00d7</td><td>% 100</td></tr><tr><td colspan=\"7\">\u2022 F1 measure = 2 \u00d7 P \u00d7 R / (P + R)</td></tr><tr><td colspan=\"3\">\u2022 OOV Recall =</td><td colspan=\"2\">the</td><td colspan=\"2\">of OOV number of number the</td><td>OOV words</td><td>reference segmented correctly in the are words that</td><td>\u00d7</td><td>% 100</td></tr></table>"
},
"TABREF1": {
"type_str": "table",
"text": "The configuration is about the trade-off between data sparseness and domain fitness. For the sake of the OOV issue, TCBs from all the training and test corpora are included in the configuration of the results. For potentially better consistency with different types of text, TCBs from the training corpora and/or test corpora are grouped by the corresponding domains of the test corpora. Table 2, Table 3a and Table 3b provide the details, where the baseline is the character-based \"BI\" tagging, and the others are \"BI\" with different additional TCB configurations: TCB all stands for the TCB extracted from all training data and all test data; TCB a , TCB b , TCB ta , TCB tb , TCB tc , Table 3b. Traditional Chinese Domain-specific TCB vs. TCB all",
"num": null,
"html": null,
"content": "<table><tr><td/><td/><td/><td>F</td><td/><td>OOV</td></tr><tr><td colspan=\"2\">SC-Test-A TCB ta</td><td/><td>0.918</td><td/><td>0.690</td></tr><tr><td/><td>TCB a</td><td/><td>0.917</td><td/><td>0.679</td></tr><tr><td/><td colspan=\"2\">TCB ta + TCB a</td><td>0.917</td><td/><td>0.690</td></tr><tr><td/><td>TCB all</td><td/><td>0.919</td><td/><td>0.699</td></tr><tr><td colspan=\"2\">SC-Test-B TCB tb</td><td/><td>0.832</td><td/><td>0.465</td></tr><tr><td/><td>TCB b</td><td/><td>0.828</td><td/><td>0.453</td></tr><tr><td/><td colspan=\"2\">TCB tb + TCB b</td><td>0.830</td><td/><td>0.459</td></tr><tr><td/><td>TCB all</td><td/><td>0.836</td><td/><td>0.456</td></tr><tr><td colspan=\"2\">SC-Test-C TCB tc</td><td/><td>0.897</td><td/><td>0.618</td></tr><tr><td/><td>TCB all</td><td/><td>0.898</td><td/><td>0.699</td></tr><tr><td colspan=\"2\">SC-Test-D TCB td</td><td/><td>0.905</td><td/><td>0.557</td></tr><tr><td/><td>TCB all</td><td/><td>0.910</td><td/><td>0.562</td></tr><tr><td colspan=\"6\">TCB td represents TCB extracted from the training corpus A, B, and the test corpus A, B, C, D, Table 3a. Simplified Chinese Domain-specific TCB vs. 
TCB all</td></tr><tr><td>respectively.</td><td/><td/><td/><td/></tr><tr><td colspan=\"2\">TC-Test-A TCB ta</td><td>R</td><td>P 0.889 F</td><td>F</td><td>OOV OOV 0.706</td></tr><tr><td>SC-Test-A BI</td><td>TCB a</td><td colspan=\"4\">0.896 0.907 0.901 0.508 0.888 0.690</td></tr><tr><td/><td colspan=\"2\">TCB ta + TCB a</td><td>0.889</td><td/><td>0.710</td></tr><tr><td/><td>TCB all</td><td/><td>0.881</td><td/><td>0.670</td></tr><tr><td colspan=\"2\">TC-Test-B TCB tb</td><td/><td>0.911</td><td/><td>0.636</td></tr><tr><td/><td>TCB b</td><td/><td>0.921</td><td/><td>0.696</td></tr><tr><td/><td colspan=\"2\">TCB tb + TCB b</td><td>0.912</td><td/><td>0.641</td></tr><tr><td/><td>TCB all</td><td/><td>0.915</td><td/><td>0.663</td></tr><tr><td colspan=\"2\">TC-Test-C TCB tc</td><td/><td>0.918</td><td/><td>0.705</td></tr><tr><td/><td>TCB all</td><td/><td>0.908</td><td/><td>0.668</td></tr><tr><td colspan=\"2\">TC-Test-D TCB td</td><td/><td>0.927</td><td/><td>0.717</td></tr><tr><td/><td>TCB all</td><td/><td>0.925</td><td/><td>0.732</td></tr></table>"
},
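As a concrete illustration of the character-based label schemes compared above, the sketch below emits "BIEO" labels for a segmented sentence. The mapping we assume (B = word-begin, I = word-internal, E = word-end, O = single-character word, by analogy with the common BIES scheme) is our reading of the scheme, not a definition quoted from the paper.

```python
# Hedged sketch of "BIEO" character labeling for Chinese word segmentation.
# Assumed mapping (not quoted from the paper): B = word-begin,
# I = word-internal, E = word-end, O = single-character word.

def bieo_labels(words):
    """Map a list of segmented words to one label per character."""
    labels = []
    for w in words:
        if len(w) == 1:
            labels.append("O")
        else:
            labels.extend(["B"] + ["I"] * (len(w) - 2) + ["E"])
    return labels
```

The "BI" baseline drops E and O (every non-initial character is I), and "BIO" keeps only the single-character tag; both are obtained by collapsing labels in the function above.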
"TABREF2": {
"type_str": "table",
"text": "The performance has been improved, especially on the domain B of computer science, since its data consists of a lot of technical terms in English.",
"num": null,
"html": null,
"content": "<table><tr><td/><td colspan=\"2\">F1 measure score</td></tr><tr><td/><td>Before</td><td>After</td></tr><tr><td>SC-A BIO</td><td>0.911</td><td>0.918</td></tr><tr><td>BI</td><td>0.901</td><td>0.908</td></tr><tr><td>TCB ta</td><td>0.918</td><td>0.920</td></tr><tr><td>TCB ta + TCB a</td><td>0.917</td><td>0.920</td></tr><tr><td>TCB all</td><td>0.919</td><td>0.921</td></tr><tr><td>SC-B BIO</td><td>0.831</td><td>0.920</td></tr><tr><td>BI</td><td>0.805</td><td>0.910</td></tr><tr><td>TCB tb</td><td>0.832</td><td>0.917</td></tr><tr><td>TCB tb + TCB b</td><td>0.830</td><td>0.916</td></tr><tr><td>TCB all</td><td>0.836</td><td>0.916</td></tr><tr><td>SC-C BIO</td><td>0.897</td><td>0.904</td></tr><tr><td>BI</td><td>0.887</td><td>0.896</td></tr><tr><td>TCB tc</td><td>0.897</td><td>0.901</td></tr><tr><td>TCB all</td><td>0.898</td><td>0.902</td></tr><tr><td>SC-D BIO</td><td>0.901</td><td>0.919</td></tr><tr><td>BI</td><td>0.890</td><td>0.908</td></tr><tr><td>TCB td</td><td>0.905</td><td>0.915</td></tr><tr><td>TCB all</td><td>0.908</td><td>0.918</td></tr><tr><td colspan=\"3\">Table 4. F1 scores before and after the English problem fixed</td></tr><tr><td colspan=\"3\">4.2.3 Term Contributed Boundary Experiments with BIO and BIEO</td></tr></table>"
},
"TABREF4": {
"type_str": "table",
"text": "show the comparisons between the original TCB, TO-TCB and TE-TCB. The result suggests that TO-TCB and TE-TCB may not have stable and significant improvements to the original TCB scheme that consists of only \"TB\" and \"TI.\" We suspect that it is because single character words of TCB and the tail character of TCB sometimes conflict with the word boundaries of gold standard, after all the concept of TCB is from suffix pattern, not from linguistic design.",
"num": null,
"html": null,
"content": "<table><tr><td/><td>F</td><td>OOV</td></tr><tr><td>SC-Test-A BIEO, TCB a</td><td>0.931</td><td>0.720</td></tr><tr><td colspan=\"2\">BIEO, TO-TCB a 0.929</td><td>0.719</td></tr><tr><td>BIEO, TE-TCB a</td><td>0.932</td><td>0.719</td></tr><tr><td>BIEO, TCB all</td><td>0.931</td><td>0.723</td></tr><tr><td colspan=\"2\">BIEO, TO-TCB all 0.929</td><td>0.719</td></tr><tr><td colspan=\"2\">BIEO, TE-TCB all 0.931</td><td>0.719</td></tr><tr><td>SC-Test-B BIEO, TCB b</td><td>0.840</td><td>0.473</td></tr><tr><td>BIEO, TO-TCB b</td><td>0.841</td><td>0.473</td></tr><tr><td>BIEO, TE-TCB b</td><td>0.838</td><td>0.475</td></tr><tr><td>BIEO, TCB all</td><td>0.833</td><td>0.451</td></tr><tr><td colspan=\"2\">BIEO, TO-TCB all 0.835</td><td>0.443</td></tr><tr><td colspan=\"2\">BIEO, TE-TCB all 0.838</td><td>0.455</td></tr><tr><td>SC-Test-C BIEO, TCB c</td><td>0.911</td><td>0.651</td></tr><tr><td>BIEO, TO-TCB c</td><td>0.910</td><td>0.655</td></tr><tr><td>BIEO, TE-TCB c</td><td>0.913</td><td>0.665</td></tr><tr><td>BIEO, TCB all</td><td>0.912</td><td>0.636</td></tr><tr><td colspan=\"2\">BIEO, TO-TCB all 0.906</td><td>0.599</td></tr><tr><td colspan=\"2\">BIEO, TE-TCB all 0.909</td><td>0.625</td></tr><tr><td>SC-Test-D BIEO, TCB d</td><td>0.923</td><td>0.631</td></tr><tr><td>BIEO, TO-TCB d</td><td>0.916</td><td>0.605</td></tr><tr><td>BIEO, TE-TCB d</td><td>0.925</td><td>0.643</td></tr><tr><td>BIEO, TCB all</td><td>0.923</td><td>0.613</td></tr><tr><td colspan=\"2\">BIEO, TO-TCB all 0.921</td><td>0.592</td></tr><tr><td colspan=\"2\">BIEO, TE-TCB all 0.923</td><td>0.612</td></tr></table>"
},
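The "TB"/"TI" TCB tagging discussed above can be sketched as a per-character feature column for the CRF: characters covered by a TCB match get "TB" on the first character and "TI" elsewhere. The forward-maximum-matching strategy and the default "TI" for uncovered characters are our assumptions for illustration; the paper does not specify its exact matching procedure here.

```python
# Hedged sketch: mark term-contributed boundaries as a CRF feature column.
# Assumptions (ours): forward maximum matching against the TCB lexicon,
# "TB" on the first character of a match, "TI" everywhere else.

def tcb_features(sentence, lexicon, max_len=6):
    """Return one TB/TI feature per character of `sentence`."""
    feats = ["TI"] * len(sentence)
    i = 0
    while i < len(sentence):
        # try the longest candidate first (forward maximum matching)
        for l in range(min(max_len, len(sentence) - i), 0, -1):
            if sentence[i:i + l] in lexicon:
                feats[i] = "TB"
                i += l
                break
        else:
            i += 1  # no match starting here
    return feats
```

In a CRF template, this column would be combined with the character identity features, which is how "BI" plus TCB differs from plain "BI" tagging.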
"TABREF5": {
"type_str": "table",
"text": "does not come along with Traditional Chinese experiment results. Some errors are due to inconsistencies in the gold standard of non-Chinese character, For example, in the Traditional Chinese corpora, some percentage digits are separated from their percentage signs, meanwhile those percentage signs are connected to parentheses right next to them.",
"num": null,
"html": null,
"content": "<table/>"
}
}
}
}