| { |
| "paper_id": "O16-2001", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:05:00.476565Z" |
| }, |
| "title": "A Segmentation Matrix Method for Chinese Segmentation Ambiguity Analysis", |
| "authors": [ |
| { |
| "first": "Yanping", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Guizhou University \uf02b Xi'an Jiaotong University", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Qinghua", |
| "middle": [], |
| "last": "Zheng", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Guizhou University \uf02b Xi'an Jiaotong University", |
| "location": {} |
| }, |
| "email": "qhzheng@mail.xjtu.edu.cn" |
| }, |
| { |
| "first": "\uf02b", |
| "middle": [], |
| "last": "Feng", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Guizhou University \uf02b Xi'an Jiaotong University", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Deli", |
| "middle": [], |
| "last": "Zheng", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Guizhou University \uf02b Xi'an Jiaotong University", |
| "location": {} |
| }, |
| "email": "delizheng.2009@gmail.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Chinese Segmentation Ambiguity (CSA) is a fundamental problem confronted when processing Chinese language, where a sentence can generate more than one segmentation paths. Two techniques are commonly used to identify CSA: Omni-segmentation and Bi-directional Maximum Matching (BiMM). Due to the high computational complexity, Omni-segmentation is difficult to be applied for big data. BiMM is easier to be implemented and has a higher speed. However, recall of BiMM is much lower. In this paper, a Segmentation Matrix (SM) method is presented, which encodes each sentence as a matrix, then maps string operation into set operations. To identify CSA, instead of scanning a whole sentence, only specific areas of the matrix are checked. SM has a computational complexity close to BiMM with recall the same as Omni-segmentation. In addition to CSA identification, SM also supports lexicon-based Chinese word segmentation. In our experiments, based on SM, several issues about CSA are explored. The result shows that SM is useful for CSA analysis.", |
| "pdf_parse": { |
| "paper_id": "O16-2001", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Chinese Segmentation Ambiguity (CSA) is a fundamental problem confronted when processing Chinese language, where a sentence can generate more than one segmentation paths. Two techniques are commonly used to identify CSA: Omni-segmentation and Bi-directional Maximum Matching (BiMM). Due to the high computational complexity, Omni-segmentation is difficult to be applied for big data. BiMM is easier to be implemented and has a higher speed. However, recall of BiMM is much lower. In this paper, a Segmentation Matrix (SM) method is presented, which encodes each sentence as a matrix, then maps string operation into set operations. To identify CSA, instead of scanning a whole sentence, only specific areas of the matrix are checked. SM has a computational complexity close to BiMM with recall the same as Omni-segmentation. In addition to CSA identification, SM also supports lexicon-based Chinese word segmentation. In our experiments, based on SM, several issues about CSA are explored. The result shows that SM is useful for CSA analysis.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Chinese characters are originated from hieroglyphic and written next to each other without delimiter in between. The lack of orthographic words makes Chinese word segmentation difficult. It is often that a Chinese sentence can be parsed into several segmentation paths, which results in the Chinese Segmentation Ambiguity (CSA) problem. It can be roughly classified into two categories: Overlapping Ambiguity (OA) and Combinational Ambiguity (CA) 1 (Liang, 1984; . For the OA problem, a sentence contains at least two 2 Yanping Chen et al. overlapped words. For example, \"\u6e29\u67d4\u548c\" contains two overlapped words: \"\u6e29\u67d4\" (Gentle) and \"\u67d4\u548c\" (Soft). The character \"\u67d4\" can be assembled with either \"\u6e29\" and \"\u548c\". Only one is suitable in a given context. In Chinese, every character can be either a morpheme or a word (Li, 2011) . Then, given a word containing more than one characters, whether it is appropriate to segment it will lead to the CA problem. For example, \"\u6e29\u67d4\" (Gentle) can be further segmented into \"\u6e29/\u67d4\" (Warm/ Soft).", |
| "cite_spans": [ |
| { |
| "start": 449, |
| "end": 462, |
| "text": "(Liang, 1984;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 803, |
| "end": 813, |
| "text": "(Li, 2011)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "In Chinese, the OA is prevalent. For example, in the Penn Chinese Treebank corpus, there are 39% sentences are identified with this ambiguity. Therefore, the OA is widely studied in this field. When the CA is under consideration, the problem is more serious. For example, in the lexicon of our experiments, 75.25% words have the CA problem, even using a loose definition (See Definition 4 of Section 3). Furthermore, CA and OA are not independent. They often co-occur within a sentence, which worsens the performance of Chinese word segmentation (Chen et al., 2012) . The problem is deteriorated by the fact that Chinese has a large number of characters and words 2 .", |
| "cite_spans": [ |
| { |
| "start": 546, |
| "end": 565, |
| "text": "(Chen et al., 2012)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "To identify CSA, two techniques are commonly used: Omni-segmentation and BiMM. Omni-segmentation tries to traverse every segmentation path in a sentence. All ambiguities can be identified. The problem of Omni-segmentation is that it has the highest computational and space complexities. For example, a sentence \"\u6c5f\u6cfd\u6c11\u5728\u5317\u4eac\u4eba\u6c11\u5927\u4f1a\u5802\u4f1a\u89c1\u53c2\u52a0\u5168\u56fd \u6cd5\u9662\u5de5\u4f5c\u4f1a\u8bae\u548c\u5168\u56fd\u6cd5\u9662\u7cfb\u7edf\u6253\u51fb\u7ecf\u6d4e\u72af\u7f6a\u5148\u8fdb\u96c6\u4f53\u8868\u5f70\u5927\u4f1a\u4ee3\u8868\u65f6\u8981\u6c42\u5927\u5bb6\u8981\u5145\u5206\u8ba4 \u8bc6\u6253\u51fb\u7ecf\u6d4e\u72af\u7f6a\u5de5\u4f5c\u7684\u8270\u5de8\u6027\u548c\u957f\u671f\u6027\" (Meanings of this sentence can be ignored 3 ) may generate 3,764,387,840 segmentation paths (Wang et al., 2004) . When a large-scale dataset is confronted, this method is difficult to be applied, unless additional information is available, e.g., statistic information (Wang et al., 2009) . BiMM is frequently adopted for identifying CSA (Li et al., 2003) . It is easier to be implemented and has a higher speed. The disadvantage of BiMM is that overlapping ambiguity strings with even length (counted by characters) cannot be identified 4 Chang et al., 2008) . Furthermore, BiMM only identifies MOAS 5 . Without addition information, it cannot find individual Overlapping Ambiguity String (OAS) in a sentence. Therefore, many studies are mainly focused on MOAS 2 Currently, more than 13000 characters and 69000 words are used by native Chinese people (http://www.cp.com.cn/). 3 In Beijing's Great Hall, when meeting representatives attending the national court and the national court system against economic crime on behalf of advanced collective awards ceremony, Zemin Jiang asks everyone to fully understand that the work of combating economic crime is arduous and long-term. 4 Section 2.2 gives an example of this combination pattern. 
5 A MOAS is an ambiguity string that no overlapping ambiguity is detected on both sides of the string.", |
| "cite_spans": [ |
| { |
| "start": 480, |
| "end": 499, |
| "text": "(Wang et al., 2004)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 656, |
| "end": 675, |
| "text": "(Wang et al., 2009)", |
| "ref_id": null |
| }, |
| { |
| "start": 725, |
| "end": 742, |
| "text": "(Li et al., 2003)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 927, |
| "end": 946, |
| "text": "Chang et al., 2008)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 1264, |
| "end": 1265, |
| "text": "3", |
| "ref_id": null |
| }, |
| { |
| "start": 1566, |
| "end": 1567, |
| "text": "4", |
| "ref_id": null |
| }, |
| { |
| "start": 1626, |
| "end": 1627, |
| "text": "5", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "A Segmentation Matrix Method for Chinese Segmentation Ambiguity Analysis 3 (Sun et al., 1999; Li et al., 2008; Qiao et al., 2008; Li, 2011) .", |
| "cite_spans": [ |
| { |
| "start": 75, |
| "end": 93, |
| "text": "(Sun et al., 1999;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 94, |
| "end": 110, |
| "text": "Li et al., 2008;", |
| "ref_id": null |
| }, |
| { |
| "start": 111, |
| "end": 129, |
| "text": "Qiao et al., 2008;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 130, |
| "end": 139, |
| "text": "Li, 2011)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formal definition can be seen in Definition 7.", |
| "sec_num": null |
| }, |
| { |
| "text": "In this article, a Segmentation Matrix (SM) method is presented. It encodes lexical information of a sentence as a matrix. Then, set theory is developed to analyze CSA. With the complexity closing to BiMM, SM can identify every type of CSA the same as Omni-segmentation. In addition to CSA identification, SM is also available for Chinese word segmentation. Several lexicon-based methods are fully supported. Making use of the SM method, in our experiments, characteristics of CSA are studied, which show informative conclusions of CSA.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formal definition can be seen in Definition 7.", |
| "sec_num": null |
| }, |
| { |
| "text": "The contribution of this paper includes, 1. A SM approach is proposed, which encodes lexical information in a structured form. SM can make better use of sentence structure information for CSA analysis.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formal definition can be seen in Definition 7.", |
| "sec_num": null |
| }, |
| { |
| "text": "2. Formal definitions about CSA are defined under the framework of set theory, which maps string operations into set operations reducing the computational complexity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formal definition can be seen in Definition 7.", |
| "sec_num": null |
| }, |
| { |
| "text": "3. Based on SM, characteristics of CSA are investigated. And several issues about CSA are studies.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formal definition can be seen in Definition 7.", |
| "sec_num": null |
| }, |
| { |
| "text": "The remainder of this paper is structured as follows: Section 2 reviews previous works. Section 3 gives formal definitions and notations about CSA. The SM method is discussed in Section 4. In Section 5, several issues about CSA are analyzed. The conclusion is given in Section 6.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formal definition can be seen in Definition 7.", |
| "sec_num": null |
| }, |
| { |
| "text": "Given a sentence, CSA is identified when more than one segmentation paths are found. Therefore, CSA identification and Chinese word segmentation are two aspects of a problem. In this section, we first give a simple overview about Chinese word segmentation methods. Then CSA identification approaches are discussed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2." |
| }, |
| { |
| "text": "Chinese word segmentation methods can be roughly classified into three categories: lexicon-based methods, statistical-based methods and hybrid methods.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Chinese Word Segmentation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Lexicon-based methods are easy to be implemented and has a high speed. They are still used for Chinese word segmentation in many applications. Maximum Matching (MM) is the most popular lexicon-based method. It is a greedy algorithm implemented by scanning a sentence from one side to another and greedily matching the longest lexicon entry until the end of a sentence is reached. There are two MM methods: Forward Maximum Matching (FMM) and Backward Maximum Matching (BMM). FMM scans from left to right, and BMM starts from the opposite direction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Chinese Word Segmentation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "In statistical-based methods, word segmentation is the way to find a segmentation path which has the maximum probability. They take advantages of mathematical models, such as Naive Bayesian (Li et al., 2003) , Hidden Markov Model (Zhang et al., 2003) , Conditional Random Fields (Peng et al., 2004; Tseng et al., 2005) , Maximum Entropy (Xue, 2003) , Graph-based Model (Zeng et al., 2013) and Compression-based algorithm (Teahan et al., 2000) . There are research combine generative model with discriminative model, e.g., . According the type of segmentation units, a sentence can be treated as a character sequence (character-based model) or a word sequence (word-based model). Studies were shown that the character-based approach are more successful (Xue, 2003; Zhao et al., 2010) .", |
| "cite_spans": [ |
| { |
| "start": 190, |
| "end": 207, |
| "text": "(Li et al., 2003)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 230, |
| "end": 250, |
| "text": "(Zhang et al., 2003)", |
| "ref_id": "BIBREF50" |
| }, |
| { |
| "start": 279, |
| "end": 298, |
| "text": "(Peng et al., 2004;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 299, |
| "end": 318, |
| "text": "Tseng et al., 2005)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 337, |
| "end": 348, |
| "text": "(Xue, 2003)", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 369, |
| "end": 388, |
| "text": "(Zeng et al., 2013)", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 421, |
| "end": 442, |
| "text": "(Teahan et al., 2000)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 752, |
| "end": 763, |
| "text": "(Xue, 2003;", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 764, |
| "end": 782, |
| "text": "Zhao et al., 2010)", |
| "ref_id": "BIBREF51" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Chinese Word Segmentation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Hybrid methods integrate statistical-based and the lexicon-based methods (Gao et al., 2005) or utilize a joint model combining word segmentation with POS tagging or parsing (Wang et al., 2013; Sun, 2011; Li et al., 2011; Hatori et al., 2012) . Hybrid methods try to use syntactic, semantic analyses or external knowledge to improve the performance (Wu et al., 1998; Huang et al., 2007; Zeng et al., 2011; Wu et al., 2011; Christiansen et al., 2011) .", |
| "cite_spans": [ |
| { |
| "start": 73, |
| "end": 91, |
| "text": "(Gao et al., 2005)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 173, |
| "end": 192, |
| "text": "(Wang et al., 2013;", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 193, |
| "end": 203, |
| "text": "Sun, 2011;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 204, |
| "end": 220, |
| "text": "Li et al., 2011;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 221, |
| "end": 241, |
| "text": "Hatori et al., 2012)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 348, |
| "end": 365, |
| "text": "(Wu et al., 1998;", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 366, |
| "end": 385, |
| "text": "Huang et al., 2007;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 386, |
| "end": 404, |
| "text": "Zeng et al., 2011;", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 405, |
| "end": 421, |
| "text": "Wu et al., 2011;", |
| "ref_id": null |
| }, |
| { |
| "start": 422, |
| "end": 448, |
| "text": "Christiansen et al., 2011)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Chinese Word Segmentation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "In statistical-based and hybrid methods, the tasks of word segmentation and CSA identification are combined into a unified framework. Therefore, the CSA problem is not obviously considered. As for the lexicon-based methods, greedy algorithms are used, which results in the CSA problem.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Chinese Word Segmentation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "In order to discuss related work, in this section, we use the same example provided by Wang et al. (2004) : \"\u5f53\u539f\u5b50\u7ed3\u5408\u6210\u5206\u5b50\u65f6\" 6 . The right segmentation path should be \"\u5f53/\u539f\u5b50/\u7ed3\u5408 /\u6210/\u5206\u5b50/\u65f6\" (when/ atoms/ combine/ molecules/ the time). The overlapping ambiguous string is \"\u7ed3\u5408\u6210\u5206\u5b50\u65f6\". It can also be segmented into \"\u7ed3\u5408/\u6210\u5206/\u5b50\u65f6\" (combine/ ingredient/ midnight).", |
| "cite_spans": [ |
| { |
| "start": 87, |
| "end": 105, |
| "text": "Wang et al. (2004)", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Chinese Segmentation Ambiguity Identification", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The central issue of CSA identification is that trying to find all possible segmentation paths. BiMM is the most popular method for CSA identification (Gao et al., 2011; Yao et al., 2012) . It is implemented by running FMM and BMM respectively. The two outputs of FMM and BMM are compared. Different outputs imply the existence of segmentation ambiguity. The main disadvantage of BiMM is that overlapping ambiguity strings with even length cannot be identified. In this situation, both of FMM and BMM have the same output. As shown in Figure 1 , two outputs of BiMM are the same.", |
| "cite_spans": [ |
| { |
| "start": 151, |
| "end": 169, |
| "text": "(Gao et al., 2011;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 170, |
| "end": 187, |
| "text": "Yao et al., 2012)", |
| "ref_id": "BIBREF42" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 535, |
| "end": 543, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Chinese Segmentation Ambiguity Identification", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Omni-segmentation tries to find every segmentation path in a sentence, which can be illustrated by a tree structure as shown in Figure 2 . The root represents the start of a sentence. Nodes represent words of a sentence. Each branch implies a possible segmentation path.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 128, |
| "end": 136, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 1. BiMM Method", |
| "sec_num": null |
| }, |
| { |
| "text": "The Directed Acyclic Graph (DAG) method was proposed by Zhang et al. (2002) as Figure 3 shows. Given a sentence represented as ( denotes Chinese characters), vertices are used to separate it. Then, is generated. and are the start and the end vertexes. If a substring ( ) matches a lexicon entry, then a directed edge is added.", |
| "cite_spans": [ |
| { |
| "start": 56, |
| "end": 75, |
| "text": "Zhang et al. (2002)", |
| "ref_id": "BIBREF49" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 79, |
| "end": 87, |
| "text": "Figure 3", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 2. Omni-segmentation Method", |
| "sec_num": null |
| }, |
| { |
| "text": "The DAG is used to collect possible segmentation paths. According to Zhang et al. (2002) , if the 8-shortest paths are collected, this method can receive performance about 99.90% in recall to find correct segmentation paths. The word lattice method was proposed for Chinese word segmentation (Jiang et al., 2008) . It is built by merging the output of outer segmenters. This method is mainly used as a re-ranking strategy. As shown in Figure 5 , lattice nodes denote positions between characters. Edges covering subsequences of sentence denote words (Wang et al., 2013) .", |
| "cite_spans": [ |
| { |
| "start": 69, |
| "end": 88, |
| "text": "Zhang et al. (2002)", |
| "ref_id": "BIBREF49" |
| }, |
| { |
| "start": 292, |
| "end": 312, |
| "text": "(Jiang et al., 2008)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 550, |
| "end": 569, |
| "text": "(Wang et al., 2013)", |
| "ref_id": "BIBREF37" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 435, |
| "end": 443, |
| "text": "Figure 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 2. Omni-segmentation Method", |
| "sec_num": null |
| }, |
| { |
| "text": "There are other approaches proposed for CSA analyses, such as the overlapping ambiguity elimination model (Yao et al., 2012) , the word-by-word scanning based maximum matching algorithm (Zhang et al., 2006; Sun et al., 2009) , the method based on type theory (Gao et al., 2009) and the coupling degree of double characters method (Wand et al., 2007) . These methods mainly focus on the overlapping ambiguity. Problems of combinational ambiguity and the difference between MOAS and OAS are rarely studied. In this paper, we propose a SM method. The detail of SM is discussed in Section 4. In the following, we first introduce definitions and notations used in this paper.", |
| "cite_spans": [ |
| { |
| "start": 106, |
| "end": 124, |
| "text": "(Yao et al., 2012)", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 186, |
| "end": 206, |
| "text": "(Zhang et al., 2006;", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 207, |
| "end": 224, |
| "text": "Sun et al., 2009)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 259, |
| "end": 277, |
| "text": "(Gao et al., 2009)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 330, |
| "end": 349, |
| "text": "(Wand et al., 2007)", |
| "ref_id": "BIBREF33" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 5. Word Lattice", |
| "sec_num": null |
| }, |
| { |
| "text": "to be a segmentation matrix, is an element of with coordinates Row and Column . A sentence is referred as . The length of is supposed to be . We define two sets as: 7 where are the natural numbers. is a partial order set. denotes an employed lexicon. A closed interval denotes subset of . represents substring of starting from the th character to the th character. By means of set operations, sentence operations are defined as follows", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Let", |
| "sec_num": null |
| }, |
| { |
| "text": "Note that all sentence operations are implemented within indexes belonging to the same sentence . Otherwise, these operations are nonsense.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "7", |
| "sec_num": null |
| }, |
| { |
| "text": "Based on sentence operations, formal definitions about segmentation path, combinational ambiguity, overlapping ambiguity, etc. are defined as follows.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "7", |
| "sec_num": null |
| }, |
| { |
| "text": "to be a partition of , then is a Segmentation Path of , referred to as or .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 1: Let", |
| "sec_num": null |
| }, |
| { |
| "text": "In other words, a segmentation path is a partition of . For example, let =\"\u6e29\u67d4\u548c\u5584 \u89e3\u4eba\u610f\" (gentle and understanding), because is a partition of , then is a segmentation path (or , which denotes \"\u6e29\u67d4/\u548c /\u5584\u89e3\u4eba\u610f\" (gentle/ and/ understanding). In this paper, (square bracket) represents substring of , and (parentheses) represents a segmentation path.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 1: Let", |
| "sec_num": null |
| }, |
| { |
| "text": "Definition 2: Let to be a segmentation path, if such that . Then, the segmentation path is in accord with .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 1: Let", |
| "sec_num": null |
| }, |
| { |
| "text": "In this paper, we assume that all mentioned segmentation paths satisfy this constraint.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 1: Let", |
| "sec_num": null |
| }, |
| { |
| "text": "Definition 3: Let . If is a partition of , then has the Combinational Ambiguity (CA), and is a Combinational Ambiguity String (CAS).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 1: Let", |
| "sec_num": null |
| }, |
| { |
| "text": "For example, let =\" \u6e29 \u67d4 \u548c \u5584 \u89e3 \u4eba \u610f \", =\" \u5584 \u89e3 \u4eba \u610f \", and is a partition of . Because =\"\u5584\", , =\" \u89e3 \", , =\" \u4eba \u610f \", , therefore, has combinational ambiguity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 1: Let", |
| "sec_num": null |
| }, |
| { |
| "text": "is a CAS, referred to as .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 1: Let", |
| "sec_num": null |
| }, |
| { |
| "text": "In Chinese, almost every character can function either as a word or as a morpheme (Chen et al., 1998) . If Definition 3 is adopted, then words exceeding two characters will lead to the combinational ambiguity. Because disambiguation for combinational ambiguity is difficult (Luo et al., 2002) . Therefore, to reduce the combinational ambiguity problem, the following combinational ambiguity definition in a narrow sense is proposed.", |
| "cite_spans": [ |
| { |
| "start": 82, |
| "end": 101, |
| "text": "(Chen et al., 1998)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 274, |
| "end": 292, |
| "text": "(Luo et al., 2002)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 1: Let", |
| "sec_num": null |
| }, |
| { |
| "text": "Let , if such that , then has the Narrow Sense Combinational Ambiguity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 4:", |
| "sec_num": null |
| }, |
| { |
| "text": "This definition is the same as that proposed by Liang (1984) . In this paper, the narrow sense combinational ambiguity is used as our default definition, also referred as combinational ambiguity except where noted.", |
| "cite_spans": [ |
| { |
| "start": 48, |
| "end": 60, |
| "text": "Liang (1984)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 4:", |
| "sec_num": null |
| }, |
| { |
| "text": ", if and , then has the Overlapping Ambiguity (OA). is an Overlapping Ambiguity String (OAS).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 5: Let", |
| "sec_num": null |
| }, |
| { |
| "text": "and have overlapping ambiguity, then is an Overlapping Chain String (OCS) and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "If", |
| "sec_num": null |
| }, |
| { |
| "text": "is Overlapping Chain Length (OCL), where is the cardinality of .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "If", |
| "sec_num": null |
| }, |
| { |
| "text": "For example, let =\" \u6e29 \u67d4 \u548c \u5584 \u89e3 \u4eba \u610f \", =\" \u6e29 \u67d4 \u548c \", and . Because , so that has overlapping ambiguity. is an OAS.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "If", |
| "sec_num": null |
| }, |
| { |
| "text": "is an OCS, and OCL of is 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "If", |
| "sec_num": null |
| }, |
| { |
| "text": "In this paper, CAS and OAS are collectively referred as Ambiguity String (AS). Prefixes OA, CA and OC are used to indicate properties of (Overlapping Ambiguity, Combinational Ambiguity and Overlapping Chain). For example, means that is an OAS. Given a set of OAS (referred as ) in a sentence, the set of MOAS (referred to as ) is computed by merging every OAS that are addable. It is computed as follows.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "If", |
| "sec_num": null |
| }, |
| { |
| "text": "(2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "If", |
| "sec_num": null |
| }, |
| { |
| "text": "appears on both sides of this equation, it is an iterative process. It will be discussed in Section 4.2.3 that is easy to be implemented, because all elements in and are ordered.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Because", |
| "sec_num": null |
| }, |
| { |
| "text": "In ancient Chinese, there are no punctuations between sentences. Currently, symbols such as the period (\"\u3002\"), question mark (\"\uff1f\"), exclamatory mark (\"\uff01\"), semicolon (\"\uff1b\") or comma (\"\uff0c\") are widely used as sentence boundaries. The problem is that using of the comma is ambiguous. It may function as a sentence boundary or a separation of clauses. Therefore, disambiguation of sentence boundary is required (Xue et al., 2011) . Because lots of language characteristics cannot exist while crossing punctuation, e.g., segmentation ambiguity, named entity, etc. (Chen et al., 2015b) , Sentence Fragment is used to denote substring of a sentence divided by punctuations.", |
| "cite_spans": [ |
| { |
| "start": 405, |
| "end": 423, |
| "text": "(Xue et al., 2011)", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 557, |
| "end": 577, |
| "text": "(Chen et al., 2015b)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Because", |
| "sec_num": null |
| }, |
| { |
| "text": "Definition 8: Sentence fragment is a substring of a sentence that contains no punctuation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Because", |
| "sec_num": null |
| }, |
| { |
| "text": "This notion is useful for Chinese NLP, e.g., Zhang et al. (2013) , especially for unsupervised machine learning method, e.g., Zhang et al. (2003) , Li et al. (2009) . Figure 6 gives an example of SM. Coordinates of SM represent characters of . The element data type of SM is Boolean.", |
| "cite_spans": [ |
| { |
| "start": 45, |
| "end": 64, |
| "text": "Zhang et al. (2013)", |
| "ref_id": "BIBREF48" |
| }, |
| { |
| "start": 126, |
| "end": 145, |
| "text": "Zhang et al. (2003)", |
| "ref_id": "BIBREF50" |
| }, |
| { |
| "start": 148, |
| "end": 164, |
| "text": "Li et al. (2009)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 167, |
| "end": 175, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Because", |
| "sec_num": null |
| }, |
| { |
| "text": "means that word . To build SM of a sentence, by scanning a given sentence, when a lexicon entry is matched, the corresponding element is set to 1, otherwise to 0. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Segmentation Matrix", |
| "sec_num": "4." |
| }, |
| { |
| "text": "Following the definition of the Combinational Ambiguity in a Narrow Sense (See Definition 4), Solution 1 is proposed to identify combinational ambiguity strings.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ambiguity Identification on SM", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For SegMatrix[i][j] = 1, if there exists a j' with i <= j' < j such that SegMatrix[i][j'] = 1 and SegMatrix[j'+1][j] = 1, then the string covering characters i to j is a CAS.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Solution 1:", |
| "sec_num": null |
| }, |
| { |
| "text": "For example, in Figure 6, a CAS can be identified in this way. Algorithm 1 (Combinational Ambiguity Identification on SM) implements Solution 1. Input: 1. SegMatrix[][], SM; 2. n, the length of the given sentence;", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 16, |
| "end": 24, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Solution 1:", |
| "sec_num": null |
| }, |
| { |
| "text": "3. L, the length of the longest lexicon entry;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Solution 1:", |
| "sec_num": null |
| }, |
| { |
| "text": "Combinational ambiguity strings are stored in the vector CAS;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Output:", |
| "sec_num": null |
| }, |
| { |
| "text": "Method:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Output:", |
| "sec_num": null |
| }, |
| { |
| "text": "(0) for(int i = 0; i < n; i++){ (1) for(int j = i + 1; j < min(i + L, n); j++){ (2) if(SegMatrix[i][j]){ \\\\ word i..j exists (3) for(int j' = i; j' < j; j'++){ (4) if(SegMatrix[i][j'] && SegMatrix[j'+1][j]){ (5) CAS.push_back(i, j', j); (6) } } } } }", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Output:", |
| "sec_num": null |
| }, |
| { |
| "text": "As Algorithm 1 shows, three loops are used. The outer loop has index n; the two nested loops have cycle indexes less than L. Therefore, the complexity of combinational ambiguity identification is O(n L^2). The min function in Row (1) is adopted to decrease the search space. If a \"break\" clause is placed after Row (5), the algorithm returns as soon as a combinational ambiguity is identified. Otherwise, as shown, every combinational ambiguity in the sentence is collected.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Output:", |
| "sec_num": null |
| }, |
| { |
| "text": "Following Definition 3, Solution 2 is proposed to identify the overlapping ambiguity string.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Output:", |
| "sec_num": null |
| }, |
| { |
| "text": "For SegMatrix[i][j] = 1, if there exist i' and j' with i < i' <= j < j' such that SegMatrix[i'][j'] = 1, then the string covering characters i to j' is an OAS, the string covering characters i' to j is the overlapping chain string, and the overlapping chain length is j - i' + 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Solution 2: For", |
| "sec_num": null |
| }, |
| { |
| "text": "For example, in Figure 6, because the corresponding elements equal 1,", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 16, |
| "end": 25, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Solution 2: For", |
| "sec_num": null |
| }, |
| { |
| "text": "the string is an OAS whose overlapping chain length equals 1. Algorithm 2 implements Solution 2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Solution 2: For", |
| "sec_num": null |
| }, |
| { |
| "text": "Overlapping Ambiguity Identification on SM Input:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm 2", |
| "sec_num": null |
| }, |
| { |
| "text": "1. SegMatrix[][], SM;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm 2", |
| "sec_num": null |
| }, |
| { |
| "text": "2. n, the length of the given sentence;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm 2", |
| "sec_num": null |
| }, |
| { |
| "text": "3. L, the length of the longest lexicon entry;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm 2", |
| "sec_num": null |
| }, |
| { |
| "text": "Overlapping ambiguity strings are stored in the vector OAS;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Output:", |
| "sec_num": null |
| }, |
| { |
| "text": "Method:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Output:", |
| "sec_num": null |
| }, |
| { |
| "text": "(0) for(int i = 0; i < n; i++){ (1) for(int j = i + 1; j < n && j < i + L; j++){ (2) if(SegMatrix[i][j]){ \\\\ word i..j exists (3) for(int i' = i + 1; i' <= j; i'++){ (4) for(int j' = j + 1; j' < min(i' + L, n); j'++){ (5) if(SegMatrix[i'][j']){ \\\\ overlapping word i'..j' (6) OAS.push_back(i, j, i', j'); (7) } } } } } }", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Output:", |
| "sec_num": null |
| }, |
| { |
| "text": "In Algorithm 2, four loops are used. The outer loop has index n; the other three nested loops have cycle indexes less than L. The complexity of overlapping ambiguity identification is O(n L^3). Using Algorithm 2, instead of MOAS, each overlapping ambiguity string is recognized individually. After every OAS is identified, MOAS can be obtained by merging OAS that are addable (see Eq. (2)).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Output:", |
| "sec_num": null |
| }, |
| { |
| "text": "In this section, we illustrate how Chinese word segmentation algorithms can be implemented on SM. Four lexicon-based methods (FMM, BMM, BiMM and Omni-segmentation) are discussed. By mapping string operations into set operations, these processes only perform Boolean operations, which reduces the computational complexity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Segmentation on SM", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "FMM is implemented by scanning each row of SM from right to left. If SegMatrix[i][j] equals 1, hold the coordinate j, and scan from the (j+1)th row in the same way. Iterate until the end of the SM (column n-1) is reached. The held coordinates then form the segmentation path of FMM.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "FMM", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "Steps 1 to 3 give an example of this algorithm. Figure 7(a) shows the visualized process.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 47, |
| "end": 58, |
| "text": "Figure 7(a)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "FMM", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "Step 1: Scan the 0th row from Column 6 to Column 0. On hitting the first 1, hold its column coordinate.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "FMM", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "Step 2: Scan the next row in the same way; whenever an element equals 1, record its column coordinate.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "FMM", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "Step 3: Iterate this way until the column-coordinate equals 6.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "FMM", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "As shown in Figure 7(a) , the output is . ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 12, |
| "end": 23, |
| "text": "Figure 7(a)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "FMM", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "BMM is similar to FMM. The difference is that BMM processes the last column first and scans each column from top to bottom. If SegMatrix[i][j] equals 1, hold the row coordinate i and restart from the (i-1)th column, until Column 0 is reached.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BMM", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "For example, in Figure 7(b), the first 1 is hit in Column 6; its row coordinate is held and the scan restarts from the preceding column. In Column 2, an element equals 1, so the scan restarts again, until the column coordinate 0 is met. The output of BMM in this example is shown in the figure.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 16, |
| "end": 24, |
| "text": "Figure 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "BMM", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "BiMM is implemented by running both FMM and BMM, and the two outputs are compared. Let P_f and P_b be the outputs of FMM and BMM. Compare P_f and P_b from right to left: if both coordinates have the same value, hold this value and decrease both indexes by 1; compare the new coordinates again, always updating the held value while they are equal, until an unequal pair is met for the first time. The held value is then the end of an OAS. Subtract 1 from the index whose coordinate is larger, and continue this way until an equal pair is found again; that value is the start of the OAS. Iterate this way until both P_f and P_b are traversed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BiMM on SM", |
| "sec_num": "4.2.3" |
| }, |
| { |
| "text": "Omni-segmentation tries to find every segmentation path of a given sentence. The number of segmentation paths may explode tremendously, so it has the highest computational and space complexity. Based on SM, utilizing segmentation ambiguity information, we can apply the Omni-segmentation method only to substrings that have the segmentation ambiguity problem, thereby reducing the number of generated segmentation paths.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Omni-segmentation on SM", |
| "sec_num": "4.2.4" |
| }, |
| { |
| "text": "Because Omni-segmentation is useful in Chinese NLP, e.g., Chen et al. (2014), Chen et al. (2015a), we give an algorithm that can be implemented on SM. As shown in Figure 8, this algorithm uses an iterative method, which makes better use of sentence structure information.", |
| "cite_spans": [ |
| { |
| "start": 70, |
| "end": 88, |
| "text": "Chen et al. (2014)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 91, |
| "end": 110, |
| "text": "Chen et al. (2015a)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 178, |
| "end": 186, |
| "text": "Figure 8", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Omni-segmentation on SM", |
| "sec_num": "4.2.4" |
| }, |
| { |
| "text": "Suppose that one output of an iteration is a partial segmentation path ending at character position j. In the next iteration, the (j+1)th row is processed. In Step (1), the largest j with SegMatrix[i][j] = 1 is obtained. In Step (2), j is added into the segmentation path and it is judged whether j is the end of the sentence. If it equals n - 1, a complete segmentation path is obtained; if not, iterate this process. For all elements equal to 1 in Row i, recycle from Step (3) to Step (7).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 8. Omni-Segmentation on SM", |
| "sec_num": null |
| }, |
| { |
| "text": "This algorithm may generate a tremendous number of segmentation paths. Paths containing combinational ambiguity can be filtered out for simplicity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 8. Omni-Segmentation on SM", |
| "sec_num": null |
| }, |
| { |
| "text": "In order to identify CSA, both the Omni-segmentation and BiMM methods try to segment a sentence to find possible segmentation paths. The lexicon has to be accessed frequently, and the generated segmentation paths must be held for comparison. These processes involve massive string manipulation and string storage, leading to higher computational and space complexity. In the SM method, once the matrix has been built, string operations are mapped into set operations: there is no need to perform string operations or to access the lexicon again. Moreover, SM can make better use of sentence structure information, decreasing the computational complexity. This section discusses the complexity of SM.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Complexity", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Let N be the number of lexicon entries, n the length of a given sentence, and L the length of the longest lexicon entry. For a given lexicon, N and L are constants. The computational and space complexity are given as follows.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Let", |
| "sec_num": null |
| }, |
| { |
| "text": "Let C denote the complexity of searching for an element in the lexicon. Finding every word in a sentence needs to access the lexicon at most n L times in the worst case. Therefore, the construction of SM has computational complexity O(n L C), the same as FMM or BMM. Because BiMM runs both FMM and BMM, BiMM has complexity O(2 n L C + D), where D denotes the complexity of comparing the outputs of FMM and BMM.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Computational Complexity", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "In the worst scenario, SM has n L elements equal to 1. In order to identify each overlapping ambiguity, for each such element, at most L^2 elements should be scanned, so identification of OAS has complexity O(n L^3). Because this is of a similar order to the complexity of BiMM, identification of OAS on SM has a computational complexity close to BiMM.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Computational Complexity", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "BiMM generates two outputs, one from FMM and one from BMM; the space complexity of BiMM is constant. When Omni-segmentation is employed, the number of segmentation paths can grow tremendously, leading to a higher space complexity. In both BiMM and Omni-segmentation, the generated segmentation paths need to be held for OAS or CAS identification.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Space Complexity", |
| "sec_num": "4.3.2" |
| }, |
| { |
| "text": "In the SM method, an n-by-n matrix is required, so it seems that the space complexity of SM is O(n^2). But in practical applications, we deal with one sentence or sentence fragment at a time, and the matrix storage can be reused. OAS or CAS can be identified directly without saving any segmentation path. Therefore, the space complexity of SM is also close to that of BiMM.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Space Complexity", |
| "sec_num": "4.3.2" |
| }, |
| { |
| "text": "Before conducting the experiments, we give an example to compare SM with BiMM. The sentence is \"\u6c5f\u6cfd\u6c11\u5728\u5317\u4eac\u4eba\u6c11\u5927\u4f1a\u5802\u4f1a\u89c1\u53c2\u52a0\u5168\u56fd\u6cd5\u9662\u5de5\u4f5c\u4f1a\u8bae\u548c\u5168\u56fd\u6cd5\u9662\u7cfb\u7edf\u6253\u51fb\u7ecf\u6d4e\u72af\u7f6a\u5148\u8fdb\u96c6\u4f53\u8868\u5f70\u5927\u4f1a\u4ee3\u8868\u65f6\u8981\u6c42\u5927\u5bb6\u8981\u5145\u5206\u8ba4\u8bc6\u6253\u51fb\u7ecf\u6d4e\u72af\u7f6a\u5de5\u4f5c\u7684\u8270\u5de8\u6027\u548c\u957f\u671f\u6027\". The outputs of FMM and BMM are listed as follows: (0, 1, 2, 3, 5, 7, 9, 11, 12, 14, 16, 18, 20, 22, 23, 25, 27, 29, 31, 33, 35, 37, 39, 41, 43, 45, 46, 48, 50, 51, 53, 55, 57, 59, 61, 62); (0, 1, 2, 3, 5, 7, 8, 10, 12, 14, 16, 18, 20, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39, 41, 43, 45, 46, 48, 50, 51, 53, 55, 57, 59, 61, 62);", |
| "cite_spans": [ |
| { |
| "start": 251, |
| "end": 527, |
| "text": "(0, 1, 2, 3, 5, 7, 9, 11, 12, 14, 16, 18, 20, 22, 23, 25, 27, 29, 31, 33, 35, 37, 39, 41, 43, 45, 46, 48, 50, 51, 53, 55, 57, 59, 61, 62); (0, 1, 2, 3, 5, 7, 8, 10, 12, 14, 16, 18, 20, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39, 41, 43, 45, 46, 48, 50, 51, 53, 55, 57, 59, 61, 62)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5." |
| }, |
| { |
| "text": "Making use of the BiMM method, 2 MOAS are detected. However, if the SM method is employed, 5 MOAS and 10 OAS are found.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5." |
| }, |
| { |
| "text": "This example shows that BiMM is insufficient for OAS identification. To show more, based on the Peking University corpus (PKU corpus) of the Chinese word segmentation Bakeoff training data (Sproat et al., 2003), SM is compared with the BiMM, DAG, and MNAG methods, as shown in Table 1. The count refers to the number of annotated instances. F-score is computed by F = 2 * Precision * Recall / (Precision + Recall).", |
| "cite_spans": [ |
| { |
| "start": 202, |
| "end": 223, |
| "text": "(Sproat et al., 2003)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 287, |
| "end": 294, |
| "text": "Table 1", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5." |
| }, |
| { |
| "text": "In the PKU corpus, training and testing data are provided. For the Penn Chinese Treebank corpus, 5-fold cross validation is adopted for training and testing, and we average the results of the five runs as the final performance. To implement the maximum entropy classifiers, we used the toolkit provided by Zhang (2004). We also ran a CRF model for comparison.", |
| "cite_spans": [ |
| { |
| "start": 310, |
| "end": 322, |
| "text": "Zhang (2004)", |
| "ref_id": "BIBREF46" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5." |
| }, |
| { |
| "text": "Characteristics of CSA have been investigated in other research. Sun et al. (1999) analyzed MOAS in a corpus containing 100 million characters. Li et al. (2003) studied 730,000 MOAS extracted from 20 years of the People's Daily corpus. Li et al. (2006) collected 14,906 high-frequency MOAS from the People's Daily corpus. Qiao et al. (2008) investigated MOAS in several corpora containing more than 1 billion characters in total.", |
| "cite_spans": [ |
| { |
| "start": 65, |
| "end": 82, |
| "text": "Sun et al. (1999)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 144, |
| "end": 160, |
| "text": "Li et al. (2003)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 233, |
| "end": 249, |
| "text": "Li et al. (2006)", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 318, |
| "end": 336, |
| "text": "Qiao et al. (2008)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Characteristics of Chinese Segmentation Ambiguity", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "In general, this research has mainly focused on analysing MOAS in a given corpus; little research has studied the characteristics of CSA in a given lexicon. This section is devoted to that question. The Lexicon of Common Words in Contemporary Chinese is employed, which contains 56,008 words and was published by the Ministry of Education of the People's Republic of China in 2008. Table 2 shows the detected segmentation ambiguity information. CAS (OAS) Inside Word refers to combinational (overlapping) ambiguity strings within a single word. OAS in Overlapping Words refers to overlapping ambiguity strings generated by overlapping two possible words. For example, \"\u6587\u79d1\" (Liberal Arts) and \"\u79d1\u5b66\" (Science) can be overlapped to generate the OAS \"\u6587\u79d1\u5b66\". OAS in Adjacent Words denotes ambiguity strings generated by two adjacent words (no overlapping). For example, \"\u70b9\" (Point) and \"\u5c04\u95e8\" (Shot) can be combined into the OAS \"\u70b9\u5c04\u95e8\", which can be segmented as \"\u70b9/\u5c04\u95e8\" (Point/ Shot) or \"\u70b9\u5c04/\u95e8\" (Fixed Fire/ Door). Total OAS Types is produced by merging the results of OAS in Overlapping Words and OAS in Adjacent Words; it can be seen as the overlapping ambiguity space.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 382, |
| "end": 389, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Characteristics of Chinese Segmentation Ambiguity", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "As shown in Table 1, combinational ambiguities inside words are pervasive, even when Definition 4 is adopted. Excluding the 2,927 words containing only a single character, 75.25% of words have combinational ambiguity. Total OAS Types has the same number as OAS in Overlapping Words, so OAS in Adjacent Words can be seen as a subset of OAS in Overlapping Words.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 12, |
| "end": 19, |
| "text": "Table 1", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Characteristics of Chinese Segmentation Ambiguity", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "In the following, we investigate CSA in a large-scale corpus. The corpus contains 52,961 texts covering various literary genres. Because CSA cannot exist across punctuation, we consider sentence fragments instead of whole sentences. After removing duplicated sentence fragments, 0.2 billion sentence fragments remain. The information is shown in Table 3. Figure 9 shows the distribution of sentence fragment length in our corpus.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 386, |
| "end": 393, |
| "text": "Table 3", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 396, |
| "end": 404, |
| "text": "Figure 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Characteristics of Chinese Segmentation Ambiguity", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Almost 99% of sentence fragments have a length less than 26. Therefore, we set 128 as the size of SM; longer sentence fragments are removed directly.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 9. Distribution of Sentence Fragments Length", |
| "sec_num": null |
| }, |
| { |
| "text": "For each sentence fragment, Algorithm 1 and Algorithm 2 (see Section 4.1) are adopted to extract CAS and OAS. MOAS are obtained by merging OAS that are addable; these MOAS are referred to as SM-MOAS. For comparison, BiMM (see Section 4.2.3) is implemented to extract MOAS, referred to as BiMM-MOAS. Table 4 gives information about CAS, OAS, SM-MOAS and BiMM-MOAS. Referring to Table 2, nearly all CAS types occur in our corpus, but only 39.66% of OAS types occur. If the BiMM method is used, 91.52% of sentence fragments have no overlapping ambiguity; with the SM method, the rate reduces to 88.38%. This means that, by a simple method, it is possible to obtain massive numbers of sentence fragments without overlapping ambiguity for unsupervised methods (Li et al., 2003).", |
| "cite_spans": [ |
| { |
| "start": 773, |
| "end": 790, |
| "text": "(Li et al., 2003)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 310, |
| "end": 317, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 388, |
| "end": 395, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 9. Distribution of Sentence Fragments Length", |
| "sec_num": null |
| }, |
| { |
| "text": "From Table 4, comparing SM-MOAS with BiMM-MOAS, it can be seen that the number of SM-MOAS types is double that of BiMM-MOAS. Therefore, focusing only on the MOAS produced by BiMM is insufficient for studying the CSA problem. In Figure 10, the distributions of overlapping chain length for MOAS and OAS are compared.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 5, |
| "end": 12, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 201, |
| "end": 210, |
| "text": "Figure 10", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 9. Distribution of Sentence Fragments Length", |
| "sec_num": null |
| }, |
| { |
| "text": "It can be seen that the distribution of overlapping chain length in MOAS is more complex, ranging from 1 to 39, while it is simpler in OAS: 99.87% of OAS have an overlapping chain length equal to 1. This conclusion is useful for disambiguation, which can then be modelled as a two-category classification problem. Figure 11 shows the distributions of the different CSA types. The X-axis represents the percentage of ambiguity string types in frequency-descending order; the Y-axis is the percentage of occurrences.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 312, |
| "end": 321, |
| "text": "Figure 11", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 10. Distribution of Overlapping Chain Length", |
| "sec_num": null |
| }, |
| { |
| "text": "A point (x, y) represents that the x% highest-frequency ambiguity string types cover y% of occurrences. For each type of segmentation ambiguity, the top 10% of high-frequency ambiguity string types account for 90% of occurrences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 10. Distribution of Overlapping Chain Length", |
| "sec_num": null |
| }, |
| { |
| "text": "Based on FMM and BMM, Sun et al. (1995) analyzed the influence of CSA on Chinese word segmentation and induced several conclusions. However, these analyses were mainly based on a corpus containing only 3,680 sentences, and the influence of combinational ambiguity on Chinese word segmentation was not studied. In this experiment, the Penn Chinese Treebank corpus is used to analyze the influence of CSA on Chinese word segmentation. This corpus is manually segmented, consisting of 2,448 text files, 71,232 sentences, 1,196,329 words and 1,931,381 Hanzi (Chinese characters). The segmentation ambiguity information is given in Table 5. The characteristics of CSA in the Penn Chinese Treebank are the same as in our corpus discussed in Section 5.1.", |
| "cite_spans": [ |
| { |
| "start": 22, |
| "end": 39, |
| "text": "Sun et al. (1995)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 612, |
| "end": 619, |
| "text": "Table 5", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Influence of Ambiguities on Word Segmentation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Before presenting the experiment in detail, we first explain the terms used in this part. The term ambiguity free has two levels of meaning. The first is that, for a given lexicon, a sentence has no segmentation ambiguity. The other is that a sentence may contain segmentation ambiguity that cannot be identified by the employed method. SM can identify every segmentation ambiguity; therefore, SM Free means that a sentence contains no segmentation ambiguity at all. BiMM Free does not: it only means that no segmentation ambiguity was identified by the BiMM method.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Influence of Ambiguities on Word Segmentation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Because sentence boundaries are manually labelled in the corpus, the sentences are used directly as segmenting units instead of sentence fragments, giving 71,232 sentences. Among them, 56,618 sentences are BiMM free and 43,444 sentences are SM free. Then FMM or BMM is employed to segment the collected sentences. Performances are given in Table 6. Column 2 of Table 6 lists the number of selected sentences. Using the FMM (or BMM) method, the performance already reaches 93.59% (93.58%) in F-score. In Row 3, when FMM and BMM have the same output (BiMM Free), the precision is 96.67%. In Row 4, SM free means that there is no overlapping ambiguity at all, yet the segmentation precision is only 96.74%. Therefore, combinational ambiguity alone causes about 3.3% of segmentation errors.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 366, |
| "end": 373, |
| "text": "Table 6", |
| "ref_id": "TABREF7" |
| }, |
| { |
| "start": 388, |
| "end": 395, |
| "text": "Table 6", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Influence of Ambiguities on Word Segmentation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Making use of syntactic and semantic knowledge, machine learning methods are successful at processing the CSA problem. But these methods have the disadvantage that annotated data is required, which is time-consuming and costly in human labor, and migrating between different applications is difficult. The lexicon-based method is easier to implement and has reasonable performance: because only a lexicon is required for segmentation, the need for annotated data and a training process is avoided. Therefore, the lexicon-based method is still used in this field. In this section, the influence of lexicon size on Chinese word segmentation is studied. We use the PKU testing data, with FMM as the default method. This issue was discussed by other researchers, but no quantitative analysis was given. The result is shown in Table 7, where five lexicons are employed. Testing Words are words collected from the testing data; words in Training Words are collected from the training data. The performances generated by these two are used as the topline and baseline in the Chinese word segmentation Bakeoff competition (Emerson, 2005). CWCC denotes the Lexicon of Common Words in Contemporary Chinese. The Medium Lexicon is collected from the Internet and contains 298,032 words. The Maximum Lexicon is generated by merging the Medium Lexicon with a Great Dictionary of Chinese containing 542,240 lexicon entries.", |
| "cite_spans": [ |
| { |
| "start": 1137, |
| "end": 1152, |
| "text": "(Emerson, 2005)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 850, |
| "end": 857, |
| "text": "Table 7", |
| "ref_id": "TABREF8" |
| }, |
| { |
| "start": 863, |
| "end": 870, |
| "text": "Table 7", |
| "ref_id": "TABREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Influence of Lexicon size on Chinese Word Segmentation", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "In Chinese word segmentation, OOV (out-of-vocabulary) words are considered the main obstacle to segmenting a sentence (Sproat et al., 2003; Huang et al., 2007). Comparing Row 7 with Row 2 and Row 6 with Row 5, after the testing words are added, the performance increases by 9.15% and 5.36% respectively. When the lexicon size is increased, the influence of OOV is weakened. Comparing Row 6 with Row 7, the lexicon used in Row 7 is a subset of that in Row 6, yet Row 6 is about 10.36% lower than Row 7. This is caused by overlapping and combinational ambiguities; Row 1 and Row 6 show the same problem. Without segmentation disambiguation, increasing the lexicon size can result in worse performance for lexicon-based methods. To see the influences in more detail, Table 8 lists the number of errors caused in the segmentation. The strategy for counting errors in Table 8 is as follows. If a segmentation path \"A/ BC\" is falsely segmented into \"AB/ C\" (A, B and C are characters), this failure is counted as an OAS error. If a segmentation path \"A/ B\" is falsely segmented into a word \"AB\" (combinational ambiguity), it is counted as a CAS error. An OOV error is caused by a word (e.g., \"AB\") being falsely segmented into smaller pieces (\"A/ B\"). Figure 12 compares the errors caused by OAS, CAS and OOV.", |
| "cite_spans": [ |
| { |
| "start": 111, |
| "end": 132, |
| "text": "(Sproat et al., 2003;", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 133, |
| "end": 152, |
| "text": "Huang et al., 2007)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 754, |
| "end": 761, |
| "text": "Table 8", |
| "ref_id": "TABREF9" |
| }, |
| { |
| "start": 820, |
| "end": 827, |
| "text": "Table 8", |
| "ref_id": "TABREF9" |
| }, |
| { |
| "start": 1255, |
| "end": 1264, |
| "text": "Figure 12", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Influence of Lexicon size on Chinese Word Segmentation", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "As shown in Figure 12, a larger dictionary decreases the OOV rate at the expense of increasing the errors caused by OAS and CAS. When the lexicon is large enough, without segmentation disambiguation, the errors caused by CAS and OAS can exceed those caused by OOV. OAS is usually considered a bottleneck of Chinese word segmentation, but the result shows that, if the lexicon is large enough, the influence of CAS is the most critical. In practical applications, an encyclopedic dictionary with a large number of lexicon entries is commonly adopted (Chien, 1997; Gao et al., 2002), so the influence of CAS is important. For the lexicon-based method, adding lexicon entries does not always guarantee better performance.", |
| "cite_spans": [ |
| { |
| "start": 541, |
| "end": 554, |
| "text": "(Chien, 1997;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 555, |
| "end": 572, |
| "text": "Gao et al., 2002)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 12, |
| "end": 21, |
| "text": "Figure 12", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 12. Influence of Lexicon Size on CSA and OOV", |
| "sec_num": null |
| }, |
| { |
| "text": "For traditional statistical methods, word segmentation means finding the segmentation path with the maximum probability. Conditional Random Fields (CRF) achieved state-of-the-art performance (Peng et al., 2004; Tseng et al., 2005). The proposed SM method is effective at identifying lexical ambiguities but weak at segmentation disambiguation. To segment a sentence based on SM, instead of searching for a maximum-probability segmentation path, the process can be divided into three steps: OOV detection, OAS disambiguation, and CAS disambiguation. In the OOV detection step, new words are detected by an employed model 14 , which reduces errors caused by the OOV problem. After sentences are segmented (e.g., by a lexicon-based method), the output can be further processed by OAS disambiguation and CAS disambiguation.", |
| "cite_spans": [ |
| { |
| "start": 212, |
| "end": 231, |
| "text": "(Peng et al., 2004;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 232, |
| "end": 251, |
| "text": "Tseng et al., 2005)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SM Segmentation", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "In this part, a preliminary experiment demonstrates this process. A \"closed test\" is conducted on the PKU corpus (Emerson, 2005). For comparison, a CRF model is implemented 15 , which uses 3-gram features and character features. The result is shown in Table 9, where Column OAS gives the number of errors caused by OAS; Columns CAS and OOV are defined likewise. In Table 9, SM+OOV implements FMM, using the words output by the CRF model (Row 2) and words extracted from the training data. Comparing SM+OOV with FMM (Row 1), the errors caused by OOV are reduced considerably; however, the errors caused by OAS and CAS increase. In SM+OOV+OAS, another CRF model is trained only on OAS instances extracted from the training data and then performs the OAS disambiguation. In SM+OOV+OAS+CAS, a maximum entropy model, also trained on CAS instances extracted from the training data, disambiguates the CAS remaining after SM+OOV+OAS. The result shows that the performance is improved by both OAS disambiguation and CAS disambiguation, and the final system performs similarly to the CRF model.", |
| "cite_spans": [ |
| { |
| "start": 132, |
| "end": 147, |
| "text": "(Emerson, 2005)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 278, |
| "end": 285, |
| "text": "Table 9", |
| "ref_id": "TABREF10" |
| }, |
| { |
| "start": 387, |
| "end": 394, |
| "text": "Table 9", |
| "ref_id": "TABREF10" |
| } |
| ], |
| "eq_spans": [], |
| "section": "SM Segmentation", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "Based on SM, segmentation is divided into three steps, which provides an alternative way to perform word segmentation; using SM, each step can be optimized separately. However, the result also shows that the disambiguation of OAS and that of CAS are not independent: decreasing one can influence the other. From Row 2 to Row 5, CAS remains a challenging problem for segmenting Chinese words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SM Segmentation", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "In this paper, an SM method was presented, which represents lexicon information as a matrix. Within the framework of set theory, formal definitions of the segmentation path, combinational ambiguity, overlapping ambiguity, etc., are given. By mapping string operations into set operations, SM is effective for CSA identification and is also applicable to Chinese word segmentation. In our experiments, several issues concerning CSA were explored. For researchers interested in our work, the source code of SM is available at https://github.com/YPench/SMatrix/.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6." |
| }, |
| { |
| "text": "It can be translated into: \"when atoms combine into molecules\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Note that all character positions are indexed from 0 to .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In this paper, the algorithms are given in C++ code. Some changes are made for simplicity and convenience.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Here, a segmentation path is seen as a vector. The values of are , , etc. The coordinates of are the position indexes of this vector. For example, in , the value of coordinate 0 is .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The output may differ when a different lexicon is adopted. Here, we use the Lexicon of Common Words in Contemporary Chinese.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://crfpp.googlecode.com/svn/trunk/doc/index.html.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.cis.upenn.edu/~chinese/ctb.html", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The employed lexicon is directly extracted from the same corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In our experiment, we use CRF (trained on the training data) to segment the testing data, then collect the generated new words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The research was supported in part by the National Science Foundation of China under Grant Nos. 91118005, 91218301, 91418205.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "acknowledgement", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Identification and disposal of ambiguity based on Omni-Segmentation arithmetic", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Wei", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Computer Engineering and Applications", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chang, C. & Wei, J. (2008). Identification and disposal of ambiguity based on Omni-Segmentation arithmetic. Computer Engineering and Applications, 15.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Unknown word detection for Chinese by a corpus-based learning method", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Bai", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "International Journal of Computational Linguistics and Chinese Language Processing", |
| "volume": "3", |
| "issue": "1", |
| "pages": "27--44", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chen, K. & Bai, M. (1998). Unknown word detection for Chinese by a corpus-based learning method. International Journal of Computational Linguistics and Chinese Language Processing, 3(1), 27-44.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Eliminate Semantic Network Word Segmentation Ambiguity Method Research", |
| "authors": [ |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Microelectronics & Computer", |
| "volume": "3", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chen, X., Li, L., & Liang, X. (2012). Eliminate Semantic Network Word Segmentation Ambiguity Method Research. Microelectronics & Computer, 3, 045.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Omni-word Feature and Soft Constraint for Chinese Relation Extraction", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Q", |
| "middle": [], |
| "last": "Zheng", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "The Proceedings of ACL'14", |
| "volume": "", |
| "issue": "", |
| "pages": "572--581", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chen, Y., Zheng, Q., & Zhang, W. (2014). Omni-word Feature and Soft Constraint for Chinese Relation Extraction. The Proceedings of ACL'14, 572-581.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Feature assembly method for extracting relations in Chinese", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Q", |
| "middle": [], |
| "last": "Zheng", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Artificial Intelligence", |
| "volume": "228", |
| "issue": "", |
| "pages": "179--194", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chen, Y., Zheng, Q., & Chen, P. (2015a). Feature assembly method for extracting relations in Chinese. Artificial Intelligence, 228, 179-194.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "A Boundary Assembling Method for Chinese Entity-Mention Recognition. Intelligent Systems", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Q", |
| "middle": [], |
| "last": "Zheng", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "IEEE", |
| "volume": "30", |
| "issue": "6", |
| "pages": "50--58", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chen, Y., Zheng, Q., & Chen, P. (2015b). A Boundary Assembling Method for Chinese Entity-Mention Recognition. Intelligent Systems, IEEE, 30(6), 50-58.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "PAT-tree-based keyword extraction for Chinese information retrieval", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Chien", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "ACM SIGIR Forum", |
| "volume": "31", |
| "issue": "SI", |
| "pages": "50--58", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chien, L. (1997). PAT-tree-based keyword extraction for Chinese information retrieval. ACM SIGIR Forum, 31(SI), 50-58.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "The second international chinese word segmentation bakeoff", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Emerson", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "The Proceedings of SIGHAN '05", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Emerson, T. (2005). The second international chinese word segmentation bakeoff. The Proceedings of SIGHAN '05, 133.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Toward a unified approach to statistical language modeling for Chinese", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Goodman", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "ACM Transactions on Asian Language Information Processing", |
| "volume": "1", |
| "issue": "1", |
| "pages": "3--33", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gao, J., Goodman, J., Li, M., & Lee, K. (2002). Toward a unified approach to statistical language modeling for Chinese. ACM Transactions on Asian Language Information Processing, 1(1), 3-33.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Dealing with Chinese Overlapping Ambiguity Based on Type Functional Application", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "The Proceedings of AICI '09", |
| "volume": "3", |
| "issue": "", |
| "pages": "67--71", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gao, D., & Guo, J. (2009). Dealing with Chinese Overlapping Ambiguity Based on Type Functional Application. The Proceedings of AICI '09, 3, 67-71.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Research on Chinese phonetic string segmentation of sentential input", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "The Proceedings of CECNet '11", |
| "volume": "", |
| "issue": "", |
| "pages": "4334--4337", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gao, Y., He, J., & Li, J. (2011). Research on Chinese phonetic string segmentation of sentential input. The Proceedings of CECNet '11, 4334-4337.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Chinese word segmentation and named entity recognition: A pragmatic approach", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Computational Linguistics", |
| "volume": "31", |
| "issue": "4", |
| "pages": "531--574", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gao, J., Li, M., Wu, A., & Huang, C. (2005). Chinese word segmentation and named entity recognition: A pragmatic approach. Computational Linguistics, 31(4), 531-574.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Incremental joint approach to word segmentation, POS tagging, and dependency parsing in Chinese", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Hatori", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Matsuzaki", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Miyao", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Tsujii", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "The Proceedings of ACL '12", |
| "volume": "1", |
| "issue": "", |
| "pages": "1045--1053", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hatori, J., Matsuzaki, T., Miyao, Y., & Tsujii, J. (2012). Incremental joint approach to word segmentation, POS tagging, and dependency parsing in Chinese. The Proceedings of ACL '12, 1, 1045-1053.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Chinese Word Segmentation: A Decade Review", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Journal of Chinese Information Processing", |
| "volume": "21", |
| "issue": "3", |
| "pages": "8--19", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Huang, C., & Zhao, H. (2007). Chinese Word Segmentation: A Decade Review. Journal of Chinese Information Processing, 21(3), 8-19.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Word lattice reranking for Chinese word segmentation and part-of-speech tagging", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Mi", |
| "suffix": "" |
| }, |
| { |
| "first": "Q", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "The Proceedings of COLING '08", |
| "volume": "1", |
| "issue": "", |
| "pages": "385--392", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jiang, W., Mi, H., & Liu, Q. (2008). Word lattice reranking for Chinese word segmentation and part-of-speech tagging. The Proceedings of COLING '08, 1, 385-392.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Research on Chinese Word Segmentation and proposals for improvement", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Li, B. (2011). Research on Chinese Word Segmentation and proposals for improvement. (Master thesis). Roskilde University, Denmark.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Unsupervised training for overlapping ambiguity resolution in Chinese word segmentation", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "The Proceedings of SIGHAN '03", |
| "volume": "17", |
| "issue": "", |
| "pages": "1--7", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Li, M., Gao, J., Huang, C., & Li, J. (2003). Unsupervised training for overlapping ambiguity resolution in Chinese word segmentation. The Proceedings of SIGHAN '03, 17, 1-7.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Punctuation as implicit annotations for Chinese word segmentation", |
| "authors": [ |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Computational Linguistics", |
| "volume": "35", |
| "issue": "4", |
| "pages": "505--512", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Li, Z., & Sun, M. (2009). Punctuation as implicit annotations for Chinese word segmentation. Computational Linguistics, 35(4), 505-512.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Joint models for Chinese POS tagging and dependency parsing", |
| "authors": [ |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Che", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "The Proceedings of EMNLP '11", |
| "volume": "", |
| "issue": "", |
| "pages": "1180--1191", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Li, Z., Zhang, M., Che, W., Liu, T., et al. (2011). Joint models for Chinese POS tagging and dependency parsing. The Proceedings of EMNLP '11, 1180-1191.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Written Chinese word segmentation system-CDWS", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| } |
| ], |
| "year": 1984, |
| "venue": "Journal of Beijing Institute of Aeronautics and Astronautics", |
| "volume": "4", |
| "issue": "", |
| "pages": "97--104", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liang, N. (1984). Written Chinese word segmentation system-CDWS. Journal of Beijing Institute of Aeronautics and Astronautics, 4, 97-104.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Covering ambiguity resolution in Chinese word segmentation based on contextual information", |
| "authors": [ |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Luo", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Tsou", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "The Proceedings of COLING '02", |
| "volume": "1", |
| "issue": "", |
| "pages": "1--7", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Luo, X., Sun, M., & Tsou, B. (2002). Covering ambiguity resolution in Chinese word segmentation based on contextual information. The Proceedings of COLING '02, 1, 1-7.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Chinese segmentation and new word detection using conditional random fields", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Peng", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Feng", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "The Proceedings of COLING '04", |
| "volume": "562", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peng, F., Feng, F., & McCallum, A. (2004). Chinese segmentation and new word detection using conditional random fields. The Proceedings of COLING '04, 562.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Statistical properties of overlapping ambiguities in Chinese word segmentation and a strategy for their disambiguation. Text, Speech and Dialogue", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Qiao", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Menzel", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "177--186", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Qiao, W., Sun, M., & Menzel, W. (2008). Statistical properties of overlapping ambiguities in Chinese word segmentation and a strategy for their disambiguation. Text, Speech and Dialogue, 177-186.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "The first international Chinese word segmentation bakeoff", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Sproat", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Emerson", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "The Proceedings of SIGHAN '03", |
| "volume": "17", |
| "issue": "", |
| "pages": "133--143", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sproat, R., & Emerson, T. (2003). The first international Chinese word segmentation bakeoff. The Proceedings of SIGHAN '03, 17, 133-143.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "A stacked sub-word model for joint Chinese word segmentation and part-of-speech tagging", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "The Proceedings of ACL '11", |
| "volume": "1", |
| "issue": "", |
| "pages": "1385--1394", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sun, W. (2011). A stacked sub-word model for joint Chinese word segmentation and part-of-speech tagging. The Proceedings of ACL '11, 1, 1385-1394.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Ambiguity Resolution in Chinese Word Segmentation", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Benjamin", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "The Proceedings of PACLIC '95", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sun, M., & Benjamin, K. (1995). Ambiguity Resolution in Chinese Word Segmentation. The Proceedings of PACLIC '95.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "An ambiguity discovery algorithm on Chinese word segmentation based dictionary", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "The Proceedings of WMWA '09", |
| "volume": "", |
| "issue": "", |
| "pages": "39--42", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sun, T., Liu, Y., Yang, L., et al. (2009). An ambiguity discovery algorithm on Chinese word segmentation based dictionary. The Proceedings of WMWA '09, 39-42.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "A review and evaluation on automatic segmentation of Chinese", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Tsou", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Contemporary Linguistics", |
| "volume": "3", |
| "issue": "1", |
| "pages": "22--32", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sun, M., & Tsou, B. (2001). A review and evaluation on automatic segmentation of Chinese. Contemporary Linguistics, 3(1), 22-32.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "A critical appraisal of the research on Chinese word segmentation", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Zou", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Contemporary Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sun, M., & Zou, J. (2001). A critical appraisal of the research on Chinese word segmentation. Contemporary Linguistics, 1, 002.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "The role of high frequent maximal crossing ambiguities in Chinese word segmentation", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Zuo", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Tsou", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Journal of Chinese information processing", |
| "volume": "13", |
| "issue": "1", |
| "pages": "27--37", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sun, M., Zuo, Z., & Tsou, B. (1999). The role of high frequent maximal crossing ambiguities in Chinese word segmentation. Journal of Chinese information processing, 13(1), 27-37.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "A compression-based algorithm for Chinese word segmentation", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Teahan", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Wen", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Mcnab", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Witten", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Computational Linguistics", |
| "volume": "26", |
| "issue": "3", |
| "pages": "375--393", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Teahan, W., Wen, Y., McNab, R., & Witten, I. (2000). A compression-based algorithm for Chinese word segmentation. Computational Linguistics, 26(3), 375-393.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "A conditional random field word segmenter for sighan bakeoff", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Tseng", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Andrew", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "The Proceedings of SIGHAN '05", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tseng, H., Chang, P., Andrew, G., et al. (2005). A conditional random field word segmenter for Sighan bakeoff 2005. The Proceedings of SIGHAN '05, 171.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "A Chinese Overlapping Ambiguity Resolution Method Based on Coupling Degree of Double Characters", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Wand", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Journal of Chinese Information Processing", |
| "volume": "5", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wand, S., & Wang, B. (2007). A Chinese Overlapping Ambiguity Resolution Method Based on Coupling Degree of Double Characters. Journal of Chinese Information Processing, 5, 004.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "A Method of Sentence Segmentation That Check All Overlapping Ambiguity", |
| "authors": [ |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Du", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Acta Electronica Sinica", |
| "volume": "32", |
| "issue": "1", |
| "pages": "50--54", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wang, X., & Du, L. (2004). A Method of Sentence Segmentation That Check All Overlapping Ambiguity. Acta Electronica Sinica, 32(1), 50-54.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Integrating generative and discriminative character-based models for Chinese word segmentation", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Zong", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "ACM Transactions on Asian Language Information Processing", |
| "volume": "11", |
| "issue": "2", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wang, K., Zong, C., & Su, K. (2012). Integrating generative and discriminative character-based models for Chinese word segmentation. ACM Transactions on Asian Language Information Processing, 11(2), 7.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "A Lattice-based Framework for Joint Chinese Word Segmentation, POS Tagging and Parsing", |
| "authors": [ |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Zong", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "The Proceedings of ACL '13", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wang, Z., Zong, C., & Xue, N. (2013). A Lattice-based Framework for Joint Chinese Word Segmentation, POS Tagging and Parsing. The Proceedings of ACL '13.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Word segmentation in sentence analysis", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "The Proceedings of ICCIP '98", |
| "volume": "", |
| "issue": "", |
| "pages": "169--180", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wu, A., & Jiang, Z. (1998). Word segmentation in sentence analysis. The Proceedings of ICCIP '98, 169-180.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Parsing-based Chinese word segmentation integrating morphological and syntactic information", |
| "authors": [ |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "The Proceedings of NLP-KE '11", |
| "volume": "", |
| "issue": "", |
| "pages": "114--121", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wu, X., Zhang, M., & Lin, X. (2010). Parsing-based Chinese word segmentation integrating morphological and syntactic information. The Proceedings of NLP-KE '11, 114-121.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Chinese word segmentation as character tagging", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Computational Linguistics and Chinese Language Processing", |
| "volume": "8", |
| "issue": "", |
| "pages": "29--48", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xue, N. (2003). Chinese word segmentation as character tagging. Computational Linguistics and Chinese Language Processing, 8(1), 29-48.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Chinese sentence segmentation as comma classification", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "The Proceedings of ACL '11", |
| "volume": "2", |
| "issue": "", |
| "pages": "631--635", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xue, N., & Yang, Y. (2011). Chinese sentence segmentation as comma classification. The Proceedings of ACL '11, 2, 631-635.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "An algorithm of solving Chinese segmentation overlapping ambiguous", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Yao", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "The Proceedings of CSAE '12", |
| "volume": "2", |
| "issue": "", |
| "pages": "464--467", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yao, H., Wang, Y., & Huang, J. (2012). An algorithm of solving Chinese segmentation overlapping ambiguous. The Proceedings of CSAE '12, 2, 464-467.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "Domain-specific Chinese word segmentation using suffix tree and mutual information", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Zeng", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Wei", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Chau", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Information Systems Frontiers", |
| "volume": "13", |
| "issue": "1", |
| "pages": "115--125", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zeng, D., Wei, D., Chau, M., & Wang, F. (2011). Domain-specific Chinese word segmentation using suffix tree and mutual information. Information Systems Frontiers, 13(1), 115-125.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Graph-based Semi-Supervised Model for Joint Chinese Word Segmentation and Part-of-Speech Tagging", |
| "authors": [ |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Zeng", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Wong", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Chao", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Trancoso", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "The Proceedings of ACL '13", |
| "volume": "", |
| "issue": "", |
| "pages": "770--779", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zeng, X., Wong, D., Chao, L., & Trancoso, I. (2013). Graph-based Semi-Supervised Model for Joint Chinese Word Segmentation and Part-of-Speech Tagging. The Proceedings of ACL '13, 770-779.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "Maximum entropy modeling toolkit for Python and C++", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhang, L. (2004). Maximum entropy modeling toolkit for Python and C++. Natural Language Processing Lab, Northeastern University, China.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "A new Context-Sensitive ambiguous phrase segmentation Algorithm", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Computer Systems & Applications", |
| "volume": "", |
| "issue": "5", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhang, P., & Li, C. (2006). A new Context-Sensitive ambiguous phrase segmentation Algorithm. Computer Systems & Applications, 5, 013.", |
| "links": null |
| }, |
| "BIBREF48": { |
| "ref_id": "b48", |
| "title": "Improving Chinese Word Segmentation on Micro-blog Using Rich Punctuations", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "The Proceedings of ACL '13", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhang, L., Li, L., He, Z., et al. (2013). Improving Chinese Word Segmentation on Micro-blog Using Rich Punctuations. The Proceedings of ACL '13.", |
| "links": null |
| }, |
| "BIBREF49": { |
| "ref_id": "b49", |
| "title": "Model of Chinese Words Rough Segmentation Based on N-Shortest-Paths Method", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Q", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Journal of Chinese Information Processing", |
| "volume": "5", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhang, H., & Liu, Q. (2002). Model of Chinese Words Rough Segmentation Based on N-Shortest-Paths Method. Journal of Chinese Information Processing, 5, 000.", |
| "links": null |
| }, |
| "BIBREF50": { |
| "ref_id": "b50", |
| "title": "HHMM-based Chinese lexical analyzer ICTCLAS", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Xiong", |
| "suffix": "" |
| }, |
| { |
| "first": "Q", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "The Proceedings of SIGHAN '03", |
| "volume": "17", |
| "issue": "", |
| "pages": "184--187", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhang, H., Yu, H., Xiong, D., & Liu, Q. (2003). HHMM-based Chinese lexical analyzer ICTCLAS. The Proceedings of SIGHAN '03, 17, 184-187.", |
| "links": null |
| }, |
| "BIBREF51": { |
| "ref_id": "b51", |
| "title": "A unified character-based tagging framework for Chinese word segmentation", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "ACM Transactions on Asian Language Information Processing", |
| "volume": "9", |
| "issue": "2", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhao, H., Huang, C., Li, M., & Lu, B. (2010). A unified character-based tagging framework for Chinese word segmentation. ACM Transactions on Asian Language Information Processing, 9(2), 5.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "Directed Acyclic Graph. Wang et al. (2004) proposed a Maximum No-cover Ambiguity Graph (MNAG), as Figure 4 shows. Based on the principle of Choosing the Longer Word, MNAG can identify all overlapping ambiguities and reduce the number of segmentation paths. However, it ignores the identification of combinational ambiguity." |
| }, |
| "FIGREF1": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "Maximum No-cover Ambiguity Graph" |
| }, |
| "FIGREF2": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "Figure 6. Segmentation Matrix" |
| }, |
| "FIGREF3": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "Figure 7. Maximum Matching Segmentation" |
| }, |
| "FIGREF4": { |
| "num": null, |
| "uris": null, |
| "type_str": "figure", |
| "text": "Figure 11. Distribution of Segmentation Ambiguity" |
| }, |
| "TABREF0": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td colspan=\"8\">A Segmentation Matrix Method for Chinese Segmentation Ambiguity Analysis</td><td>9</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>(1)</td></tr><tr><td>where</td><td/><td>is</td><td>the</td><td>relative</td><td>complement</td><td>of</td><td>in</td></tr><tr><td/><td/><td>.</td><td/><td/><td/><td/></tr><tr><td colspan=\"5\">Definition 7: In a given sentence , if an</td><td colspan=\"3\">is not addable with other OAS in ,</td></tr><tr><td>then this</td><td/><td colspan=\"6\">is a Maximum Overlapping Ambiguity String (MOAS).</td></tr><tr><td colspan=\"2\">For example,</td><td colspan=\"4\">=\"\u6e29\u67d4\u548c\u5584\u89e3\u4eba\u610f\" has three OAS:</td><td/><td>,</td><td>and</td></tr><tr><td colspan=\"5\">. All of them are addable. The result is</td><td/><td colspan=\"2\">. By Eq. (1), an overlapping</td></tr><tr><td>chain string of</td><td>is</td><td/><td/><td colspan=\"3\">. For another example,</td><td>=\"\u9010\u6e10\u53d8\u6210\u6697\u7ea2\u8272\"</td></tr><tr><td>has</td><td>and</td><td>.</td><td/><td>and</td><td colspan=\"3\">are not addable, then both of</td></tr><tr><td colspan=\"2\">them are MOAS.</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">Definition 6: Given</td><td/><td>and</td><td/><td>, if</td><td/><td>, then</td></tr><tr><td>and</td><td/><td colspan=\"3\">are Addable.</td><td/><td/></tr><tr><td>If</td><td/><td>and</td><td/><td colspan=\"4\">are addable, then the sum of</td><td>and</td></tr><tr><td>is</td><td/><td/><td colspan=\"5\">. It is also an OAS. If two OAS are addable, the overlapping</td></tr><tr><td>chain string of</td><td/><td/><td/><td colspan=\"2\">can be calculated by</td><td/></tr></table>", |
| "html": null, |
| "text": "In this field, the Maximum Overlapping Ambiguity String (MOAS) is widely used. It is defined as follows." |
| }, |
| "TABREF1": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Algorithm 1</td></tr><tr><td>CA Identification on SM</td></tr><tr><td>Input:</td></tr><tr><td>1. SegMatrix[][], SM;</td></tr></table>", |
| "html": null, |
| "text": "a CAS. Based on Solution 1, Algorithm 1 gives the implementation of this solution." |
| }, |
| "TABREF2": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Model</td><td>Type</td><td colspan=\"2\">MOAS Count</td><td>Type</td><td>OAS</td><td>Count</td><td>Type</td><td>CAS</td><td>Count</td></tr><tr><td>BiMM</td><td colspan=\"2\">8,409</td><td>19,090</td><td/><td/><td/><td/><td/></tr><tr><td>DAG</td><td colspan=\"2\">7,369</td><td>12,337</td><td>18,888</td><td/><td>51,895</td><td>38,200</td><td colspan=\"2\">515,151</td></tr><tr><td>MNAG</td><td colspan=\"2\">7,956</td><td>13,870</td><td>7,378</td><td/><td>18,641</td><td/><td/></tr><tr><td>SM</td><td colspan=\"2\">19,269</td><td>52,072</td><td>26,580</td><td colspan=\"2\">101,514</td><td>39,310</td><td colspan=\"2\">555,574</td></tr><tr><td>Where, \"</td><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>", |
| "html": null, |
| "text": "\" means that this type of ambiguity cannot be identified by the corresponding method. It can be seen that SM shows better performance. Making use of the SM method, in the rest part of this section, several issues about CSA are discussed." |
| }, |
| "TABREF3": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>CAS Inside Word</td><td>39,944</td><td>OAS in Overlapping Words</td><td>1,847,814</td></tr><tr><td>OAS Inside Word</td><td>939</td><td>OAS in Adjacent Words</td><td>1,757,756</td></tr><tr><td/><td/><td>Total OAS</td><td>1,847,814</td></tr></table>", |
| "html": null, |
| "text": "" |
| }, |
| "TABREF4": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Corpus Size</td><td>8.26 Gigabyte</td><td>Texts</td><td>52,961</td></tr><tr><td>Total Characters</td><td>2,703,512,684</td><td>Total Words</td><td>1,902,306,846</td></tr><tr><td>Token</td><td>69,087</td><td>Sentence Fragments</td><td>264,748,094</td></tr></table>", |
| "html": null, |
| "text": "" |
| }, |
| "TABREF5": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Ambiguity Type</td><td>CAS</td><td>OAS</td><td colspan=\"2\">SM-MOAS BiMM-MOAS</td></tr><tr><td>Type</td><td>39,810</td><td>732,873</td><td>1,190,606</td><td>526,251</td></tr><tr><td>Count</td><td>579,238,862</td><td>40,921,520</td><td>32,641,105</td><td>23,424,525</td></tr></table>", |
| "html": null, |
| "text": "" |
| }, |
| "TABREF6": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Ambiguity Type</td><td>CAS</td><td>OAS</td><td colspan=\"2\">SM-MOAS BiMM-MOAS</td></tr><tr><td>Type</td><td>11,672</td><td>24,069</td><td>17,868</td><td>135,591</td></tr><tr><td>Count</td><td>61,615</td><td>81,694</td><td>47,417</td><td>188,275</td></tr></table>", |
| "text": "" |
| }, |
| "TABREF7": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Method</td><td>Sentence</td><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td>FMM</td><td>71,232</td><td>95.08%</td><td>92.14%</td><td>93.59%</td></tr><tr><td>BMM</td><td>71,232</td><td>95.05%</td><td>92.09%</td><td>93.58%</td></tr><tr><td>BiMM Free</td><td>56,618</td><td>96.67%</td><td>93.61%</td><td>95.12%</td></tr><tr><td>SM Free</td><td>43,444</td><td>96.74%</td><td>93.68%</td><td>95.19%</td></tr></table>", |
| "html": null, |
| "text": "" |
| }, |
| "TABREF8": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>No.</td><td>Lexicon</td><td>Entries</td><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td colspan=\"2\">1 Testing Words</td><td>13,148</td><td>98.95%</td><td>98.62%</td><td>98.78%</td></tr><tr><td colspan=\"2\">2 Training Words</td><td>55,303</td><td>84.34%</td><td>90.77%</td><td>87.44%</td></tr><tr><td colspan=\"2\">3 (2) + CWCC</td><td>85,486</td><td>85.32%</td><td>90.23%</td><td>87.71%</td></tr><tr><td colspan=\"2\">4 (3) + Medium Lexicon</td><td>312,065</td><td>83.42%</td><td>83.16%</td><td>83.29%</td></tr><tr><td colspan=\"2\">5 (4) + Maximum Lexicon</td><td>554,331</td><td>81.23%</td><td>80.51%</td><td>80.87%</td></tr><tr><td colspan=\"2\">6 (5) + Testing Words</td><td>555,475</td><td>89.16%</td><td>83.49%</td><td>86.23%</td></tr><tr><td colspan=\"2\">7 (2) + Testing Words</td><td>58,166</td><td>97.26%</td><td>95.93%</td><td>96.59%</td></tr></table>", |
| "html": null, |
| "text": "" |
| }, |
| "TABREF9": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>No.</td><td>Lexicon</td><td>Total</td><td>By OAS</td><td>By CAS</td><td>By OOV</td></tr><tr><td colspan=\"2\">1 Testing Words</td><td>712</td><td>355</td><td>357</td><td>0</td></tr><tr><td colspan=\"2\">2 Training Words</td><td>7,386</td><td>752</td><td>1,298</td><td>5,336</td></tr><tr><td colspan=\"2\">3 (2) + CWCC</td><td>7,130</td><td>900</td><td>1,894</td><td>4,336</td></tr><tr><td colspan=\"2\">4 (3) + Medium Lexicon</td><td>9,695</td><td>1,595</td><td>5,433</td><td>2,667</td></tr><tr><td colspan=\"2\">5 (4) + Maximum Lexicon</td><td>10,954</td><td>2,363</td><td>5,982</td><td>2,609</td></tr><tr><td colspan=\"2\">6 (5) + Testing Words</td><td>8,110</td><td>2,088</td><td>6,022</td><td>0</td></tr><tr><td colspan=\"2\">7 (2) + Testing Words</td><td>2,045</td><td>697</td><td>1,348</td><td>0</td></tr></table>", |
| "html": null, |
| "text": "" |
| }, |
| "TABREF10": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Model</td><td>OAS</td><td>CAS</td><td>OOV</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>FMM</td><td>752</td><td>1,298</td><td>5,336</td><td colspan=\"3\">84.34% 90.77% 87.44%</td></tr><tr><td>CRF</td><td>624</td><td>2,887</td><td>1,361</td><td colspan=\"3\">92.93% 91.32% 92.11%</td></tr><tr><td>SM+OOV</td><td>1,166</td><td>3,873</td><td>796</td><td colspan=\"3\">91.90% 88.84% 90.35%</td></tr><tr><td>SM+OOV+OAS</td><td>1,084</td><td>3,646</td><td>806</td><td colspan=\"3\">92.23% 89.37% 90.78%</td></tr><tr><td>SM+OOV+OAS+CAS</td><td>939</td><td>2,559</td><td>1,453</td><td colspan=\"3\">92.39% 91.14% 91.76%</td></tr></table>", |
| "html": null, |
| "text": "" |
| } |
| } |
| } |
| } |