{
"paper_id": "W06-0123",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:06:34.669358Z"
},
"title": "Chinese Word Segmentation and Named Entity Recognition by Character Tagging",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Kun",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {
"postCode": "113-8656",
"settlement": "Tokyo",
"country": "Japan"
}
},
"email": "kunyu@kc.t.u-tokyo.ac.jp"
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {
"postCode": "113-8656",
"settlement": "Tokyo",
"country": "Japan"
}
},
"email": ""
},
{
"first": "Hao",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {
"postCode": "113-8656",
"settlement": "Tokyo",
"country": "Japan"
}
},
"email": "liuhao@kc.t.u-tokyo.ac.jp"
},
{
"first": "Toshiaki",
"middle": [],
"last": "Nak",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {
"postCode": "113-8656",
"settlement": "Tokyo",
"country": "Japan"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes our word segmentation system and named entity recognition (NER) system for participating in the third SIGHAN Bakeoff. Both of them are based on character tagging, but use different tag sets and different features. Evaluation results show that our word segmentation system achieved 93.3% and 94.7% F-score in UPUC and MSRA open tests, and our NER system got 70.84% and 81.32% F-score in LDC and MSRA open tests.",
"pdf_parse": {
"paper_id": "W06-0123",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes our word segmentation system and named entity recognition (NER) system for participating in the third SIGHAN Bakeoff. Both of them are based on character tagging, but use different tag sets and different features. Evaluation results show that our word segmentation system achieved 93.3% and 94.7% F-score in UPUC and MSRA open tests, and our NER system got 70.84% and 81.32% F-score in LDC and MSRA open tests.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Treating word segmentation as character tagging showed good results in the last SIGHAN Bakeoff (J.K. Low et al., 2005). It is good at unknown word identification, but using only character-level features sometimes causes mistakes when identifying known words (T. Nakagawa, 2004). Researchers have used word-level features (J.K. Low et al., 2005) to solve this problem. Based on this idea, we developed a word segmentation system based on character tagging, which combines character-level and word-level features. In addition, a character-based NER module and a rule-based factoid identification module are developed for post-processing.",
"cite_spans": [
{
"start": 101,
"end": 117,
"text": "Low et al.,2005)",
"ref_id": null
},
{
"start": 261,
"end": 276,
"text": "Nakagawa, 2004)",
"ref_id": null
},
{
"start": 321,
"end": 337,
"text": "Low et al.,2005)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Named entity recognition based on character tagging has shown better accuracy than word-based methods (H.Jing et al., 2003). But the small window of text makes it difficult to recognize named entities containing many characters, such as organization names (H. Jing et al., 2003). Considering this, we developed a NER system based on character tagging, which combines word-level and character-level features. In addition, an in-NE probability is defined in this system to remove incorrect named entities and create new named entities in post-processing.",
"cite_spans": [
{
"start": 100,
"end": 120,
"text": "(H.Jing et al.,2003)",
"ref_id": "BIBREF1"
},
{
"start": 256,
"end": 273,
"text": "Jing et al.,2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Tagging for Word Segmentation and NER",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Character",
"sec_num": "2"
},
{
"text": "We treat both word segmentation and NER as character tagging, i.e., finding the tag sequence T* with the highest probability given a sequence of characters S = c_1 c_2 \u2026 c_n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Model",
"sec_num": "2.1"
},
{
"text": "T^* = \\arg\\max_T P(T|S) \\quad (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Model",
"sec_num": "2.1"
},
{
"text": "Then, assuming that the tag of each character is independent of the others, we rewrite formula 1 as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Model",
"sec_num": "2.1"
},
{
"text": "T^* = \\arg\\max_{t_1 t_2 \\ldots t_n} P(t_1 t_2 \\ldots t_n | c_1 c_2 \\ldots c_n) = \\arg\\max_{t_1 t_2 \\ldots t_n} \\prod_{i=1}^{n} P(t_i | c_i) \\quad (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Model",
"sec_num": "2.1"
},
{
"text": "Beam search (n=3) (Ratnaparkhi, 1996) is applied for tag sequence search, but we search only valid sequences to ensure the validity of the result. SVM is selected as the basic classification model for tagging because of its robustness to over-fitting and its high performance (Sebastiani, 2002). To simplify the calculation, the output of the SVM is regarded as P(t_i|c_i).",
"cite_spans": [
{
"start": 18,
"end": 36,
"text": "(Ratnaparkhi,1996)",
"ref_id": null
},
{
"start": 284,
"end": 302,
"text": "(Sebastiani, 2002)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Model",
"sec_num": "2.1"
},
{
"text": "Four tags 'B, I, E, S' are defined for the word segmentation system, in which 'B' means the character is the beginning of a word, 'I' means the character is inside a word, 'E' means the character is at the end of a word, and 'S' means the character is a word by itself. For the NER system, different tag sets are defined for different corpora. Table 1 shows the tag set defined for the MSRA corpus. It is the product of the Segment-Tag set and the NE-Tag set, because not only named entities but also ordinary words are segmented in this corpus. Here the NE-Tag 'O' means the character does not belong to any named entity. For the LDC corpus, because there is no segmentation information, we delete the NE-Tag 'O' but add the tag 'NONE' to indicate that the character does not belong to any named entity (Table 2).",
"cite_spans": [],
"ref_spans": [
{
"start": 352,
"end": 359,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 774,
"end": 783,
"text": "(Table 2)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Tag Definition",
"sec_num": "2.2"
},
{
"text": "First, some features based on characters are defined for the two tasks, which are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Definition",
"sec_num": "2.3"
},
{
"text": "(a) C_n (n=-2,-1,0,1,2) (b) Pu(C_0). Features C_n (n=-2,-1,0,1,2) are the Chinese characters appearing in different positions (the current character and the two characters to its left and right), and they are binary features. A character list, which contains all the characters in the lexicon introduced later, is used to identify them. Besides that, feature Pu(C_0) indicates whether C_0 is in a punctuation character list. It is also a binary feature, and all the punctuation marks in the punctuation character list come from Penn Chinese Treebank 5.1 (N. Xue et al., 2002).",
"cite_spans": [
{
"start": 546,
"end": 562,
"text": "Xue et al.,2002)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Definition",
"sec_num": "2.3"
},
{
"text": "In addition, we define some word-level features based on a lexicon to enlarge the window of text in the two tasks, which are: (c) W_n (n=-1,0,1). Features W_n (n=-1,0,1) are the lexicon words in different positions (the word containing C_0 and one word to its left and right), and they are also binary features. We select all the possible words in the lexicon that satisfy the requirements, rather than selecting only the longest one as in (J.K. Low et al., 2005). To create the lexicon, we use the following steps. First, a lexicon from NICT (National Institute of Information and Communications Technology, Japan) is used as the basic lexicon; it is extracted from the Peking University Corpus of the second SIGHAN Bakeoff (T. Emerson, 2005), Penn Chinese Treebank 4.0 (N. Xue et al., 2002), a Chinese-to-English Wordlist (http://projects.ldc.upenn.edu/Chinese/) and part of the NICT corpus (K. Uchimoto et al., 2004; Y.J. Zhang et al., 2005). Then, all the words containing digits and letters are removed from this lexicon. Finally, all the punctuation marks in Penn Chinese Treebank 5.1 (N. Xue et al., 2002) and all the words in the training data of the UPUC and MSRA corpora are added to the lexicon.",
"cite_spans": [
{
"start": 443,
"end": 459,
"text": "Low et al.,2005)",
"ref_id": null
},
{
"start": 721,
"end": 735,
"text": "Emerson, 2005)",
"ref_id": null
},
{
"start": 768,
"end": 784,
"text": "Xue et al.,2002)",
"ref_id": null
},
{
"start": 847,
"end": 868,
"text": "Uchimoto et al.,2004;",
"ref_id": null
},
{
"start": 869,
"end": 891,
"text": "Y.J.Zhang et al.,2005)",
"ref_id": "BIBREF10"
},
{
"start": 1079,
"end": 1095,
"text": "Xue et al.,2002)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Definition",
"sec_num": "2.3"
},
{
"text": "Besides the above features, some extra features are defined only for the NER task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Definition",
"sec_num": "2.3"
},
{
"text": "First, we add some character-based features to improve the accuracy of person name recognition, which are CN_n (n=-2,-1,0,1,2). They indicate whether C_n (n=-2,-1,0,1,2) belongs to a Chinese surname list. All of them are binary features. The Chinese surname list contains the 100 most common Chinese surnames, such as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Definition",
"sec_num": "2.3"
},
{
"text": "\u8d75, \u94b1, \u5b59, \u674e (Zhao, Qian, Sun, Li).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Definition",
"sec_num": "2.3"
},
{
"text": "Then, we add some word-based features to help identify organization names, which are WORG_n (n=-1,0,1). They indicate whether W_n (n=-1,0,1) belongs to an organization suffix list. All of them are also binary features. The organization suffix list is created by extracting the last word from all the organization names in the training data of both the MSRA and LDC corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Definition",
"sec_num": "2.3"
},
{
"text": "Besides the basic model, a NER module and a factoid identification module are developed in our word segmentation system for post-processing. In addition, we define an in-NE probability to delete incorrect named entities and identify new named entities in the post-processing phase of our NER system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing",
"sec_num": "3"
},
{
"text": "In this module, if two or more segments in the output of the basic model are recognized as one named entity, we combine them into one segment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Recognition for Word Segmentation",
"sec_num": "3.1"
},
{
"text": "This module uses the same basic NER model as introduced in the previous section, but it identifies only person and location names, because organization names often contain more than one word. In addition, to keep the high accuracy of person name recognition, the features about organization suffixes are not used here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Recognition for Word Segmentation",
"sec_num": "3.1"
},
{
"text": "Rules are used to identify the following factoids among the segments from the basic word segmentation model: NUMBER (integer, decimal, Chinese number), PERCENT (percentage and fraction), DATE (date) and FOREIGN (English words). Table 3 shows some of the rules defined here.",
"cite_spans": [],
"ref_spans": [
{
"start": 217,
"end": 224,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Factoid Identification for Word Segmentation",
"sec_num": "3.2"
},
{
"text": "In-word probability has been used successfully in unknown word identification (H.Q. Li et al., 2004). Accordingly, we define an in-NE probability to help delete and create named entities (NE).",
"cite_spans": [
{
"start": 84,
"end": 100,
"text": "Li et al., 2004)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NER Deletion and Creation",
"sec_num": "3.3"
},
{
"text": "Formula 3 shows the definition of the in-NE probability for the character sequence c_i c_{i+1} \u2026 c_{i+n}. Here '# of c_i c_{i+1} \u2026 c_{i+n} as NE' is defined as Time_InNE, and the occurrence of c_i c_{i+1} \u2026 c_{i+n} in different types of NE is treated differently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NER Deletion and Creation",
"sec_num": "3.3"
},
{
"text": "P_{InNE}(c_i c_{i+1} \\ldots c_{i+n}) = \\frac{\\# \\text{ of } c_i c_{i+1} \\ldots c_{i+n} \\text{ as NE}}{\\# \\text{ of } c_i c_{i+1} \\ldots c_{i+n} \\text{ in testing data}} \\quad (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NER Deletion and Creation",
"sec_num": "3.3"
},
{
"text": "Then, we use some criteria to delete incorrect NEs and create possible new NEs, in which different thresholds are set for different tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NER Deletion and Creation",
"sec_num": "3.3"
},
{
"text": "Criterion 1: If P_{InNE}(c_i c_{i+1} \u2026 c_{i+n}) of one NE type is lower than T_{Del}, and Time_{InNE}(c_i c_{i+1} \u2026 c_{i+n}) of the same NE type is also lower than T_{Time}, then delete this type of NE composed of c_i c_{i+1} \u2026 c_{i+n}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NER Deletion and Creation",
"sec_num": "3.3"
},
{
"text": "Criterion 2: If P_{InNE}(c_i c_{i+1} \u2026 c_{i+n}) of one NE type is higher than T_{Cre}, and in other places the character sequence c_i c_{i+1} \u2026 c_{i+n} does not belong to any NE, then create a new NE containing c_i c_{i+1} \u2026 c_{i+n} with this NE type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NER Deletion and Creation",
"sec_num": "3.3"
},
{
"text": "SVMlight (T. Joachims, 1999) was used as the SVM tool. In addition, we used the MSRA training corpus of the NER task in this Bakeoff to train our NER post-processing module.",
"cite_spans": [
{
"start": 13,
"end": 28,
"text": "Joachims, 1999)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Setting",
"sec_num": "4.1"
},
{
"text": "We participated in the open track of the word segmentation task for two corpora: UPUC and MSRA. Table 4 shows the evaluation results. The F-score of our word segmentation system on the UPUC corpus ranked 4th (tied with the 3rd group) among all 8 participants, only 1.1% lower than the highest score and 0.2% lower than the second. This showed that our character-tagging approach was feasible. But the F-score on the MSRA corpus was higher than that of only one participant among all 10 groups (the highest was 97.9%). Error analysis shows two main reasons.",
"cite_spans": [],
"ref_spans": [
{
"start": 86,
"end": 93,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results of Word Segmentation",
"sec_num": "4.2"
},
{
"text": "First, the MSRA corpus tends to segment an organization name as one word, such as \u7f8e\u56fd\u4e2d\u56fd\u5546\u4f1a (China Chamber of Commerce in USA). But our basic segmentation model segmented such a word into several words, e.g. \u7f8e\u56fd/\u4e2d\u56fd/\u5546\u4f1a (USA/China/Chamber of Commerce), and our post-processing NER module does not consider organization names.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results of Word Segmentation",
"sec_num": "4.2"
},
{
"text": "Second, our factoid identification rules did not combine consecutive DATE factoids into one word, but they are combined in the MSRA corpus. For example, our system segmented the word \u665a\u4e0a9\u65f6\u6574 (9 o'clock in the evening) into three parts \u665a\u4e0a/9\u65f6/\u6574 (Evening/9 o'clock/Exact). This error can be solved by revising the rules for factoid identification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results of Word Segmentation",
"sec_num": "4.2"
},
{
"text": "Besides that, we also found that although our large lexicon helped identify known words successfully, it decreased the recall of OOV words: our Riv on the UPUC corpus ranked 2nd, only 0.6% below the highest, but our Roov ranked 4th, 8.8% below the highest. The large size of this lexicon is regarded as the main reason.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results of Word Segmentation",
"sec_num": "4.2"
},
{
"text": "Our lexicon contains 221,407 words, of which 6,400 are single-character words. This made our system prone to segmenting one word into several words; for example, the word \u7ecf\u6d4e\u7ec4 (Economy Group) in the UPUC corpus was segmented into \u7ecf\u6d4e (Economy) and \u7ec4 (Group). Moreover, the large size of this lexicon also caused errors of combining two words into one word when the combination was in the lexicon. For example, the words \u53ea (Only) and \u6709 (Have) in the MSRA corpus were identified as one word because the word \u53ea\u6709 (Only) existed in our lexicon. We will reduce our lexicon to a reasonable size to solve these problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results of Word Segmentation",
"sec_num": "4.2"
},
{
"text": "We also participated in the open track of the NER task for both the LDC corpus and the MSRA corpus. Table 5 and Table 6 give the evaluation results.",
"cite_spans": [],
"ref_spans": [
{
"start": 81,
"end": 88,
"text": "Table 5",
"ref_id": "TABREF4"
},
{
"start": 93,
"end": 100,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results of NER",
"sec_num": "4.3"
},
{
"text": "There were only 3 participants in the open track of the LDC corpus, and our group got the best F-score. In addition, among all 11 participants for the MSRA corpus, our system ranked 6th by F-score. This showed the validity of our character-tagging method for NER. But for location names (LOC) in the LDC corpus, both the precision and recall of our NER system were very low. This was because there were too few location names in the training data (only 476 LOC, compared with 5648 PER, 5190 ORG and 9545 GPE in the same data set). Besides that, error analysis shows four main types of errors in the NER results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results of NER",
"sec_num": "4.3"
},
{
"text": "First, some organization names were very long and could be divided into several words, parts of which could themselves be regarded as named entities. In such cases, our system recognized only the smaller parts as named entities. In addition, our system was not good at recognizing foreign person names, such as \u8d56\u5c14\u767b (Riordan), and abbreviations, such as \u6d1b\u5e02 (Los Angeles), if they seldom or never appeared in the training corpus. This is because the use of the large lexicon simultaneously decreased the unknown word identification ability of our NER system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results of NER",
"sec_num": "4.3"
},
{
"text": "\u54c8\u4f5b\u5927\u5b66\u8d39\u6b63\u6e05\u4e1c\u4e9a\u7814\u7a76\u4e2d\u5fc3 (Fei",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results of NER",
"sec_num": "4.3"
},
{
"text": "Third, the in-NE probability used in post-processing helps identify named entities that cannot be recognized by the basic model. But it also incorrectly recognized some words that can be regarded as named entities only in a local context. For example, our system recognized \u5357\u4eac (Nanjing) as GPE in \u9001\u5230\u5357\u4eac\u533b\u6cbb (Send to Nanjing for treatment) in the LDC corpus. We will consider adding the in-NE probability as a feature of the basic model to solve this problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results of NER",
"sec_num": "4.3"
},
{
"text": "Finally, the LDC corpus combines the attributive modifier of a named entity (especially person and organization names) with the named entity itself, but our system recognized only the named entity. For example, our system recognized only \u5218\u6842\u82b3 (Liu Gui Fang) as PER in the reference person name \u4e0d\u77e5\u5185\u60c5\u7684\u5218\u6842\u82b3 (Liu Gui Fang, who does not know the inside story).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results of NER",
"sec_num": "4.3"
},
{
"text": "Through participation in the third SIGHAN Bakeoff, we found that tagging characters with both character-level and word-level features was effective for both word segmentation and NER. However, this work is only a preliminary attempt, and much work remains for the future, such as controlling the lexicon size, using extra knowledge (e.g. POS tags), and refining the feature definitions. In addition, our word segmentation system combined the NER module only as post-processing, so much information from the NER module could not be used by the basic model. In future work, we will consider integrating the NER and factoid identification modules into the basic word segmentation model by defining new tag sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "We would like to thank Dr. Kiyotaka Uchimoto for providing the NICT lexicon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Second International Chinese Word Segmentation Bakeoff",
"authors": [
{
"first": "T",
"middle": [],
"last": "Emerson",
"suffix": ""
}
],
"year": 2005,
"venue": "the 4th SIGHAN Workshop",
"volume": "",
"issue": "",
"pages": "123--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T.Emerson. 2005. The Second International Chinese Word Segmentation Bakeoff. In the 4th SIGHAN Workshop. pp. 123-133.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "HowtogetaChineseName(Entity): Segmentation and Combination Issues",
"authors": [
{
"first": "H",
"middle": [],
"last": "Jing",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "200--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H.Jing et al. 2003. HowtogetaChineseName(Entity): Segmentation and Combination Issues. In EMNLP 2003. pp. 200-207.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Making large-scale SVM learning practical",
"authors": [
{
"first": "T",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1999,
"venue": "Advances in Kernel Methods -Support Vector Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T.Joachims. 1999. Making large-scale SVM learning practical. Advances in Kernel Methods - Support Vector Learning. MIT Press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The Use of SVM for Chinese New Word Identification",
"authors": [
{
"first": "H",
"middle": [
"Q"
],
"last": "Li",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "723--732",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H.Q.Li et al. 2004. The Use of SVM for Chinese New Word Identification. In IJCNLP 2004. pp. 723-732.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Maximum Entropy Approach to Chinese Word Segmentation",
"authors": [
{
"first": "J",
"middle": [
"K"
],
"last": "Low",
"suffix": ""
}
],
"year": 2005,
"venue": "the 4th SIGHAN Workshop",
"volume": "",
"issue": "",
"pages": "161--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.K.Low et al. 2005. A Maximum Entropy Approach to Chinese Word Segmentation. In the 4th SIGHAN Workshop. pp. 161-164.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Chinese and Japanese Word Segmentation Using Word-level and Character-level Information",
"authors": [
{
"first": "T",
"middle": [],
"last": "Nakagawa",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "466--472",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T.Nakagawa. 2004. Chinese and Japanese Word Segmentation Using Word-level and Character-level Information. In COLING 2004. pp. 466-472.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Maximum Entropy Model for Part-of-Speech Tagging",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
}
],
"year": 1996,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A.Ratnaparkhi. 1996. A Maximum Entropy Model for Part-of-Speech Tagging. In EMNLP 1996.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Machine learning in automated text categorization",
"authors": [
{
"first": "F",
"middle": [],
"last": "Sebastiani",
"suffix": ""
}
],
"year": 2002,
"venue": "ACM Computing Surveys",
"volume": "34",
"issue": "1",
"pages": "1--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F.Sebastiani. 2002. Machine learning in automated text categorization. ACM Computing Surveys. 34(1): 1-47.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Multilingual Aligned Parallel Treebank Corpus Reflecting Contextual Information and its Applications",
"authors": [
{
"first": "K",
"middle": [],
"last": "Uchimoto",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the MLR",
"volume": "",
"issue": "",
"pages": "63--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K.Uchimoto et al. 2004. Multilingual Aligned Parallel Treebank Corpus Reflecting Contextual Information and its Applications. In Proceedings of the MLR 2004. pp. 63-70.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Building a Large-Scale Annotated Chinese Corpus",
"authors": [
{
"first": "N",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N.Xue et al. 2002. Building a Large-Scale Annotated Chinese Corpus. In COLING 2002.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Building an Annotated Japanese-Chinese Parallel Corpus -A part of NICT Multilingual Corpora",
"authors": [
{
"first": "Y",
"middle": [
"J"
],
"last": "Zhang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the MT Summit X",
"volume": "",
"issue": "",
"pages": "71--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y.J.Zhang et al. 2005. Building an Annotated Japanese-Chinese Parallel Corpus - A part of NICT Multilingual Corpora. In Proceedings of the MT Summit X. pp. 71-78.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"content": "<table><tr><td colspan=\"3\">Tags of NER for MSRA corpus</td></tr><tr><td>Segment-Tag B, I, E, S</td><td>\u00d7</td><td>NE-Tag PER, LOC, ORG, O</td></tr></table>",
"type_str": "table",
"text": "",
"num": null
},
"TABREF1": {
"html": null,
"content": "<table><tr><td/><td/><td>Tags of NER for LDC corpus</td><td/><td/></tr><tr><td>Segment Tag B, I, E, S</td><td>\u00d7</td><td>NE Tag PER, LOC, ORG, GPE</td><td>+</td><td>NONE</td></tr></table>",
"type_str": "table",
"text": "",
"num": null
},
"TABREF2": {
"html": null,
"content": "<table><tr><td>DATE</td><td>If previous segment is composed of DIGIT and current segment is in the list of '\u5e74, \u6708, \u65e5, \u53f7 (Year, Month, Day, Day)', then combine them.</td></tr><tr><td>FOREIGN</td><td>Combine the consequent letters as one segment.</td></tr><tr><td colspan=\"2\">(DIGIT means both Arabic and Chinese numerals)</td></tr></table>",
"type_str": "table",
"text": "Some Rules for Factoid Identification. NUMBER: If previous segment ends with DIGIT and current segment starts with DIGIT, then combine them. PERCENT: If previous segment is composed of DIGIT and current segment equals '%', then combine them.",
"num": null
},
"TABREF3": {
"html": null,
"content": "<table><tr><td colspan=\"6\">Results of Word Segmentation Task (in percentage %)</td></tr><tr><td>Corpus</td><td>Pre.</td><td>Rec.</td><td>F-score</td><td>Roov</td><td>Riv</td></tr><tr><td>UPUC</td><td>94.4</td><td>92.2</td><td>93.3</td><td>68.0</td><td>97.0</td></tr><tr><td>MSRA</td><td>94.0</td><td>95.3</td><td>94.7</td><td>50.3</td><td>96.9</td></tr></table>",
"type_str": "table",
"text": "",
"num": null
},
"TABREF4": {
"html": null,
"content": "<table><tr><td colspan=\"7\">Results of NER Task for LDC corpus (in percentage %)</td></tr><tr><td/><td/><td>PER</td><td/><td>LOC</td><td>ORG</td><td>GPE</td><td>Overall</td></tr><tr><td>Pre.</td><td colspan=\"2\">83.29</td><td colspan=\"2\">58.52</td><td>61.48</td><td>78.66</td><td>76.16</td></tr><tr><td>Rec.</td><td colspan=\"2\">66.93</td><td colspan=\"2\">18.87</td><td>45.19</td><td>79.94</td><td>66.21</td></tr><tr><td>F-score</td><td colspan=\"2\">74.22</td><td colspan=\"2\">28.57</td><td>52.09</td><td>79.30</td><td>70.84</td></tr><tr><td colspan=\"7\">Table 6 Results of NER Task for MSRA corpus (in percentage %)</td></tr><tr><td/><td/><td>PER</td><td/><td colspan=\"2\">LOC</td><td>ORG</td><td>Overall</td></tr><tr><td>Pre.</td><td/><td colspan=\"2\">90.76</td><td colspan=\"2\">85.62</td><td>73.90</td><td>84.68</td></tr><tr><td>Rec.</td><td/><td colspan=\"2\">76.13</td><td colspan=\"2\">85.41</td><td>65.74</td><td>78.22</td></tr><tr><td colspan=\"2\">F-score</td><td colspan=\"2\">82.80</td><td colspan=\"2\">85.52</td><td>69.58</td><td>81.32</td></tr></table>",
"type_str": "table",
"text": "",
"num": null
}
}
}
}