| { |
| "paper_id": "W06-0115", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T04:05:57.399665Z" |
| }, |
| "title": "The Third International Chinese Language Processing Bakeoff: Word Segmentation and Named Entity Recognition", |
| "authors": [ |
| { |
| "first": "Gina-Anne", |
| "middle": [], |
| "last": "Levow", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Chicago", |
| "location": { |
| "addrLine": "1100 E. 58th St. Chicago", |
| "postCode": "60637", |
| "region": "IL", |
| "country": "USA" |
| } |
| }, |
| "email": "levow@cs.uchicago.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "The Third International Chinese Language Processing Bakeoff was held in Spring 2006 to assess the state of the art in two important tasks: word segmentation and named entity recognition. Twenty-nine groups submitted result sets in the two tasks across two tracks and a total of five corpora. We found strong results in both tasks as well as continuing challenges.", |
| "pdf_parse": { |
| "paper_id": "W06-0115", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "The Third International Chinese Language Processing Bakeoff was held in Spring 2006 to assess the state of the art in two important tasks: word segmentation and named entity recognition. Twenty-nine groups submitted result sets in the two tasks across two tracks and a total of five corpora. We found strong results in both tasks as well as continuing challenges.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "Many important natural language processing tasks, ranging from part-of-speech tagging to parsing to reference resolution and machine translation, assume the ready availability of a tokenization into words. While such tokenization is relatively straightforward in languages which use whitespace to delimit words, Chinese presents a significant challenge since it is typically written without such separation. Word segmentation has thus long been the focus of significant research because of its role as a necessary pre-processing phase for the tasks above. However, word segmentation remains a significant challenge, both because of the difficulty of the task itself and because segmentation standards vary and human segmenters often disagree.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "SIGHAN, the Special Interest Group for Chinese Language Processing of the Association for Computational Linguistics, conducted two prior word segmentation bakeoffs, in 2003 and 2005 (Emerson, 2005), which established benchmarks for word segmentation against which other systems are judged. The bakeoff presentations at SIGHAN workshops highlighted new approaches in the field as well as the crucial importance of handling out-of-vocabulary (OOV) words.",
| "cite_spans": [ |
| { |
| "start": 168, |
| "end": 172, |
| "text": "2003", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 173, |
| "end": 181, |
| "text": "and 2005", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 182, |
| "end": 196, |
| "text": "(Emerson, 2005", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A significant class of OOV words is Named Entities, such as person, location, and organization names. These terms are frequently poorly covered in lexical resources and change over time as new individuals, institutions, or products appear. These terms also play a particularly crucial role in information retrieval, reference resolution, and question answering. As a result of this importance, and interest in expanding the scope of the bakeoff expressed at the Fourth SIGHAN Workshop, in the Winter of 2005 it was decided to hold a new bakeoff to evaluate both continued progress in Word Segmentation (WS) and the state of the art in Chinese Named Entity Recognition (NER).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Five corpora were provided for the evaluation: three in Simplified characters and two in Traditional characters. The Simplified character corpora were provided by Microsoft Research Asia (MSRA) for WS and NER, by University of Pennsylvania/University of Colorado (UPUC) for WS, and by the Linguistic Data Consortium (LDC) for NER. The Traditional character corpora were provided by City University of Hong Kong (CITYU) for WS and NER and by the Chinese Knowledge Information Processing Laboratory (CKIP) of the Academia Sinica, Taiwan, for WS. Each data provider offered separate training and test corpora. General information for each corpus appears in Table 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 654,
"end": 661,
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Corpora", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "All data providers were requested to supply the training and test corpora in both the standard local encoding and in Unicode (UTF-8) in a standard XML format with sentence and word tags, and named entity tags if appropriate. For", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpora", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "all providers except the LDC, missing encodings were transcoded by the organizers using the appropriate Python CJK codecs. Primary training and truth data for word segmentation were generated by the organizers via a Python script by replacing sentence-end tags with newlines and word-end tags with a single whitespace character, deleting all other tags and associated newlines. For test data, end-of-sentence tags were replaced with newlines and all other tags removed. Since the UPUC truth corpus was only provided in whitespace-separated form, test data was created by automatically deleting line-internal whitespace.",
"cite_spans": [],
"ref_spans": [],
| "eq_spans": [], |
| "section": "Encodings", |
| "sec_num": null |
| }, |
| { |
"text": "Primary training and truth data for named entity recognition were converted from the provided XML format to a two-column format similar to that used in the CoNLL 2002 NER task (Sang, 2002) adapted for Chinese, where the first column is the current character and the second column the corresponding tag. Format details may be found at the bakeoff website (http://www.sighan.org/bakeoff2006/). For consistency, we tagged only \"<NAMEX>\" mentions, of either (PER)SON, (LOC)ATION, (ORG)ANIZATION, or (G)EO-(P)OLITICAL (E)NTITY as annotated in the corpora. 1 Test data was generated as above.",
| "cite_spans": [ |
| { |
| "start": 176, |
| "end": 188, |
| "text": "(Sang, 2002)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Encodings", |
| "sec_num": null |
| }, |
| { |
"text": "The LDC required sites to download training data directly from its website in the ACE 2 evaluation format, restricted to \"NAM\" mentions. The organizers provided the sites with a Python script to convert the LDC data to the CoNLL format above, and the same script was used to create the truth data. Test data was created by splitting on newlines or Chinese period characters.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Encodings", |
| "sec_num": null |
| }, |
| { |
| "text": "Comparable XML format data was also provided for all corpora and both tasks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Encodings", |
| "sec_num": null |
| }, |
| { |
"text": "The segmentation and, as appropriate, NER annotation standards for each corpus were made available on the bakeoff website. As observed in previous evaluations, these documents varied widely in length, detail, and presentation language.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Encodings", |
| "sec_num": null |
| }, |
| { |
| "text": "Except as noted above, no additional changes were made to the data furnished by the providers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Encodings", |
| "sec_num": null |
| }, |
| { |
"text": "The Third Bakeoff followed the structure of the first two word segmentation bakeoffs. Participating groups (\"sites\") registered via an email form; only the primary contact was required to register, identifying the corpora and tasks of interest. Training data was released for download from the websites (both SIGHAN and LDC) on April 17, 2006. Test data was released on May 15, 2006, and results were due by 14:00 GMT on May 17. Scores for all submitted runs were emailed to the individual groups by May 19 and were made available to all groups on a web page a few days later.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rules and Procedures", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Groups could participate in either or both of two tracks for each task and corpus:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rules and Procedures", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "\u2022 In the open track, participants could use any external data they chose in addition to the provided training data. Such data could include external lexica, name lists, gazetteers, part-of-speech taggers, etc. Groups were required to specify this information in their system descriptions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rules and Procedures", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "\u2022 In the closed track, participants could only use information found in the provided training data. Information such as externally obtained word counts, part of speech information, or name lists was excluded.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rules and Procedures", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Groups were required to submit fully automatic runs and were prohibited from testing on corpora which they had previously used.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rules and Procedures", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Scoring was performed automatically using a combination of Python and Perl scripts, facilitated by stringent file naming conventions. In cases where naming errors or minor divergences from required file formats arose, a mix of manual intervention and automatic conversion was employed to enable scoring. The primary scoring scripts were made available to participants for followup experiments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rules and Procedures", |
| "sec_num": "2.2" |
| }, |
| { |
"text": "A total of 36 sites registered, and 29 submitted results for scoring. The greatest number of participants came from the People's Republic of China (11), followed by Taiwan (7), the United States (5), Japan (2), with one team each from Singapore, Korea, Hong Kong, and Canada. A summary of participating groups with task and track information appears in Table 2. A total of 144 official runs were scored: 101 for word segmentation and 43 for named entity recognition.",
"cite_spans": [],
"ref_spans": [
{
"start": 353,
"end": 360,
"text": "Table 2",
"ref_id": "TABREF2"
}
| ], |
| "eq_spans": [], |
| "section": "Participating Sites", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We report results below first for word segmentation and second for named entity recognition.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results & Discussion", |
| "sec_num": "4" |
| }, |
| { |
"text": "To provide a basis for comparison, we computed baseline and possible topline scores for each of the corpora. The baseline was constructed by left-to-right maximal match implemented by a Python script, using the training corpus vocabulary. The topline employed the same procedure, but instead used the test vocabulary. These results are shown in Tables 3 and 4. For the WS task, we computed the following measures using the score (Sproat and Emerson, 2003) program developed for the previous bakeoffs: recall (R), precision (P), equally weighted F-measure (F = 2PR/(P+R)), the rate of out-of-vocabulary words (OOV rate) in the test corpus, the recall on OOV words (R_oov), and the recall on in-vocabulary words (R_iv). In-vocabulary and out-of-vocabulary status are defined relative to the training corpus. Following previous bakeoffs, we employ the Central Limit Theorem for Bernoulli trials (Grinstead and Snell, 1997) to compute a 95% confidence interval as \u00b12\u221a(p(1\u2212p)/n), assuming the binomial distribution is appropriate. For recall, C_r, we set p to the recall value, treating recall as the probability of correct word identification. Symmetrically, for precision, we compute C_p, setting p to the precision value. One can then determine whether two systems are significantly different at the 95% confidence level by checking whether their confidence intervals overlap.",
"cite_spans": [
{
"start": 429,
"end": 455,
"text": "(Sproat and Emerson, 2003)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 345,
"end": 360,
"text": "Tables 3 and 4.",
"ref_id": "TABREF3"
| } |
| ], |
| "eq_spans": [], |
| "section": "Word Segmentation Results", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "Word segmentation results for all runs grouped by corpus and track appear in Tables 5-12; all tables are sorted by F-score.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 77, |
| "end": 88, |
| "text": "Tables 5-12", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Word Segmentation Results", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "Across all corpora, the best F-score was achieved on the MSRA Open Track, at 0.979. Overall, as would be expected, the best Open Track runs had higher F-scores than the best Closed Track runs on the same corpora. Likewise, the OOV recall rates of the best Open Track systems exceeded those of the best Closed Track runs on comparable corpora, by exploiting outside information. Unfortunately, few sites submitted runs in both conditions, making strong direct comparisons difficult.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Segmentation Discussion", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Many systems strongly outperformed the baseline runs, though none achieved the topline. The closest approach to the topline score was on the CITYU corpus, with the best performing runs achieving 99% of the topline F-score.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Segmentation Discussion", |
| "sec_num": "4.2" |
| }, |
| { |
"text": "It is also informative to observe the rather wide variation in scores across the test corpora. The maximum scores were achieved on the MSRA corpus, closely followed by the CITYU corpus. The best score achieved in the UPUC Open Track condition, however, was lower than all scores but one on the MSRA Open Track. A comparison of the baseline, topline, and especially the OOV rates may shed some light on this disparity. The UPUC training corpus was only about one-third the size of the MSRA corpus, and the OOV rate for UPUC was more than double that of any of the other corpora, yielding a challenging task, especially in the Closed Track. This high OOV rate may also be attributed to a change in register, since the training data for UPUC had been drawn exclusively from the Chinese Treebank while the test data also included material from other newswire and broadcast news sources. In contrast, the MSRA corpus had both the highest baseline and highest topline scores, possibly indicating an easier corpus in some sense. The differences in topline also suggest a greater degree of variance in the UPUC corpus, and in fact all other corpora, relative to the MSRA corpus. These differences highlight the continuing challenges of handling out-of-vocabulary words and performing segmentation across different registers.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Segmentation Discussion", |
| "sec_num": "4.2" |
| }, |
| { |
"text": "We employed a slightly modified version of the CoNLL 2002 scoring script to evaluate NER task submissions. For each submission, we compute overall phrase precision (P), recall (R), and balanced F-measure (F), as well as F-measures for each entity type (PER-F, ORG-F, LOC-F, GPE-F).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Named Entity Results", |
| "sec_num": "4.3" |
| }, |
| { |
"text": "For each corpus, we compute a baseline performance level as follows. Based on the training data, using a left-to-right pass over the test data, we assign a named entity tag to a span of characters if it was tagged with a single unique NE tag (PER/LOC/ORG/GPE) in the training data. 3 In the case of overlapping spans, we tag the maximal span. These scores for all NER corpora are found in Table 13.",
"cite_spans": [],
"ref_spans": [
{
"start": 389,
"end": 397,
| "text": "Table 13", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Named Entity Results", |
| "sec_num": "4.3" |
| }, |
| { |
"text": "Though fewer sites participated in the NER task, performances overall were very strong, with only two runs performing below baseline. The best F-score overall, on the MSRA Open Track, reached 0.912, with ten other scores for the MSRA and CITYU Open Tracks above 0.85. Only two sites submitted runs in both Open and Closed Track conditions, and few Open Track runs were submitted at all, again limiting comparability. For the only corpus with substantial numbers of both Open and Closed Track runs, MSRA, the top three Open Track runs outperformed all Closed Track runs.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Named Entity Discussion", |
| "sec_num": "4.4" |
| }, |
| { |
"text": "System scores and baselines were much higher for the CITYU and MSRA corpora than for the LDC corpus. This disparity can, in part, be attributed to a substantially smaller training corpus for the LDC than for the other two collections. The presence of an additional category, Geo-political entity, which can be confused with either location or organization, also increases the difficulty of this corpus. Training requirements, variation across corpora, and more extensive tag sets will continue to raise challenges for named entity recognition.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Named Entity Discussion", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Named entity recognition results for all runs grouped by corpus and track appear in Tables 14-19; all tables are sorted by F-score. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Named Entity Discussion", |
| "sec_num": "4.4" |
| }, |
| { |
"text": "The Third SIGHAN Chinese Language Processing Bakeoff successfully brought together 29 strong research groups to assess the progress of research in two important tasks, word segmentation and named entity recognition, which in turn enable other important language processing technologies. The individual group presentations at the SIGHAN workshop detail the approaches that yielded strong performance on both tasks. Issues of out-of-vocabulary word handling, annotation consistency, character encoding, and code mixing of Chinese and non-Chinese text all continue to challenge system designers and bakeoff organizers alike. In the future, we hope to develop additional analysis tools to better assess progress in these fundamental tasks in a more corpus-independent fashion. Microsoft Research Asia has been pursuing work along these lines, focusing on improvements in F-score and OOV F-score relative to more intrinsic corpus measures, such as baselines and toplines. 5 Such developments will guide the planning of future evaluations.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions & Future Directions", |
| "sec_num": "5" |
| }, |
| { |
"text": "Finally, while word segmentation and named entity recognition are important in themselves, it is also important to assess the impact of improvements in these enabling technologies on broader downstream applications. More tightly coupled experiments that involve joint word segmentation and named entity recognition could provide insight. Integration of WS and NER with a higher-level task such as parsing, reference resolution, or machine translation could allow the development of more refined, task-oriented metrics to evaluate WS and NER and focus attention on improvements to the fundamental techniques which enhance performance on higher-level tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions & Future Directions",
"sec_num": "5"
},
{
"text": "5 Personal communication, Mu Li, Microsoft Research Asia.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions & Future Directions",
"sec_num": "5"
},
{
"text": "1 Only the LDC provided GPE tags. 2 http://www.ldc.upenn.edu/projects/ACE",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "3 If the span was a single character and appeared untagged in the corpus, we exclude it. Longer spans are retained for tagging even if they might appear both tagged and untagged in the training corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "4 This result indicates a rescoring of the run below with all GPE tags in the truth data mapped to LOC, since no GPE tags were present in the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We also thank Hwee Tou Ng and Olivia Oi Yee Kwong, the co-organizers of the fifth SIGHAN workshop, in conjunction with which this bakeoff takes place. Olivia Kwong merits special thanks both for her help in co-organizing this bakeoff and in coordinating publications. Finally, we thank all the participating sites who enabled the success of this bakeoff.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "annex", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "The Second International Chinese Word Segmentation Bakeoff", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [ |
| "Emerson" |
| ], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Thomas Emerson. 2005. The Second International Chinese Word Segmentation Bakeoff. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, Jeju Island, Republic of Korea.",
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Introduction to Probability", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Charles", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "Laurie" |
| ], |
| "last": "Grinstead", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Snell", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Charles M. Grinstead and J. Laurie Snell. 1997. Introduction to Probability. American Mathematical Society, Providence, RI.",
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Introduction to the CoNLL-2002 Shared Task: Language-Independent Named Entity Recognition", |
| "authors": [ |
| { |
| "first": "Erik", |
| "middle": [ |
| "F" |
| ], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Tjong Kim", |
| "middle": [], |
| "last": "Sang", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 6th Conference on Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 Shared Task: Language-Independent Named Entity Recognition. In Proceedings of the 6th Conference on Natural Language Learning 2002 (CoNLL-2002).", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "The First International Chinese Word Segmentation Bakeoff", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Sproat", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Emerson", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the Second SIGHAN Workshop on Chinese Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Sproat and Thomas Emerson. 2003. The First International Chinese Word Segmentation Bakeoff. In Proceedings of the Second SIGHAN Workshop on Chinese Language Processing, Sapporo, Japan.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF2": { |
| "text": "Participating Sites by Corpus, Task, and Track", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td colspan=\"6\">Source Recall Precision F-measure OOV Rate R oov</td><td>R iv</td></tr><tr><td colspan=\"2\">CITYU 0.930</td><td>0.882</td><td>0.906</td><td>0.040</td><td colspan=\"2\">0.009 0.969</td></tr><tr><td>CKIP</td><td>0.915</td><td>0.870</td><td>0.892</td><td>0.042</td><td colspan=\"2\">0.030 0.954</td></tr><tr><td colspan=\"2\">MSRA 0.949</td><td>0.900</td><td>0.924</td><td>0.034</td><td colspan=\"2\">0.022 0.981</td></tr><tr><td>UPUC</td><td>0.869</td><td>0.790</td><td>0.828</td><td>0.088</td><td colspan=\"2\">0.011 0.951</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF3": { |
"text": "Baselines: WS: Maximum match with training vocabulary",
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>R iv</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF4": { |
| "text": "CITYU: Word Segmentation: Closed Track", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td colspan=\"2\">Site RunID</td><td>R</td><td>Cr</td><td>P</td><td>Cp</td><td>F</td><td>Roov</td><td>Riv</td></tr><tr><td>20</td><td/><td colspan=\"7\">0.978 \u00b10.000625 0.977 \u00b10.000639 0.977 0.840 0.984</td></tr><tr><td>32</td><td/><td colspan=\"7\">0.979 \u00b10.000611 0.976 \u00b10.000652 0.977 0.813 0.985</td></tr><tr><td>34</td><td/><td colspan=\"7\">0.971 \u00b10.000715 0.967 \u00b10.000761 0.969 0.795 0.978</td></tr><tr><td>22</td><td/><td colspan=\"7\">0.970 \u00b10.000727 0.965 \u00b10.000783 0.967 0.761 0.979</td></tr><tr><td>2</td><td/><td colspan=\"7\">0.964 \u00b10.000794 0.964 \u00b10.000794 0.964 0.787 0.971</td></tr><tr><td>13</td><td>2</td><td colspan=\"7\">0.544 \u00b10.002123 0.549 \u00b10.002121 0.547 0.194 0.559</td></tr><tr><td>13</td><td>3</td><td colspan=\"7\">0.524 \u00b10.002129 0.503 \u00b10.002131 0.513 0.195 0.538</td></tr><tr><td>13</td><td>1</td><td colspan=\"7\">0.497 \u00b10.002131 0.467 \u00b10.002127 0.481 0.057 0.516</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF5": { |
| "text": "CITYU: Word Segmentation: Open Track", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td colspan=\"2\">Site RunID</td><td>R</td><td>Cr</td><td>P</td><td>Cp</td><td>F</td><td>Roov</td><td>Riv</td></tr><tr><td>20</td><td/><td colspan=\"7\">0.961 \u00b10.001280 0.955 \u00b10.001371 0.958 0.702 0.972</td></tr><tr><td>15</td><td>a</td><td colspan=\"7\">0.961 \u00b10.001280 0.953 \u00b10.001400 0.957 0.658 0.974</td></tr><tr><td>15</td><td>b</td><td colspan=\"7\">0.961 \u00b10.001280 0.952 \u00b10.001414 0.957 0.656 0.974</td></tr><tr><td>32</td><td/><td colspan=\"7\">0.958 \u00b10.001327 0.948 \u00b10.001468 0.953 0.646 0.972</td></tr><tr><td>26</td><td/><td colspan=\"7\">0.958 \u00b10.001327 0.941 \u00b10.001558 0.949 0.554 0.976</td></tr><tr><td>1</td><td>b</td><td colspan=\"7\">0.947 \u00b10.001482 0.943 \u00b10.001533 0.945 0.601 0.962</td></tr><tr><td>1</td><td>a</td><td colspan=\"7\">0.949 \u00b10.001455 0.940 \u00b10.001571 0.944 0.694 0.960</td></tr><tr><td>9</td><td/><td colspan=\"7\">0.951 \u00b10.001428 0.935 \u00b10.001630 0.943 0.389 0.976</td></tr><tr><td>23</td><td/><td colspan=\"7\">0.937 \u00b10.001607 0.933 \u00b10.001654 0.935 0.547 0.954</td></tr><tr><td>8</td><td/><td colspan=\"7\">0.939 \u00b10.001583 0.929 \u00b10.001699 0.934 0.606 0.954</td></tr><tr><td>4</td><td>a</td><td colspan=\"7\">0.836 \u00b10.002449 0.834 \u00b10.002461 0.835 0.521 0.849</td></tr><tr><td>4</td><td>b</td><td colspan=\"7\">0.836 \u00b10.002449 0.828 \u00b10.002496 0.832 0.590 0.847</td></tr><tr><td>13</td><td>1</td><td colspan=\"7\">0.747 \u00b10.002875 0.677 \u00b10.003093 0.710 0.036 0.778</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF6": { |
"text": "CKIP: Word Segmentation: Closed Track",
| "num": null, |
| "html": null, |
| "content": "<table><tr><td colspan=\"2\">Site RunID</td><td>R</td><td>Cr</td><td>P</td><td>Cp</td><td>F</td><td>Roov</td><td>Riv</td></tr><tr><td>20</td><td/><td colspan=\"7\">0.964 679 0.965</td></tr><tr><td>2</td><td>b</td><td colspan=\"7\">0.951 \u00b10.001428 0.944 \u00b10.001521 0.948 0.676 0.964</td></tr><tr><td>13</td><td>2</td><td colspan=\"7\">0.724 \u00b10.002956 0.668 \u00b10.003115 0.695 0.161 0.749</td></tr><tr><td>13</td><td>3</td><td colspan=\"7\">0.736 \u00b10.002915 0.653 \u00b10.003148 0.692 0.160 0.761</td></tr><tr><td>13</td><td>1</td><td colspan=\"7\">0.654 \u00b10.003146 0.573 \u00b10.003271 0.611 0.057 0.680</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF7": { |
| "text": "", |
| "num": null, |
| "html": null, |
"content": "<table><tr><td/><td/><td/><td colspan=\"5\">: CKIP: Word Segmentation: Open Track</td></tr><tr><td colspan=\"2\">Site RunID</td><td>R</td><td>Cr</td><td>P</td><td>Cp</td><td>F</td><td>Roov</td><td>Riv</td></tr><tr><td>32</td><td/><td colspan=\"7\">0.964 \u00b10.001176 0.961 \u00b10.001222 0.963 0.612 0.976</td></tr><tr><td>26</td><td/><td colspan=\"7\">0.961 \u00b10.001222 0.953 \u00b10.001336 0.957 0.499 0.977</td></tr><tr><td>9</td><td/><td colspan=\"7\">0.959 \u00b10.001252 0.955 \u00b10.001309 0.957 0.494 0.975</td></tr><tr><td>1</td><td>a</td><td colspan=\"7\">0.955 \u00b10.001309 0.956 \u00b10.001295 0.956 0.650 0.966</td></tr><tr><td>15</td><td>d</td><td colspan=\"7\">0.953 \u00b10.001336 0.956 \u00b10.001295 0.955 0.574 0.966</td></tr><tr><td>11</td><td>a</td><td colspan=\"7\">0.955 \u00b10.001309 0.953 \u00b10.001336 0.954 0.575 0.969</td></tr><tr><td>15</td><td>b</td><td colspan=\"7\">0.952 \u00b10.001350 0.956 \u00b10.001295 0.954 0.575 0.966</td></tr><tr><td>15</td><td>c</td><td colspan=\"7\">0.949 \u00b10.001389 0.957 \u00b10.001281 0.953 0.673 0.959</td></tr><tr><td>15</td><td>a</td><td colspan=\"7\">0.949 \u00b10.001389 0.958 \u00b10.001266 0.953 0.672 0.959</td></tr><tr><td>16</td><td/><td colspan=\"7\">0.952 \u00b10.001350 0.954 \u00b10.001323 0.953 0.604 0.964</td></tr><tr><td>11</td><td>b</td><td colspan=\"7\">0.950 \u00b10.001376 0.954 \u00b10.001323 0.952 0.602 0.962</td></tr><tr><td>5</td><td/><td colspan=\"7\">0.956 \u00b10.001295 0.947 \u00b10.001414 0.951 0.493 0.972</td></tr><tr><td>1</td><td>b</td><td colspan=\"7\">0.946 \u00b10.001427 0.952 \u00b10.001350 0.949 0.568 0.959</td></tr><tr><td>18</td><td>c</td><td colspan=\"7\">0.950 \u00b10.001376 0.930 \u00b10.001611 0.940 0.272 0.974</td></tr><tr><td>30</td><td>a</td><td colspan=\"7\">0.963 \u00b10.001192 0.918 \u00b10.001732 0.940 0.175 0.991</td></tr><tr><td>18</td><td>b</td><td colspan=\"7\">0.954 \u00b10.001323 0.921 \u00b10.001703 0.937 0.163 0.981</td></tr><tr><td>8</td><td/><td colspan=\"7\">0.933 \u00b10.001578 0.942 \u00b10.001476 0.937 0.640 0.943</td></tr><tr><td>23</td><td/><td colspan=\"7\">0.933 \u00b10.001578 0.939 \u00b10.001511 0.936 0.526 0.948</td></tr><tr><td>24</td><td/><td colspan=\"7\">0.923 \u00b10.001683 0.929 \u00b10.001621 0.926 0.554 0.936</td></tr><tr><td>18</td><td>a</td><td colspan=\"7\">0.949 \u00b10.001389 0.897 \u00b10.001919 0.922 0.022 0.982</td></tr><tr><td>4</td><td>a</td><td colspan=\"7\">0.830 \u00b10.002371 0.832 \u00b10.002360 0.831 0.473 0.842</td></tr><tr><td>4</td><td>b</td><td colspan=\"7\">0.817 \u00b10.002441 0.821 \u00b10.002420 0.819 0.491 0.829</td></tr></table>",
| "type_str": "table" |
| }, |
| "TABREF8": { |
| "text": "", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td/><td/><td/><td colspan=\"5\">: MSRA: Word Segmentation: Closed Track</td></tr><tr><td colspan=\"2\">Site RunID</td><td>R</td><td>Cr</td><td>P</td><td>Cp</td><td>F</td><td>Roov</td><td>Riv</td></tr><tr><td>11</td><td>a</td><td colspan=\"7\">0.980 \u00b10.000884 0.978 \u00b10.000926 0.979 0.839 0.985</td></tr><tr><td>11</td><td>b</td><td colspan=\"7\">0.977 \u00b10.000946 0.976 \u00b10.000966 0.977 0.840 0.982</td></tr><tr><td>14</td><td/><td colspan=\"7\">0.975 \u00b10.000986 0.976 \u00b10.000966 0.975 0.811 0.981</td></tr><tr><td>32</td><td/><td colspan=\"7\">0.977 \u00b10.000946 0.971 \u00b10.001059 0.974 0.675 0.988</td></tr><tr><td>10</td><td/><td colspan=\"7\">0.970 \u00b10.001077 0.970 \u00b10.001077 0.970 0.804 0.976</td></tr><tr><td>30</td><td>a</td><td colspan=\"7\">0.977 \u00b10.000946 0.960 \u00b10.001237 0.968 0.624 0.989</td></tr><tr><td>34</td><td/><td colspan=\"7\">0.959 \u00b10.001252 0.961 \u00b10.001222 0.960 0.711 0.968</td></tr><tr><td>2</td><td/><td colspan=\"7\">0.949 \u00b10.001389 0.954 \u00b10.001323 0.952 0.692 0.958</td></tr><tr><td>7</td><td/><td colspan=\"7\">0.953 \u00b10.001336 0.940 \u00b10.001499 0.947 0.503 0.969</td></tr><tr><td>24</td><td/><td colspan=\"7\">0.938 \u00b10.001522 0.946 \u00b10.001427 0.942 0.706 0.946</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF9": { |
| "text": "", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>: MSRA: Word Segmentation: Open Track</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF10": { |
| "text": "", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td/><td/><td colspan=\"5\">: UPUC: Word Segmentation: Closed Track</td><td/></tr><tr><td>Site RunID</td><td>R</td><td>Cr</td><td>P</td><td>Cp</td><td>F</td><td>Roov</td><td>Riv</td></tr><tr><td>34</td><td colspan=\"7\">0.949 \u00b10.001118 0.939 \u00b10.001216 0.944 0.768 0.966</td></tr><tr><td>2</td><td colspan=\"7\">0.942 \u00b10.001188 0.928 \u00b10.001314 0.935 0.711 0.964</td></tr><tr><td>20</td><td colspan=\"7\">0.940 \u00b10.001207 0.927 \u00b10.001322 0.933 0.741 0.959</td></tr><tr><td>7</td><td colspan=\"7\">0.944 \u00b10.001169 0.922 \u00b10.001363 0.933 0.680 0.970</td></tr><tr><td>12</td><td colspan=\"7\">0.933 \u00b10.001271 0.916 \u00b10.001410 0.924 0.656 0.959</td></tr><tr><td>32</td><td colspan=\"7\">0.940 \u00b10.001207 0.907 \u00b10.001476 0.923 0.561 0.976</td></tr><tr><td>24</td><td colspan=\"7\">0.928 \u00b10.001314 0.906 \u00b10.001483 0.917 0.660 0.954</td></tr><tr><td>10</td><td colspan=\"7\">0.925 \u00b10.001339 0.897 \u00b10.001545 0.911 0.593 0.957</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF11": { |
| "text": "", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>: UPUC: Word Segmentation: Open Track</td></tr><tr><td>isters and writing styles.</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF13": { |
| "text": "MSRA: Named Entity Recognition: Open Track", |
| "num": null, |
| "html": null, |
| "content": "<table/>", |
| "type_str": "table" |
| } |
| } |
| } |
| } |