| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:47:38.339929Z" |
| }, |
| "title": "Overview of NLPTEA-2020 Shared Task for Chinese Grammatical Error Diagnosis", |
| "authors": [ |
| { |
| "first": "Gaoqi", |
| "middle": [], |
| "last": "Rao", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Beijing Language and Culture University", |
| "location": {} |
| }, |
| "email": "raogaoqi@blcu.edu.cn" |
| }, |
| { |
| "first": "Erhong", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Beijing Language and Culture University", |
| "location": {} |
| }, |
| "email": "yangerhong@blcu.edu.cn" |
| }, |
| { |
| "first": "Baolin", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Beijing Language and Culture University", |
| "location": {} |
| }, |
| "email": "zhangbaolin@blcu.edu.cn" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper presents the NLPTEA 2020 shared task for Chinese Grammatical Error Diagnosis (CGED) which seeks to identify grammatical error types, their range of occurrence and recommended corrections within sentences written by learners of Chinese as a foreign language. We describe the task definition, data preparation, performance metrics, and evaluation results. Of the 30 teams registered for this shared task, 17 teams developed the system and submitted a total of 43 runs. System performances achieved a significant progress, reaching F1 of 91% in detection level, 40% in position level and 28% in correction level. All data sets with gold standards and scoring scripts are made publicly available to researchers.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper presents the NLPTEA 2020 shared task for Chinese Grammatical Error Diagnosis (CGED) which seeks to identify grammatical error types, their range of occurrence and recommended corrections within sentences written by learners of Chinese as a foreign language. We describe the task definition, data preparation, performance metrics, and evaluation results. Of the 30 teams registered for this shared task, 17 teams developed the system and submitted a total of 43 runs. System performances achieved a significant progress, reaching F1 of 91% in detection level, 40% in position level and 28% in correction level. All data sets with gold standards and scoring scripts are made publicly available to researchers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Automated grammar checking for learners of English as a foreign language has achieved obvious progress. Helping Our Own (HOO) is a series of shared tasks in correcting textual errors (Dale and Kilgarriff, 2011; Dale et al., 2012) . The shared tasks at CoNLL 2013 and 2014 focused on grammatical error correction, increasing the visibility of educational application research in the NLP community (Ng et al., 2013; .", |
| "cite_spans": [ |
| { |
| "start": 183, |
| "end": 210, |
| "text": "(Dale and Kilgarriff, 2011;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 211, |
| "end": 229, |
| "text": "Dale et al., 2012)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 396, |
| "end": 413, |
| "text": "(Ng et al., 2013;", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Many of these learning technologies focus on learners of English as a Foreign Language (EFL), while relatively few grammar checking applications have been developed to support Chinese as a Foreign Language (CFL) learners. Those applications which do exist rely on a range of techniques, such as statistical learning (Chang et al, 2012; Wu et al, 2010; Yu and Chen, 2012) , rule-based analysis (Lee et al., 2013) , neuro network modelling (Zheng et al., 2016; Fu et al., 2018) and hybrid methods Zhou et al., 2017) .", |
| "cite_spans": [ |
| { |
| "start": 316, |
| "end": 335, |
| "text": "(Chang et al, 2012;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 336, |
| "end": 351, |
| "text": "Wu et al, 2010;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 352, |
| "end": 370, |
| "text": "Yu and Chen, 2012)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 393, |
| "end": 411, |
| "text": "(Lee et al., 2013)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 438, |
| "end": 458, |
| "text": "(Zheng et al., 2016;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 459, |
| "end": 475, |
| "text": "Fu et al., 2018)", |
| "ref_id": null |
| }, |
| { |
| "start": 495, |
| "end": 513, |
| "text": "Zhou et al., 2017)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In response to the limited availability of CFL learner data for machine learning and linguistic analysis, the ICCE-2014 workshop on Natural Language Processing Techniques for Educational Applications (NLP-TEA) organized a shared task on diagnosing grammatical errors for CFL . A second version of this shared task in NLP-TEA was collocated with the ACL-IJCNLP-2015 (Lee et al., 2015) , COLING-2016 . Its name was fixed from then on: Chinese Grammatical Error Diagnosis (CGED). As a part of IJCNLP 2017, the shared task was organized (Rao et al., 2017) . In conjunction with NLP-TEA workshop in ACL 2018, CGED was organized again (Rao et al., 2018) . The main purpose of these shared tasks is to provide a common setting so that researchers who approach the tasks using different linguistic factors and computational techniques can compare their results. Such technical evaluations allow researchers to exchange their experiences to advance the field and eventually develop optimal solutions to this shared task.", |
| "cite_spans": [ |
| { |
| "start": 365, |
| "end": 383, |
| "text": "(Lee et al., 2015)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 386, |
| "end": 397, |
| "text": "COLING-2016", |
| "ref_id": null |
| }, |
| { |
| "start": 533, |
| "end": 551, |
| "text": "(Rao et al., 2017)", |
| "ref_id": null |
| }, |
| { |
| "start": 629, |
| "end": 647, |
| "text": "(Rao et al., 2018)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The rest of this paper is organized as follows. Section 2 describes the task in detail. Section 3 introduces the constructed data sets. Section 4 proposes evaluation metrics. Section 5 reports the results of the participants' approaches. Conclusions are finally drawn in Section 6.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The goal of this shared task is to develop NLP techniques to automatically diagnose (and furtherly correct) grammatical errors in Chinese sentences written by CFL learners. Such errors are defined as PADS: redundant words (denoted as a capital \"R\"), missing words (\"M\"), word selection errors (\"S\"), and word ordering errors (\"W\"). The input sentence may contain one or more such errors. The developed system should indicate which error types are embedded in the given unit (containing 1 to 5 sentences) and the position at which they occur. Each input unit is given a unique number \"sid\". If the inputs contain no grammatical errors, the system should return: \"sid, correct\". If an input unit contains the grammatical errors, the output format should include four items \"sid, start_off, end_off, error_type\", where start_off and end_off respectively denote the positions of starting and ending character at which the grammatical error occurs, and error_type should be one of the defined errors: \"R\", \"M\", \"S\", and \"W\". Each character or punctuation mark occupies 1 space for counting positions. Example sentences and corresponding notes are shown as Table 1 shows. This year, we only have one track of HSK.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 1151, |
| "end": 1158, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Task Description", |
| "sec_num": "2" |
| }, |
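| { |
| "text": "The output format above can be sketched with a short helper. This is an illustrative sketch of our own; the function name is not part of the official task toolkit:

```python
def format_diagnosis(sid, errors):
    # errors: list of (start_off, end_off, error_type) tuples, where
    # error_type is one of 'R', 'M', 'S', 'W'; an empty list means
    # the input unit is grammatically correct.
    if not errors:
        return ['%s, correct' % sid]
    return ['%s, %d, %d, %s' % (sid, s, e, t) for (s, e, t) in errors]

# Example 1 from the task description (two errors in one unit):
print(format_diagnosis('00038800481', [(6, 7, 'S'), (8, 8, 'R')]))
# Example 2 (a correct unit):
print(format_diagnosis('00038800464', []))
```
", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Description", |
| "sec_num": "2" |
| }, |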
| { |
| "text": "Example 1 Input: (sid=00038800481) \u6211\u6839\u672c\u4e0d\u80fd\u4e86\u89e3\u8fd9\u5987\u5973\u8f9e\u804c\u56de\u5bb6\u7684\u73b0\u8c61\u3002\u5728\u8fd9\u4e2a\u65f6\u4ee3\uff0c\u4e3a\u4ec0\u4e48\u653e\u5f03\u81ea\u5df1\u7684\u5de5\u4f5c\uff0c\u5c31 \u56de\u5bb6\u5f53\u5bb6\u5ead\u4e3b\u5987\uff1f Output: 00038800481, 6, 7, S 00038800481, 8, 8, R (Notes: \"\u4e86\u89e3\"should be \"\u7406\u89e3\". In addition, \"\u8fd9\" is a redundant word.) Example 2 Input: (sid=00038800464)\u6211\u771f\u4e0d\u660e\u767d\u3002\u5979\u4eec\u53ef\u80fd\u662f\u8ffd\u6c42\u4e00\u4e9b\u524d\u4ee3\u7684\u6d6a\u6f2b\u3002 Output: 00038800464, correct Example 3 Input: (sid=00038801261)\u4eba\u6218\u80dc\u4e86\u9965\u997f\uff0c\u624d\u52aa\u529b\u4e3a\u4e86\u4e0b\u4e00\u4ee3\u4f5c\u66f4\u597d\u7684\u3001\u66f4\u5065\u5eb7\u7684\u4e1c\u897f\u3002 Output: 00038801261, 9, 9, M 00038801261, 16, 16, S (Notes: \"\u80fd\" is missing. The word \"\u4f5c\"should be \"\u505a\". The correct sentence is \"\u624d\u80fd\u52aa\u529b\u4e3a\u4e86\u4e0b\u4e00\u4ee3\u505a\u66f4\u597d\u7684\") Example 4 Input: (sid=00038801320)\u9965\u997f\u7684\u95ee\u9898\u4e5f\u662f\u5e94\u8be5\u89e3\u51b3\u7684\u3002\u4e16\u754c\u4e0a\u6bcf\u5929\u7531\u4e8e\u9965\u997f\u5f88\u591a\u4eba\u6b7b\u4ea1\u3002 Output: 00038801320, 19, 25, W (Notes: \"\u7531\u4e8e\u9965\u997f\u5f88\u591a\u4eba\" should be \"\u5f88\u591a\u4eba\u7531\u4e8e\u9965\u997f\") ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hanyu Shuiping Kaoshi (HSK)", |
| "sec_num": null |
| }, |
| { |
| "text": "The learner corpora used in our shared task were taken from the writing section of the HSK (Pinyin of Hanyu Shuiping Kaoshi, Test of Chinese Level) (Cui et al, 2011; Zhang et al, 2013) .", |
| "cite_spans": [ |
| { |
| "start": 148, |
| "end": 165, |
| "text": "(Cui et al, 2011;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 166, |
| "end": 184, |
| "text": "Zhang et al, 2013)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Sets", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Native Chinese speakers were trained to manually annotate grammatical errors and provide corrections corresponding to each error. The data were then split into two mutually exclusive sets as follows.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Sets", |
| "sec_num": "3" |
| }, |
| { |
| "text": "(1) Training Set: All units in this set were used to train the grammatical error diagnostic systems. Each unit contains 1 to 5 sentences with annotated grammatical errors and their corresponding corrections. All units are represented in SGML format, as shown in Table 2 . We provide 1129 training units with a total of 2,909 grammatical errors, categorized as redundant (678 instances), missing (801), word selection (1228) and word ordering (201).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 262, |
| "end": 270, |
| "text": "Table 2", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data Sets", |
| "sec_num": "3" |
| }, |
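| { |
| "text": "A training unit in this SGML format can be read with a few regular expressions. The sketch below is our own illustration over a toy unit with placeholder content, not the official preprocessing code:

```python
import re

# Toy unit following the structure shown in Table 2 (placeholder content).
unit = '''<DOC>
<TEXT id="X_1">learner sentence</TEXT>
<CORRECTION>corrected sentence</CORRECTION>
<ERROR start_off="3" end_off="3" type="S"></ERROR>
</DOC>'''

def parse_unit(sgml):
    # Extract the learner text, the human correction, and the
    # annotated (start_off, end_off, type) error spans.
    text = re.search(r'<TEXT[^>]*>(.*?)</TEXT>', sgml, re.S).group(1).strip()
    corr = re.search(r'<CORRECTION>(.*?)</CORRECTION>', sgml, re.S).group(1).strip()
    errors = [(int(s), int(e), t) for s, e, t in
              re.findall(r'<ERROR start_off="(\d+)" end_off="(\d+)" type="(\w)">', sgml)]
    return text, corr, errors

print(parse_unit(unit))
```
", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Sets", |
| "sec_num": "3" |
| }, |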
| { |
| "text": "In addition to the data sets provided, participating research teams were allowed to use other public data for system development and implementation. Use of other data should be specified in the final system report. Test Set: This set consists of testing units used for evaluating system performance. Table 3 shows statistics for the testing set for this year. According to the sampling in the writing sessions in HSK, over 40% of the sentences contain no error. This was simulated in the test set, in order to test the performance of the systems in false positive identification. The distributions of error types (Table 4) are similar with that of the training set. The proportion of the correct sentences is sampled from data of the online Dynamic Corpus of HSK 1 . Table 5 shows the confusion matrix used for evaluating system performance. In this matrix, TP (True Positive) is the number of sentences with grammatical errors are correctly identified by the developed system; FP (False Positive) is the number of sentences in which non-existent grammatical errors are identified as errors; TN (True Negative) is the number of sentences without grammatical errors that are correctly identified as such; FN (False Negative) is the number of sentences with grammatical errors which the system incorrectly identifies as being correct.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 300, |
| "end": 307, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 613, |
| "end": 622, |
| "text": "(Table 4)", |
| "ref_id": "TABREF3" |
| }, |
| { |
| "start": 767, |
| "end": 774, |
| "text": "Table 5", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data Sets", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "#R 769 (21.05%) #M 864 (23.65%) #S 1694 (46.36%) #W 327 (8.95%) #Error 3,654", |
| "eq_num": "(100%)" |
| } |
| ], |
| "section": "Error Type", |
| "sec_num": null |
| }, |
| { |
| "text": "The criteria for judging correctness are determined at three levels as follows.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance Metrics", |
| "sec_num": "4" |
| }, |
| { |
| "text": "(1) Detection-level: Binary classification of a given sentence, that is, correct or incorrect, should be completely identical with the gold standard. All error types will be regarded as incorrect.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance Metrics", |
| "sec_num": "4" |
| }, |
| { |
| "text": "(2) Identification-level: This level could be considered as a multi-class categorization problem. All error types should be clearly identified. A 1 http://bcc.blcu.edu.cn/hsk correct case should be completely identical with the gold standard of the given error type.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance Metrics", |
| "sec_num": "4" |
| }, |
| { |
| "text": "(3) Position-level: In addition to identifying the error types, this level also judges the occurrence range of the grammatical error. That is to say, the system results should be perfectly identical with the quadruples of the gold standard.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance Metrics", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Besides the traditional criteria in the past share tasks, Correction-level was introduced to CGED since 2018.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance Metrics", |
| "sec_num": "4" |
| }, |
| { |
| "text": "(4) Correction-level: For the error types of Selection and Missing, recommended corrections are required. At most 3 recommended corrections are allowed for each S and M type error. In this level the amount of the corrections recommended would influence the precision and F1 in this level. The trust of the recommendation would be test. The sub-track TOP1 count only one recommended correction, while TOP3 count one hit, if one correction in three hits the golden standard, ignoring its ranking.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance Metrics", |
| "sec_num": "4" |
| }, |
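| { |
| "text": "The TOP1/TOP3 distinction can be stated concisely: a hit is counted if any of the first k recommended corrections matches the gold standard, regardless of rank. A minimal sketch of our own (the function name is not from the official toolkit):

```python
def correction_hit(recommendations, gold, k):
    # recommendations: up to 3 corrections for one S or M error, in
    # ranked order; a hit requires the gold correction to appear among
    # the first k, ignoring its exact rank within the top k.
    return gold in recommendations[:k]

# TOP1 counts only the first recommendation; TOP3 accepts any of three.
print(correction_hit(['b', 'a', 'c'], 'a', 1))  # miss under TOP1
print(correction_hit(['b', 'a', 'c'], 'a', 3))  # hit under TOP3
```
", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance Metrics", |
| "sec_num": "4" |
| }, |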
| { |
| "text": "The following metrics are measured at all levels with the help of the confusion matrix.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance Metrics", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\uf06c False Positive Rate = FP / (FP+TN) \uf06c Accuracy = (TP+TN) / (TP+FP+TN+FN) \uf06c Precision = TP / (TP+FP) \uf06c Recall = TP / (TP+FN) \uf06c F1 =2*Precision*Recall / (Precision +", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance Metrics", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Recall) For example, for 4 testing inputs with gold standards shown as \"00038800481, 6, 7, S\", \"00038800481, 8, 8, R\", \"00038800464, correct\", \"00038801261, 9, 9, M\", \"00038801261, 16, 16, S\" and \"00038801320, 19, 25, W\", the system may output the result as \"00038800481, 2, 3, S\", \"00038800481, 4, 5, S\", \"00038800481, 8, 8, R\", \"00038800464, correct\", \"00038801261, 9, 9, M\", \"00038801261, 16, 19, S\" and \"00038801320, 19, 25, M\". The scoring script will yield the following performance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance Metrics", |
| "sec_num": "4" |
| }, |
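| { |
| "text": "The example above can be scored programmatically. This is a minimal sketch of our own; the official scoring script may differ in details such as duplicate handling:

```python
# Gold standard and system output from the worked example, keyed by sid;
# each value lists (start_off, end_off, error_type) triples, with an
# empty list for a correct unit.
gold = {
    '00038800481': [(6, 7, 'S'), (8, 8, 'R')],
    '00038800464': [],
    '00038801261': [(9, 9, 'M'), (16, 16, 'S')],
    '00038801320': [(19, 25, 'W')],
}
system = {
    '00038800481': [(2, 3, 'S'), (4, 5, 'S'), (8, 8, 'R')],
    '00038800464': [],
    '00038801261': [(9, 9, 'M'), (16, 19, 'S')],
    '00038801320': [(19, 25, 'M')],
}

def prf(tp, n_system, n_gold):
    p, r = tp / n_system, tp / n_gold
    return p, r, 2 * p * r / (p + r)

# Detection level: which units are flagged as erroneous.
gold_err = {sid for sid, errs in gold.items() if errs}
sys_err = {sid for sid, errs in system.items() if errs}
detection = prf(len(gold_err & sys_err), len(sys_err), len(gold_err))
fpr = len(sys_err - gold_err) / (len(gold) - len(gold_err))

# Identification level: distinct (sid, error_type) pairs.
gold_id = {(sid, t) for sid, errs in gold.items() for _, _, t in errs}
sys_id = {(sid, t) for sid, errs in system.items() for _, _, t in errs}
identification = prf(len(gold_id & sys_id), len(sys_id), len(gold_id))

# Position level: full quadruples; every system output line counts
# toward the precision denominator (6 outputs here).
gold_pos = {(sid, s, e, t) for sid, errs in gold.items() for s, e, t in errs}
sys_pos = [(sid, s, e, t) for sid, errs in system.items() for s, e, t in errs]
position = prf(sum(q in gold_pos for q in sys_pos), len(sys_pos), len(gold_pos))

print(fpr, detection, identification, position)
```
", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance Metrics", |
| "sec_num": "4" |
| }, |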
| { |
| "text": "False Positive Rate (FPR) = 0 (=0/1) Detection-level: Precision = 1 (=3/3) Recall = 1 (=3/3) F1 = 1 (=(2*1*1)/(1+1)) Identification-level: Precision = 0.8 (=4/5) Recall = 0.8 (=4/5) F1 = 0.8 (=(2*0.8*0.8)/(0.8+08)) Position-level: Precision = 0.3333 (=2/6) Recall = 0.4 (=2/5) F1 = 0.3636 (=(2*0.3333*0.4)/(0.3333+0.4)) <DOC> <TEXT id=\"200307109523200140_2_2x3\"> \u56e0\u4e3a\u517b\u519c\u4f5c\u7269\u65f6\u4e0d\u7528\u519c\u836f\u7684\u8bdd\uff0c\u751f\u4ea7\u7387\u8f83\u4f4e\u3002\u90a3\u80af\u5b9a\u4ef7\u683c\u8981\u4e0a\u5347\uff0c\u90a3\u6709\u94b1\u7684\u4eba\u60f3\u5403\u591a\u5c11\uff0c\u5c31 \u5403\u591a\u5c11\u3002\u5de6\u8fb9\u7684\u6587\u4e2d\u5df2\u63d0\u51fa\u4e86\u4e16\u754c\u4e0a\u7684\u6709\u51e0\u4ebf\u4eba\u56e0\u7f3a\u5c11\u7cae\u98df\u800c\u6328\u997f\u3002 </TEXT> <CORRECTION> \u56e0\u4e3a\u79cd\u690d\u519c\u4f5c\u7269\u65f6\u4e0d\u7528\u519c\u836f\u7684\u8bdd\uff0c\u751f\u4ea7\u7387\u8f83\u4f4e\u3002\u90a3\u4ef7\u683c\u80af\u5b9a\u8981\u4e0a\u5347\uff0c\u90a3\u6709\u94b1\u7684\u4eba\u60f3\u5403\u591a\u5c11\uff0c \u5c31\u5403\u591a\u5c11\u3002\u5de6\u8fb9\u7684\u6587\u4e2d\u5df2\u63d0\u51fa\u4e86\u4e16\u754c\u4e0a\u6709\u51e0\u4ebf\u4eba\u56e0\u7f3a\u5c11\u7cae\u98df\u800c\u6328\u997f\u3002 </CORRECTION> <ERROR start_off=\"3\" end_off=\"3\" type=\"S\"></ERROR> <ERROR start_off=\"22\" end_off=\"25\" type=\"W\"></ERROR> <ERROR start_off=\"57\" end_off=\"57\" type=\"R\"></ERROR> </DOC> <DOC> <TEXT id=\"200210543634250003_2_1x3\"> \u5bf9\u4e8e\"\u5b89\u4e50\u6b7b\"\u7684\u770b\u6cd5\uff0c\u5411\u6765\u90fd\u662f\u4e00\u4e2a\u6781\u5177\u4e89\u8bae\u6027\u7684\u9898\u76ee\uff0c\u56e0\u4e3a\u6bd5\u7adf\u6bcf\u4e2a\u4eba\u5bf9\u4e8e\u6b7b\u4ea1\u7684\u89c2\u5ff5\u90fd \u4e0d\u4e00\u6837\uff0c\u600e\u6837\u7684\u60c5\u51b5\u4e0b\u53bb\u5224\u65ad\uff0c\u4e5f\u81ea\u7136\u4ea7\u751f\u51fa\u5f88\u591a\u4e3b\u89c2\u548c\u5ba2\u89c2\u7684\u7406\u8bba\u3002\u6bcf\u4e2a\u4eba\u90fd\u6709\u7740\u751f\u5b58\u7684 
\u6743\u5229\uff0c\u4e5f\u4ee3\u8868\u7740\u6bcf\u4e2a\u4eba\u90fd\u80fd\u53bb\u51b3\u5b9a\u5982\u4f55\u7ed3\u675f\u81ea\u5df1\u7684\u751f\u547d\u7684\u6743\u5229\u3002\u5728\u6211\u7684\u4e2a\u4eba\u89c2\u70b9\u4e2d\uff0c\u5982\u679c\u4e00 \u4e2a\u957f\u671f\u53d7\u7740\u75c5\u9b54\u6298\u78e8\u7684\u4eba\uff0c\u4f1a\u662f\u5341\u5206\u75db\u82e6\u7684\u4e8b\uff0c\u4e0d\u4ec5\u662f\u75c5\u4eba\u672c\u8eab\uff0c\u4ee5\u81f4\u75c5\u8005\u7684\u5bb6\u4eba\u548c\u670b\u53cb\uff0c \u90fd\u662f\u4e00\u4ef6\u96be\u53d7\u7684\u4e8b\u3002 </TEXT> <CORRECTION> \u5bf9\u4e8e\"\u5b89\u4e50\u6b7b\"\u7684\u770b\u6cd5\uff0c\u5411\u6765\u90fd\u662f\u4e00\u4e2a\u6781\u5177\u4e89\u8bae\u6027\u7684\u9898\u76ee\uff0c\u56e0\u4e3a\u6bd5\u7adf\u6bcf\u4e2a\u4eba\u5bf9\u4e8e\u6b7b\u4ea1\u7684\u89c2\u5ff5\u90fd \u4e0d\u4e00\u6837\uff0c\u65e0\u8bba\u5728\u600e\u6837\u7684\u60c5\u51b5\u4e0b\u53bb\u5224\u65ad\uff0c\u90fd\u81ea\u7136\u4ea7\u751f\u51fa\u5f88\u591a\u4e3b\u89c2\u548c\u5ba2\u89c2\u7684\u7406\u8bba\u3002\u6bcf\u4e2a\u4eba\u90fd\u6709\u7740 \u751f\u5b58\u7684\u6743\u5229\uff0c\u4e5f\u4ee3\u8868\u7740\u6bcf\u4e2a\u4eba\u90fd\u80fd\u53bb\u51b3\u5b9a\u5982\u4f55\u7ed3\u675f\u81ea\u5df1\u7684\u751f\u547d\u3002\u5728\u6211\u7684\u4e2a\u4eba\u89c2\u70b9\u4e2d\uff0c\u5982\u679c\u4e00 \u4e2a\u957f\u671f\u53d7\u7740\u75c5\u9b54\u6298\u78e8\u7684\u4eba\u6d3b\u7740\uff0c\u4f1a\u662f\u5341\u5206\u75db\u82e6\u7684\u4e8b\uff0c\u4e0d\u4ec5\u662f\u75c5\u4eba\u672c\u8eab\uff0c\u5bf9\u4e8e\u75c5\u8005\u7684\u5bb6\u4eba\u548c\u670b \u53cb\uff0c\u90fd\u662f\u4e00\u4ef6\u96be\u53d7\u7684\u4e8b\u3002 </CORRECTION> <ERROR start_off=\"46\" end_off=\"46\" type=\"M\"></ERROR> <ERROR start_off=\"56\" end_off=\"56\" type=\"S\"></ERROR> <ERROR start_off=\"106\" end_off=\"108\" type=\"R\"></ERROR> <ERROR start_off=\"133\" end_off=\"133\" type=\"M\"></ERROR> <ERROR start_off=\"151\" end_off=\"152\" type=\"S\"></ERROR> </DOC> Table 6 : Submission statistics for all participants. 
Table 7 to 11 show the testing results of CGED2020 in 6 tracks: false positive rate (FPR), detection level, identification level, position level and correction level (in two settings: TOP1 and TOP3). The runs with the top F1 scores are highlighted in the tables. CYUT achieved the lowest FPR of 0.0163, about one third of the lowest FPR in CGED 2018. Detection-level evaluations are designed to detect whether a sentence contains grammatical errors or not. A neutral baseline can easily be obtained by reporting every testing sentence as containing errors; given the test data distribution, this baseline achieves an accuracy of 0.7893. However, not all systems performed above the baseline. The system submitted by NJU-NLP achieved the best detection F1 of 0.9122, beating the 0.9 mark for the first time. For identification-level evaluations, the systems need to identify the error types in a given unit. The systems developed by Flying and OrangePlus provided the highest F1 scores of 0.6736 and 0.6726 for grammatical error identification. At the position level, Flying achieved the best F1 score of 0.4041, crossing the 0.4 mark for the first time; OrangePlus reached 0.394. Perfectly identifying the error types and their corresponding positions is difficult because error propagation is severe. At the correction level, UNIPUS-Flaubert achieved the best F1 of 0.1891 in the TOP1 setting, and YD_NLP the best of 0.1885 in the TOP3 setting.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 1414, |
| "end": 1421, |
| "text": "Table 6", |
| "ref_id": "TABREF6" |
| }, |
| { |
| "start": 1468, |
| "end": 1475, |
| "text": "Table 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Performance Metrics", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In CGED 2020, the implementation of pre-trained model like BERT achieved significant improvement in many tracks. The \"standard pipe-line\" biLSTM+CRF in CGED2017 and 2018 is replaced. Hybrid methods based on pre-trained model were proposed by most of the teams. ResNet, graph convolution network and data argumentation appeared for the first time in the solutions. The rethinking the data construction (including pseudo data generation) and feature selection did not attract the attention of the participants. However, the balance of the FPR and other track did not progress a lot. The rough merging strategies implemented in hybrid methods and the over generation of generation models may lead the drop in FPR. From organizers' perspectives, a good system should have a high F1 score and a low false positive rate.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Results", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In summary, none of the submitted systems provided a comprehensive superior performance using different metrics, indicating the difficulty of developing systems for effective grammatical error diagnosis, especially in CFL contexts. It is worth noting that in the track of detection, the performance over 0.9 is close to the application of actual scene. In the highly focused track of position and correction, variant teams lead the ranks, unlike the past CGEDs. It's a very exciting phenomena indicating the attraction the task increased quickly. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Results", |
| "sec_num": "5" |
| }, |
| { |
| "text": "This study describes the NLP-TEA 2020 shared task for Chinese grammatical error diagnosis, including task design, data preparation, performance metrics, and evaluation results. Regardless of actual performance, all submissions contribute to the common effort to develop Chinese grammatical error diagnosis system, and the individual reports in the proceedings provide useful insights into computer-assisted language learning for CFL learners.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We hope the data sets collected and annotated for this shared task can facilitate and expedite future development in this research area. Therefore, all data sets with gold standards and scoring scripts are publicly available online at http://www.cged.science.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank all the participants for taking part in our shared task. Lung-Hao Lee helped a lot in consultation and bidding. Xiangyu Chi, Mengyao Suo, Yuhan Wang and Shufan Zhou contributed a lot in data reviewing.This study was supported by the projects from National Language Committee Project (YB135-90).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Error diagnosis of Chinese sentences using inductive learning algorithm and decomposition-based testing mechanism", |
| "authors": [ |
| { |
| "first": "Ru-Yng", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Chung-Hsien", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Philips Kokoh", |
| "middle": [], |
| "last": "Prasetyo", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "ACM Transactions on Asian Language Information Processing", |
| "volume": "11", |
| "issue": "1", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ru-Yng Chang, Chung-Hsien Wu, and Philips Kokoh Prasetyo. 2012. Error diagnosis of Chinese sentences using inductive learning algorithm and decomposition-based testing mechanism. ACM Transactions on Asian Language Information Processing, 11(1), article 3.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "The Principles for Building the", |
| "authors": [ |
| { |
| "first": "Xiliang", |
| "middle": [], |
| "last": "Cui", |
| "suffix": "" |
| }, |
| { |
| "first": "Bao-Lin", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "International Corpus of Learner Chinese\". Applied Linguistics", |
| "volume": "", |
| "issue": "2", |
| "pages": "100--108", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xiliang Cui, Bao-lin Zhang. 2011. The Principles for Building the \"International Corpus of Learner Chinese\". Applied Linguistics, 2011(2), pages 100-108.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Helping our own: The HOO 2011 pilot shared task", |
| "authors": [ |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Dale", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Kilgarriff", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 13th European Workshop on Natural Language Generation(ENLG'11)", |
| "volume": "", |
| "issue": "", |
| "pages": "1--8", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robert Dale and Adam Kilgarriff. 2011. Helping our own: The HOO 2011 pilot shared task. In Proceedings of the 13th European Workshop on Natural Language Generation(ENLG'11), pages 1-8, Nancy, France.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "HOO 2012: A report on the preposiiton and determiner error correction shared task", |
| "authors": [ |
| { |
| "first": "Reobert", |
| "middle": [], |
| "last": "Dale", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Anisimoff", |
| "suffix": "" |
| }, |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Narroway", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 7th Workshop on the Innovative Use of NLP for Building Educational Applications(BEA'12)", |
| "volume": "", |
| "issue": "", |
| "pages": "54--62", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Reobert Dale, Ilya Anisimoff, and George Narroway. 2012. HOO 2012: A report on the preposiiton and determiner error correction shared task. In Proceedings of the 7th Workshop on the Innovative Use of NLP for Building Educational Applications(BEA'12), pages 54-62, Montreal, Canada.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "The CoNLL-2014 shared task on grammatical error correction", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Hwee Tou Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "Mei", |
| "middle": [], |
| "last": "Siew", |
| "suffix": "" |
| }, |
| { |
| "first": "Ted", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Briscoe", |
| "suffix": "" |
| }, |
| { |
| "first": "Raymond", |
| "middle": [ |
| "Hendy" |
| ], |
| "last": "Hadiwinoto", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Susanto", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Bryant", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 18th Conference on Computational Natural Language Learning (CoNLL'14): Shared Task", |
| "volume": "", |
| "issue": "", |
| "pages": "1--12", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. 2014. The CoNLL-2014 shared task on grammatical error correction. In Proceedings of the 18th Conference on Computational Natural Language Learning (CoNLL'14): Shared Task, pages 1-12, Baltimore, Maryland, USA.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "The CoNLL-2013 shared task on grammatical error correction", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Hwee Tou Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "Mei", |
| "middle": [], |
| "last": "Siew", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuanbin", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Joel", |
| "middle": [], |
| "last": "Hadiwinoto", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Tetreault", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 17th Conference on Computational Natural Language Learning(CoNLL'13", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hwee Tou Ng, Siew Mei Wu, Yuanbin Wu, Christian Hadiwinoto, and Joel Tetreault. 2013. The CoNLL-2013 shared task on grammatical error correction. In Proceedings of the 17th Conference on Computational Natural Language Learning(CoNLL'13):", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Shared Task", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "1--14", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shared Task, pages 1-14, Sofia, Bulgaria.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Developing learner corpus annotation for Chinese grammatical errors", |
| "authors": [ |
| { |
| "first": "Lung-Hao", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Li-Ping", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuen-Hsien", |
| "middle": [], |
| "last": "Tseng", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 20th International Conference on Asian Language Processing (IALP'16)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lung-Hao Lee, Li-Ping Chang, and Yuen-Hsien Tseng. 2016. Developing learner corpus annotation for Chinese grammatical errors. In Proceedings of the 20th International Conference on Asian Language Processing (IALP'16), Tainan, Taiwan.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Linguistic rules based Chinese error detection for second language learning", |
| "authors": [ |
| { |
| "first": "Lung-Hao", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Li-Ping", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kuei-Ching", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuen-Hsien", |
| "middle": [], |
| "last": "Tseng", |
| "suffix": "" |
| }, |
| { |
| "first": "Hsin-Hsi", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 21st International Conference on Computers in Education(ICCE'13)", |
| "volume": "", |
| "issue": "", |
| "pages": "27--29", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lung-Hao Lee, Li-Ping Chang, Kuei-Ching Lee, Yuen-Hsien Tseng, and Hsin-Hsi Chen. 2013. Linguistic rules based Chinese error detection for second language learning. In Proceedings of the 21st International Conference on Computers in Education(ICCE'13), pages 27-29, Denpasar Bali, Indonesia.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Overview of the NLP-TEA 2015 shared task for Chinese grammatical error diagnosis", |
| "authors": [ |
| { |
| "first": "Lung-Hao", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Liang-Chih", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Li-Ping", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2nd Workshop on Natural Language Processing Techniques for Educational Applications (NLP-TEA'15)", |
| "volume": "", |
| "issue": "", |
| "pages": "1--6", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lung-Hao Lee, Liang-Chih Yu, and Li-Ping Chang. 2015. Overview of the NLP-TEA 2015 shared task for Chinese grammatical error diagnosis. In Proceedings of the 2nd Workshop on Natural Language Processing Techniques for Educational Applications (NLP-TEA'15), pages 1-6, Beijing, China.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "A sentence judgment system for grammatical error detection", |
| "authors": [ |
| { |
| "first": "Lung-Hao", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Liang-Chih", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Kuei-Ching", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuen-Hsien", |
| "middle": [], |
| "last": "Tseng", |
| "suffix": "" |
| }, |
| { |
| "first": "Li-Ping", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Hsin-Hsi", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 25th International Conference on Computational Linguistics (COLING'14): Demos", |
| "volume": "", |
| "issue": "", |
| "pages": "67--70", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lung-Hao Lee, Liang-Chih Yu, Kuei-Ching Lee, Yuen-Hsien Tseng, Li-Ping Chang, and Hsin-Hsi Chen. 2014. A sentence judgment system for grammatical error detection. In Proceedings of the 25th International Conference on Computational Linguistics (COLING'14): Demos, pages 67-70, Dublin, Ireland.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Shared Task for Chinese Grammatical Error Diagnosis", |
| "authors": [], |
| "year": 2016, |
| "venue": "The Workshop on Natural Language Processing Techniques for Educational Applications (NLP-TEA'16)", |
| "volume": "", |
| "issue": "", |
| "pages": "1--6", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shared Task for Chinese Grammatical Error Diagnosis. The Workshop on Natural Language Processing Techniques for Educational Applications (NLP-TEA'16), pages 1-6, Osaka, Japan.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "IJCNLP-2017 Task 1: Chinese Grammatical Error Diagnosis", |
| "authors": [ |
| { |
| "first": "Gaoqi", |
| "middle": [], |
| "last": "Rao", |
| "suffix": "" |
| }, |
| { |
| "first": "Baolin", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Endong", |
| "middle": [], |
| "last": "Xun", |
| "suffix": "" |
| }, |
| { |
| "first": "Lung-Hao", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the IJCNLP 2017", |
| "volume": "", |
| "issue": "", |
| "pages": "1--8", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gaoqi Rao, Baolin Zhang, Endong Xun, and Lung-Hao Lee. 2017. IJCNLP-2017 Task 1: Chinese Grammatical Error Diagnosis. In Proceedings of the IJCNLP 2017, Shared Tasks, pages 1-8, Taipei, Taiwan.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Overview of NLPTEA-2018 Share Task Chinese Grammatical Error Diagnosis", |
| "authors": [ |
| { |
| "first": "Gaoqi", |
| "middle": [], |
| "last": "Rao", |
| "suffix": "" |
| }, |
| { |
| "first": "Qi", |
| "middle": [], |
| "last": "Gong", |
| "suffix": "" |
| }, |
| { |
| "first": "Baolin", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Endong", |
| "middle": [], |
| "last": "Xun", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA'18)", |
| "volume": "", |
| "issue": "", |
| "pages": "42--51", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gaoqi Rao, Qi Gong, Baolin Zhang, and Endong Xun. 2018. Overview of NLPTEA-2018 Share Task Chinese Grammatical Error Diagnosis. In Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA'18), pages 42-51, Melbourne, Australia.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Sentence correction incorporating relative position and parse template language models", |
| "authors": [ |
| { |
| "first": "Chung-Hsien", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Chao-Hong", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Harris", |
| "suffix": "" |
| }, |
| { |
| "first": "Liang-Chih", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "IEEE Transactions on Audio, Speech, and Language Processing", |
| "volume": "18", |
| "issue": "6", |
| "pages": "1170--1181", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chung-Hsien Wu, Chao-Hong Liu, Matthew Harris, and Liang-Chih Yu. 2010. Sentence correction incorporating relative position and parse template language models. IEEE Transactions on Audio, Speech, and Language Processing, 18(6), pages 1170-1181.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Detecting word ordering errors in Chinese sentences for learning Chinese as a foreign language", |
| "authors": [ |
| { |
| "first": "Chi-Hsin", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Hsin-Hsi", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 24th International Conference on Computational Linguistics (COLING'12)", |
| "volume": "", |
| "issue": "", |
| "pages": "3003--3017", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chi-Hsin Yu and Hsin-Hsi Chen. 2012. Detecting word ordering errors in Chinese sentences for learning Chinese as a foreign language. In Proceedings of the 24th International Conference on Computational Linguistics (COLING'12), pages 3003-3017, Mumbai, India.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Overview of grammatical error diagnosis for learning Chinese as foreign language", |
| "authors": [ |
| { |
| "first": "Liang-Chih", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Lung-Hao", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Li-Ping", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 1st Workshop on Natural Language Processing Techniques for Educational Applications (NLP-TEA'14)", |
| "volume": "", |
| "issue": "", |
| "pages": "42--47", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liang-Chih Yu, Lung-Hao Lee, and Li-Ping Chang. 2014. Overview of grammatical error diagnosis for learning Chinese as foreign language. In Proceedings of the 1st Workshop on Natural Language Processing Techniques for Educational Applications (NLP-TEA'14), pages 42-47, Nara, Japan.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Design Concepts of \"the Construction and Research of the Inter-language Corpus of Chinese from Global Learners\"", |
| "authors": [ |
| { |
| "first": "Bao-Lin", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiliang", |
| "middle": [], |
| "last": "Cui", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Language Teaching and Linguistic Study", |
| "volume": "", |
| "issue": "5", |
| "pages": "27--34", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bao-lin Zhang, Xiliang Cui. 2013. Design Concepts of \"the Construction and Research of the Inter-language Corpus of Chinese from Global Learners\". Language Teaching and Linguistic Study, 2013(5), pages 27-34.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Chinese Grammatical Error Diagnosis with Long Short-Term Memory Networks", |
| "authors": [ |
| { |
| "first": "Bo", |
| "middle": [], |
| "last": "Zheng", |
| "suffix": "" |
| }, |
| { |
| "first": "Wanxiang", |
| "middle": [], |
| "last": "Che", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiang", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| }, |
| { |
| "first": "Ting", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA'16)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bo Zheng, Wanxiang Che, Jiang Guo, and Ting Liu. 2016. Chinese Grammatical Error Diagnosis with Long Short-Term Memory Networks. In Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA'16), Osaka, Japan.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Alibaba at IJCNLP-2017 Task 2: A Boosted Deep System for Dimensional Sentiment Analysis of Chinese Phrases", |
| "authors": [ |
| { |
| "first": "Xin", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Jian", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xu", |
| "middle": [], |
| "last": "Xie", |
| "suffix": "" |
| }, |
| { |
| "first": "Changlong", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Luo", |
| "middle": [], |
| "last": "Si", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the IJCNLP 2017, Shared Tasks", |
| "volume": "", |
| "issue": "", |
| "pages": "100--110", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xin Zhou, Jian Wang, Xu Xie, Changlong Sun, and Luo Si. 2017. Alibaba at IJCNLP-2017 Task 2: A Boosted Deep System for Dimensional Sentiment Analysis of Chinese Phrases. In Proceedings of the IJCNLP 2017, Shared Tasks, pages 100-110, Taipei, Taiwan.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Chinese Grammatical Error Diagnosis using Statistical and Prior Knowledge driven Features with Probabilistic Ensemble Enhancement", |
| "authors": [ |
| { |
| "first": "Ruiji", |
| "middle": [], |
| "last": "Fu", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhengqi", |
| "middle": [], |
| "last": "Pei", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiefu", |
| "middle": [], |
| "last": "Gong", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Song", |
| "suffix": "" |
| }, |
| { |
| "first": "Dechuan", |
| "middle": [], |
| "last": "Teng", |
| "suffix": "" |
| }, |
| { |
| "first": "Wanxiang", |
| "middle": [], |
| "last": "Che", |
| "suffix": "" |
| }, |
| { |
| "first": "Shijin", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Guoping", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ting", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications (NLP-TEA'18)", |
| "volume": "", |
| "issue": "", |
| "pages": "52--59", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ruiji Fu, Zhengqi Pei, Jiefu Gong, Wei Song, Dechuan Teng, Wanxiang Che, Shijin Wang, Guoping Hu, and Ting Liu. 2018. Chinese Grammatical Error Diagnosis using Statistical and Prior Knowledge driven Features with Probabilistic Ensemble Enhancement. In Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications (NLP-TEA'18), pages 52-59, Melbourne, Australia.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF0": { |
| "text": "", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF2": { |
| "text": "The statistics of correct sentences in testing set.", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF3": { |
| "text": "", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF4": { |
| "text": "A training sentence denoted in SGML format.", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td/><td/><td colspan=\"2\">System Results</td></tr><tr><td colspan=\"2\">Confusion Matrix</td><td/><td/></tr><tr><td/><td/><td>Positive (Erroneous)</td><td>Negative (Correct)</td></tr><tr><td>Gold Standard</td><td>Positive Negative</td><td>TP (True Positive) FP (False Positive)</td><td>FN (False Negative) TN (True Negative)</td></tr></table>" |
| }, |
| "TABREF5": { |
| "text": "Confusion matrix for evaluation.", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF6": { |
| "text": "summarizes the submission statistics for the 17 participating teams. In the official testing phase, each participating team was allowed to submit at most three runs. Of the 17 teams, 11 teams submitted their testing results in Correction-level, for a total of 43 runs.", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>Participant (Ordered by names)</td><td>#Runs</td><td>Correction-level</td></tr></table>" |
| } |
| } |
| } |
| } |