{
"paper_id": "Y18-1046",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:35:47.351130Z"
},
"title": "Automatic Error Correction on Japanese Functional Expressions Using Character-based Neural Machine Translation",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology",
"location": {
"country": "Japan"
}
},
"email": ""
},
{
"first": "Fei",
"middle": [],
"last": "Cheng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Institute of Informatics",
"location": {
"country": "Japan"
}
},
"email": "fei-cheng@nii.ac.jp"
},
{
"first": "Yiran",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology",
"location": {
"country": "Japan"
}
},
"email": "wang.yiran.ws5@is.naist.jp"
},
{
"first": "Hiroyuki",
"middle": [],
"last": "Shindo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology",
"location": {
"country": "Japan"
}
},
"email": "shindo@is.naist.jp"
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology",
"location": {
"country": "Japan"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Correcting spelling and grammatical errors of Japanese functional expressions shows practical usefulness for Japanese Second Language (JSL) learners. However, the collection of these types of error data is difficult because it relies on detecting Japanese functional expressions first. In this paper, we propose a framework to correct the spelling and grammatical errors of Japanese functional expressions as well as the error data collection problem. Firstly, we apply a bidirectional Long Short-Term Memory with a Conditional Random Field (BiLSTM-CRF) model to detect Japanese functional expressions. Secondly, we extract phrases which include Japanese functional expressions as well as their neighboring words from native Japanese and learners' corpora. Then we generate a large scale of artificial error data via substitution, injection and deletion operations. Finally, we utilize the generated artificial error data to train a sequence-to-sequence neural machine translation model for correcting Japanese functional expression errors. We also compare the character-based method with the word-based method. The experimental results indicate that the character-based method outperforms the wordbased method both on artificial error data and real error data.",
"pdf_parse": {
"paper_id": "Y18-1046",
"_pdf_hash": "",
"abstract": [
{
"text": "Correcting spelling and grammatical errors of Japanese functional expressions shows practical usefulness for Japanese Second Language (JSL) learners. However, the collection of these types of error data is difficult because it relies on detecting Japanese functional expressions first. In this paper, we propose a framework to correct the spelling and grammatical errors of Japanese functional expressions as well as the error data collection problem. Firstly, we apply a bidirectional Long Short-Term Memory with a Conditional Random Field (BiLSTM-CRF) model to detect Japanese functional expressions. Secondly, we extract phrases which include Japanese functional expressions as well as their neighboring words from native Japanese and learners' corpora. Then we generate a large scale of artificial error data via substitution, injection and deletion operations. Finally, we utilize the generated artificial error data to train a sequence-to-sequence neural machine translation model for correcting Japanese functional expression errors. We also compare the character-based method with the word-based method. The experimental results indicate that the character-based method outperforms the wordbased method both on artificial error data and real error data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The Japanese Language has various types of functional expressions which consist of more than one word including both content words and functional words, such as \"\u3092\u8e0f\u307e\u3048\u3066 (based on), \u306b \u9055 \u3044 \u306a \u3044 (no doubt), \u3066 \u306f \u3044 \u3051 \u306a \u3044 (must not)\". Due to the various meanings and usages, spelling and grammatical errors are often made by JSL learners when they use Japanese functional expressions in their writings. We observed some example sentences in Lang-8 Learner Corpora 1 and summarized some typical types of spelling and grammatical errors of Japanese functional expressions, including word selection error (S), missing word error (M), redundant error (R), and word spelling error (W). Some example sentences of grammatical errors are shown in Table 1 . Much previous research has paid special attention to the automatic detection of Japanese functional expressions (Tsuchiya et al., 2006; Shime et al., 2007; Suzuki et al., 2012) while relatively few grammatical error correction applications have been developed to support JSL learners. Given this situation, automatic grammatical error correction of sentences written by JSL learners is essential in Japanese language learning.",
"cite_spans": [
{
"start": 853,
"end": 876,
"text": "(Tsuchiya et al., 2006;",
"ref_id": "BIBREF36"
},
{
"start": 877,
"end": 896,
"text": "Shime et al., 2007;",
"ref_id": "BIBREF32"
},
{
"start": 897,
"end": 917,
"text": "Suzuki et al., 2012)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [
{
"start": 731,
"end": 738,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we define a new task of correcting spelling and grammatical errors on Japanese functional expressions as follows. Given a phrase of a Japanese functional expressions and its neighboring words, our system aims to correct errors inside this phrase. For instance, a phrase \" \u884c\u304f\u307e\u3057\u3087\u3046\u3002 (Let's go.)\" where the Japanese functional expression is in bold will be expected to be corrected as \"\u884c\u304d\u307e\u3057\u3087\u3046\u3002\", because the correct usage of Japanese verb conjugation rules in this phrase depends on the Japa-nese functional expression \"\u307e\u3057\u3087\u3046 (Let's)\". However, collecting a large number of available real error phrases written by JSL language learners is not easy because of relying on detecting Japanese functional expressions first. To solve this problem, we first detect the Japanese functional expressions using a BiLSTM-CRF model. Next, we extract phrases including Japanese functional expressions as well as their neighboring words for generating artificial error data. For Table1: Typical examples of grammatical errors of Japanese functional expressions. In the sentences, Japanese functional expressions are in bold, while errors are underlined. automatic error correction, we utilize a neural sequence-to-sequence model to treat spelling and grammatical error correction as a translation process from incorrect character sequences to correct character sequences. We also conduct our experiments with the word-based method as a baseline for comparison.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper is organized as follows: Section 2 reviews some related work on spelling and grammatical error correction. Section 3 introduces language resources used in our work for training BiLSTM-CRF model and generating artificial error data. Section 4 describes how BiLSTM-CRF model is used to detect Japanese functional expressions and Section 5 explains the method for generating artificial error data. In Section 6, we conduct the experiments using neural machine translation and analyze the results. Section 7 concludes with a summation of this work and describes our future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error",
"sec_num": null
},
{
"text": "Spelling correction is an automatic algorithm for detecting and correcting human spelling errors in every written language, which has been an active research in Natural Language Processing (NLP) (Sun et al. 2010; Chen et al. 2013; Liu et al. 2013; Liu et al. 2015; ) .",
"cite_spans": [
{
"start": 195,
"end": 212,
"text": "(Sun et al. 2010;",
"ref_id": "BIBREF33"
},
{
"start": 213,
"end": 230,
"text": "Chen et al. 2013;",
"ref_id": "BIBREF1"
},
{
"start": 231,
"end": 247,
"text": "Liu et al. 2013;",
"ref_id": "BIBREF18"
},
{
"start": 248,
"end": 264,
"text": "Liu et al. 2015;",
"ref_id": "BIBREF17"
},
{
"start": 265,
"end": 266,
"text": ")",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Grammatical error correction (GEC) is a task of detecting and correcting grammatical errors in text written by native language writers or nonnative foreign language writers. over the past few decades, GEC in English has been widely researched, such as Helping Our Own (Dale and Kigarriff, 2011; Dale et al., 2012) , CoNLL Shared Task (Ng et al., 2013; Ng et al., 2014) . Many shared tasks on GEC for Chinese Second Language Learners have also been held, such as the NLP-TEA Shared Task (Yu et al., 2014; Lee et al., 2015; Lee et al., 2016; Rao et al., 2017; Rao et al., 2018) . On Japanese GEC, much work has been done on particle error correction for JSL learners (Oyama, 2010; Ohki et al., 2011; Mizumoto et al., 2011; Imamura et al., 2014) .",
"cite_spans": [
{
"start": 268,
"end": 294,
"text": "(Dale and Kigarriff, 2011;",
"ref_id": null
},
{
"start": 295,
"end": 313,
"text": "Dale et al., 2012)",
"ref_id": "BIBREF5"
},
{
"start": 334,
"end": 351,
"text": "(Ng et al., 2013;",
"ref_id": "BIBREF25"
},
{
"start": 352,
"end": 368,
"text": "Ng et al., 2014)",
"ref_id": "BIBREF24"
},
{
"start": 486,
"end": 503,
"text": "(Yu et al., 2014;",
"ref_id": "BIBREF38"
},
{
"start": 504,
"end": 521,
"text": "Lee et al., 2015;",
"ref_id": "BIBREF16"
},
{
"start": 522,
"end": 539,
"text": "Lee et al., 2016;",
"ref_id": null
},
{
"start": 540,
"end": 557,
"text": "Rao et al., 2017;",
"ref_id": "BIBREF30"
},
{
"start": 558,
"end": 575,
"text": "Rao et al., 2018)",
"ref_id": "BIBREF29"
},
{
"start": 665,
"end": 678,
"text": "(Oyama, 2010;",
"ref_id": "BIBREF28"
},
{
"start": 679,
"end": 697,
"text": "Ohki et al., 2011;",
"ref_id": "BIBREF26"
},
{
"start": 698,
"end": 720,
"text": "Mizumoto et al., 2011;",
"ref_id": "BIBREF21"
},
{
"start": 721,
"end": 742,
"text": "Imamura et al., 2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Collecting large-scale annotated error data written by second language learners is not so easy. To cope with grammatical error data scarcity, several studies proposed effective approaches for generating artificial error data (Irmawati et al., 2017; Rei et al., 2017) .",
"cite_spans": [
{
"start": 225,
"end": 248,
"text": "(Irmawati et al., 2017;",
"ref_id": "BIBREF11"
},
{
"start": 249,
"end": 266,
"text": "Rei et al., 2017)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Several approaches of using Statistical machine translation (SMT) for GEC have been proposed (Brockett et al., 2006; Mizumoto et al., 2011; Mizumoto et al., 2015) . Recently, neural networks have shown success in many NLP tasks, such as machine translation (MT) (Eriguchi et al., 2016; Gehring et al., 2017) , named en-tity recognition (NER) (Kuru et al., 2016; Misawa et al., 2017) and etc. For GEC, several studies have applied neural machine translation (NMT) approach (Chollampatt et al., 2016; Yuan and Briscoe, 2016) . NMT is applied in the GEC task as it may be possible to correct erroneous phrases and sentences that have not been seen in the training data more effectively (Luong et al., 2015) . NMT-based systems thus may help ameliorate the shortage of large error-annotated learner corpora for GEC.",
"cite_spans": [
{
"start": 93,
"end": 116,
"text": "(Brockett et al., 2006;",
"ref_id": "BIBREF0"
},
{
"start": 117,
"end": 139,
"text": "Mizumoto et al., 2011;",
"ref_id": "BIBREF21"
},
{
"start": 140,
"end": 162,
"text": "Mizumoto et al., 2015)",
"ref_id": null
},
{
"start": 262,
"end": 285,
"text": "(Eriguchi et al., 2016;",
"ref_id": "BIBREF6"
},
{
"start": 286,
"end": 307,
"text": "Gehring et al., 2017)",
"ref_id": "BIBREF8"
},
{
"start": 342,
"end": 361,
"text": "(Kuru et al., 2016;",
"ref_id": "BIBREF13"
},
{
"start": 362,
"end": 382,
"text": "Misawa et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 472,
"end": 498,
"text": "(Chollampatt et al., 2016;",
"ref_id": "BIBREF3"
},
{
"start": 499,
"end": 522,
"text": "Yuan and Briscoe, 2016)",
"ref_id": "BIBREF39"
},
{
"start": 683,
"end": 703,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "As previous research mentioned above, few studies have aimed at spelling and grammatical error corrections on Japanese functional expressions. Therefore, our paper is an attempt to do this work using neural machine translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We use the following corpora for training the BiLSTM-CRF model to detect Japanese functional expressions. We use Lang-8 Learner, Tatoeba, HiraganaTimes corpora for generating artificial error data, because these three corpora are particularly designed for Japanese second language learners, in which the sentences are easy for them to read and understand. The details of these corpora are as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language resources",
"sec_num": "3"
},
{
"text": "\u30fbLang-8 Learner Corpora: this is a largescale error-annotated learner corpora, covering 80 languages. We use only the Lang-8 corpus of Japanese learners, which consists of approximately 2M sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language resources",
"sec_num": "3"
},
{
"text": "\u30fbTatoeba 2 : this corpus is a free online database of example sentences written by foreign language learners. We use only Japanese sentences (approximately 170K) from this corpus. \u30fb HiraganaTimes 3 : this corpus is a Japanese-English bilingual corpus of magazines articles, which introduces Japan to non-Japanese, covering a wide range of topics including society, culture, history, etc. We use only Japanese sentences (approximately 150K) from this corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language resources",
"sec_num": "3"
},
{
"text": "\u30fbBCCWJ 4 : The Balanced Corpus of Contemporary Written Japanese (BCCWJ) is a corpus created for comprehending the breadth of contemporary written Japanese. The data comprises 104.3 million words, covering genres including general books and magazines, newspa-4 Detection of Japanese Functional Expressions",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language resources",
"sec_num": "3"
},
{
"text": "Since we treat the Japanese functional expressions detection task as a character-based sequence labeling problem, we split the word in a Table 2 shows an example sentence (\u30a2\u30e1\u30ea\u30ab\u3078\u884c\u304d\u307e\u3057\u3087\u3046\u3002 \"Let's go to America.\") after pre-processing. Table 2 : An example sentence after pre-processing",
"cite_spans": [],
"ref_spans": [
{
"start": 137,
"end": 144,
"text": "Table 2",
"ref_id": null
},
{
"start": 231,
"end": 238,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Pre-processing",
"sec_num": "4.1"
},
{
"text": "Character Label \u30a2 B \u30e1 I \u30ea I \u30ab E \u3078 O \u884c B \u304d E \u307e B-SP \u3057 I-SP \u3087 I-SP \u3046 E-SP \u3002 O",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Pre-processing",
"sec_num": "4.1"
},
{
"text": "The BiLSTM-CRF model (Huang et al., 2015) consists of three major parts: the embedding layer, the bi-directional LSTM layer, and the CRF layer.",
"cite_spans": [
{
"start": 21,
"end": 41,
"text": "(Huang et al., 2015)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM-CRF Model",
"sec_num": "4.2"
},
{
"text": "As shown in figure 1, every character in sentence is represented as character embedding as input. The bidirectional LSTM layer is used to operate sequential information in two opposite directions. The CRF layer predicts correlated tag sequence under consideration of outputs from the Table 3 : Examples of detection of Japanese functional expressions.",
"cite_spans": [],
"ref_spans": [
{
"start": 284,
"end": 291,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "BiLSTM-CRF Model",
"sec_num": "4.2"
},
{
"text": "In the sentences, Japanese functional expressions are in bold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM-CRF Model",
"sec_num": "4.2"
},
{
"text": "LSTM layer. In this work, we adopt a BiLSTM-CRF implementation 5 with the following empirical hyper-parameter setting. We apply 300dimensional randomly initialized character embddings and 300-dimensional hidden state for LSTM. We choose Adam as the optimizer and set the learning rate equal to 0.001.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM-CRF Model",
"sec_num": "4.2"
},
{
"text": "We collect some sentences containing Japanese functional expressions from the following corpora: Lang-8 Learner, Tatoeba, HiraganaTimes, BCCWJ. In addition, we also collect sentences from some Japanese functional expression dictionaries (Group Jamashi and Xu, 2001; Xu and Reika, 2013) .",
"cite_spans": [
{
"start": 244,
"end": 265,
"text": "Jamashi and Xu, 2001;",
"ref_id": "BIBREF12"
},
{
"start": 266,
"end": 285,
"text": "Xu and Reika, 2013)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment and Evaluation",
"sec_num": "4.3"
},
{
"text": "As the results, we use 21,458 sentences for training data, and 916 sentences for test data. In the training data and test data, some sentences collected from Lang-8 Learner Corpora contain real spelling errors, since we would like to see if the Bi-LSTM-CRF model can detect Japanese functional expressions with spelling errors. All the sentences are first segmented into individual words using a free Japanese morphological analyzer MeCab 6 . Then the words are split into characters and manually annotated with tags after pre-processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment and Evaluation",
"sec_num": "4.3"
},
{
"text": "As evaluation metrics, we use precision, recall and F 1 -score as shown in the following formulas. We evaluate the output of Japanese functional ex-pressions as a whole word level. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment and Evaluation",
"sec_num": "4.3"
},
{
"text": "In this section, we apply our Japanese functional expression detector, which is trained with the BiLSTM-CRF model in Section 4.2 to extract phrases which include Japanese functional expressions with their neighboring words for generating artificial error data. Our method mainly consists of two steps, as shown in Figure 2 . In Step 1, we first extracted real error phrases from Lang-8 Leaner Corpora using the BiLSTM-CRF model. As the results, we extracted total 609 real error phrases. According to our observation, every real error phrase contains only one grammatical error or one spelling error on Japanese functional expression. Since the data of real error phrases is very small, which is not far from enough for training data, we then extracted phrases in corrected sentences from Lang-8 Leaner Corpora, and native phrases from Tatoeba and HiraganaTimes corpora. Table 5 shows several extraction results of phrases of Japanese functional expressions.",
"cite_spans": [],
"ref_spans": [
{
"start": 314,
"end": 322,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 871,
"end": 878,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Artificial Error Generation",
"sec_num": "5"
},
{
"text": "In",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artificial Error Generation",
"sec_num": "5"
},
{
"text": "Step 2, we randomly selected 309 real error phrases extracted in Step 1 as the error templates and the remaining 300 real error phrases were used as test data in our error correction task. We generated artificial error data by using the following three operations to imitate typical errors: Substitution, Injection and Deletion. In particu-lar, we generated artificial error data by imitating the error templates when using injection and deletion operations accounted for the majority. Table 6 shows a few examples of artificial error generation. As the results, we generated 396,663 phrase pairs of artificial error data. The same as the real error phrase, every artificial error phrase also involves only one grammatical error or one spelling error on Japanese functional expression.",
"cite_spans": [],
"ref_spans": [
{
"start": 486,
"end": 493,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Artificial Error Generation",
"sec_num": "5"
},
{
"text": "\u30fbSubstitution:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artificial Error Generation",
"sec_num": "5"
},
{
"text": "This method replaces a correct verb that appear just before a Japanese functional expression with its other conjugated forms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artificial Error Generation",
"sec_num": "5"
},
{
"text": "\u30fbInjection:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artificial Error Generation",
"sec_num": "5"
},
{
"text": "This method injects a redundant character in a Japanese functional expression or in its neighboring word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artificial Error Generation",
"sec_num": "5"
},
{
"text": "\u30fbDeletion:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artificial Error Generation",
"sec_num": "5"
},
{
"text": "This method deletes a character in a Japanese functional expression or in its neighboring word. Input: \u3042\u306a\u305f\u306f\u85ac\u3092\u98f2\u307e\u306a\u3051\u308c\u3070\u306a\u3089\u306a\u3044\u3002 (You must take the medicine.) Output: \u3042\u306a\u305f \u306f \u85ac \u3092 \u98f2\u307e \u306a\u3051\u308c\u3070\u306a\u3089\u306a\u3044 \u3002 Extracted phrase: \u98f2\u307e \u306a\u3051\u308c\u3070\u306a\u3089\u306a\u3044 \u3002 Input: \u30e9\u30b8\u30aa\u3092\u4fee\u7406\u3059\u308b\u305f\u3081\u306b\u5206\u89e3\u3057\u305f\u3002 (I took the radio apart to repair it.) Output: \u30e9\u30b8\u30aa \u3092 \u4fee\u7406 \u3059\u308b \u305f\u3081\u306b \u5206\u89e3 \u3057 \u305f \u3002 Extracted phrase: \u3059\u308b \u305f\u3081\u306b \u5206\u89e3 Input: \u5f7c\u306f\u5929\u624d\u304b\u3082\u3057\u308c\u306a\u3044\u3002 (He may be a genius.) Output: \u5f7c \u306f \u5929\u624d \u304b\u3082\u3057\u308c\u306a\u3044 \u3002 Extracted phrase: \u5929\u624d \u304b\u3082\u3057\u308c\u306a\u3044 \u3002 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artificial Error Generation",
"sec_num": "5"
},
{
"text": "Extracted phrase: \u591a\u3044 \u304a\u304b\u3052\u3067 \u5f7c Artificial error data: \u591a\u3044 \u304b\u3052\u3067 \u5f7c Table 6 : Examples of artificial error generation. In the sentences, Japanese functional expressions are in bold, while artificial errors are underlined.",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 67,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Deletion",
"sec_num": null
},
{
"text": "In this paper, spelling and grammatical error correction is treated as a translation task from incorrect phrases into correct phrases. Based on empirical observation, correcting grammatical errors on Japanese functional expressions can be mainly seen as substitution, injection, deletion operations of characters. The character-based translation process is a natural choice to handle this task. In the meanwhile, the word-based process will suffer from the sparsity of error types, especially when facing the real data. Therefore, we proposed a character-based neural sequenceto-sequence model for the task of correcting grammatical errors on Japanese functional expressions. We also perform the word-based process as a baseline for comparison.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Sequence-to-Sequence Model",
"sec_num": "6.1"
},
{
"text": "The neural sequence-to-sequence model, consists of two main pieces: an encoder that processes the input and a decoder that generates the output. Both the encoder and the decoder are recurrent neural network (RNN) layers that can be implemented using a vanilla RNN, a Long Shortterm Memory (LSTM), or a gated recurrent unit (GRU). In the basic sequence-to-sequence model, the encoder processes the input sequence into a fixed representation that is fed into the decoder as a context. The decoder then uses some mechanism to decode the processed information into an output sequence. The basic architecture is shown in Figure 3 (Sutskever et al., 2014; Cho et al., 2014) . In this paper, we trained a 2-layer LSTM sequence-to-sequence model with 128dim hidden units and embeddings for 12 epochs.",
"cite_spans": [
{
"start": 625,
"end": 649,
"text": "(Sutskever et al., 2014;",
"ref_id": "BIBREF34"
},
{
"start": 650,
"end": 667,
"text": "Cho et al., 2014)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 616,
"end": 624,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Neural Sequence-to-Sequence Model",
"sec_num": "6.1"
},
{
"text": "We used a drop value of 0.2. The formulas of LSTM can be found in the following equations, where the ;,=,>,? (A) , ;,=,>,? (A) and ;,=,>,? (A) are the lth layer's trainable parameters, the \u2299 means point-wise multiplication and the and tanh refers to sigmoid and hyperbolic tangent function respectively. The hidden state of current layer \u210e H (A) will be fed to next layer as input H (AJ5) . ",
"cite_spans": [
{
"start": 109,
"end": 112,
"text": "(A)",
"ref_id": null
},
{
"start": 123,
"end": 126,
"text": "(A)",
"ref_id": null
},
{
"start": 139,
"end": 142,
"text": "(A)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Sequence-to-Sequence Model",
"sec_num": "6.1"
},
{
"text": "As mentioned in Section 5, we ultimately got 396,663 artificial error phrase pairs. In the first experiment, we used 326,663 phrase pairs for training data, 35,000 phrase pairs for development data, and 35,000 phrase pairs for test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "6.2"
},
{
"text": "In the final experiment, we used the remaining 300 real error phrase pairs mentioned in Section 5 for another test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "6.2"
},
{
"text": "In both experiments, we proposed two methods: one is the word-based method where the input phrase is split to word sequences, the other is character-based method where the input phrase is split to character sequences. We performed the word-based method as a baseline for comparison. Table 7 : Experimental results of error correction on Japanese functional expressions.",
"cite_spans": [],
"ref_spans": [
{
"start": 283,
"end": 290,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "6.2"
},
{
"text": "Character-based",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "No. Word-based",
"sec_num": null
},
{
"text": "1 input \u52a9\u3051\u308b \u307e\u3057\u3087\u3046 \uff01 \u52a9 \u3051 \u308b \u307e \u3057 \u3087 \u3046 \uff01 output \u9589\u3058 \u307e\u3057\u3087\u3046 \uff01(Wrong) \u52a9 \u3051 \u307e \u3057 \u3087 \u3046 \uff01(Correct) 2 input \u54e1 \u306e\u304b\u3052\u3067 \u3001 \u54e1 \u306e \u304b \u3052 \u3067 \u3001 output \u547c\u3073\u304b\u3051 \u306e\u304a\u304b\u3052\u3067 \u3001(Wrong) \u54e1 \u306e \u304a \u304b \u3052 \u3067 \u3001(Correct) 3 input \u98fe\u308b \u306e\u305f\u3081\u306b \u3001 \u98fe \u308b \u306e \u305f \u3081 \u306b \u3001 output \u52a9\u3051\u308b \u305f\u3081\u306b \u3001(Wrong) \u98fe \u308b \u305f \u3081 \u306b \u3001(Correct) 4 input \u601d\u3044\u6d6e\u304b\u3076 \u304b\u3082\u3057\u308a\u307e\u305b\u3093 \u3002 \u601d \u3044 \u6d6e \u304b \u3076 \u304b \u3082 \u3057 \u308a \u307e \u305b \u3093 \u3002 output",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "No. Word-based",
"sec_num": null
},
{
"text": "\u8fd1\u3044 \u304b\u3082\u3057\u308c\u307e\u305b\u3093 \u3002(Wrong) \u601d \u3044 \u6d6e \u304b \u3076 \u304b \u3082 \u3057 \u308c \u307e \u305b \u3093 \u3002(Correct) 5 input \u76f4\u305b \u306a\u3044\u3068\u3044\u3051\u307e\u305b\u3093 \u306d \u76f4 \u305b \u306a \u3044 \u3068 \u3044 \u3051 \u307e \u305b \u3093 \u306d output \u76f4\u305b \u306a\u3044\u3068\u3044\u3051\u307e\u305b\u3093 \u306d (Wrong) \u76f4 \u3055 \u306a \u3044 \u3068 \u3044 \u3051 \u307e \u305b \u3093 \u306d (Correct) 6 input \u3059\u308b \u5f8c\u306b \u57c3 \u3059 \u308b \u5f8c \u306b \u57c3 output \u3057 \u305f\u5f8c\u306b \u7720\u304f (Wrong) \u3057 \u305f \u5f8c \u306b \u8cc7 (Wrong) gold result \u3057 \u305f\u5f8c\u306b \u57c3 \u3057 \u305f \u5f8c \u306b \u57c3 7 input \u63a2\u305b \u306a\u3044\u3068\u3044\u3051\u307e\u305b\u3093 \u3002 \u63a2 \u305b \u306a \u3044 \u3068 \u3044 \u3051 \u307e \u305b \u3093 \u3002 output \u3068\u3089 \u306a\u3044\u3068\u3044\u3051\u307e\u305b\u3093 \u3002(Wrong) \u63a2 \u305b \u306a \u3044 \u3068 \u3044 \u3051 \u307e \u305b \u3093 \u3002(Wrong) gold result \u63a2\u3055 \u306a\u3044\u3068\u3044\u3051\u307e\u305b\u3093 \u3002 \u63a2 \u3055 \u306a \u3044 \u3068 \u3044 \u3051 \u307e \u305b \u3093 \u3002 8 input \u305d\u308c\u306e \u305f\u3081\u306b \u6d41\u884c \u305d \u308c \u306e \u305f \u3081 \u306b \u6d41 \u884c output \u305d\u308c\u306e \u305f\u3081\u306b \u901a\u308a\u904e\u304e (Wrong) \u305d\u308c \u305f \u3081 \u306b \u884c \u7720 (Wrong) gold result \u305d\u306e \u305f\u3081\u306b \u6d41\u884c \u305d \u306e \u305f \u3081 \u306b \u6d41 \u884c Table 8 : Examples of system outputs tested on real error data. In the phrases, the Japanese functional expressions are in bold, while errors are underlined.",
"cite_spans": [],
"ref_spans": [
{
"start": 527,
"end": 534,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "No. Word-based",
"sec_num": null
},
{
"text": "In this section, we evaluate our error correction model in both the artificial data and the real data. As we described in Section 5, the generation of the artificial data is based on 309 error templates. It suggests that the error types in the artificial test data are relatively more overlapped to the training data, compared to the real situation. For this reason, we perform the experiment with 300 real error data, which contain more unseen error types. The results can fairly reflect the generalization ability of our model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6.3"
},
{
"text": "As evaluation metrics, we use precision, recall, and F1-score based on words and characters. Table 7 shows the final experimental results of grammatical error correction tested on both artificial and real error data. According to the results, the character-based method achieved a much higher F-score than the word-based method on both artificial and real error data, indicating that the character-based neural sequence-to-sequence model is more effective than the word-based one. With the character-based method, we also obtained a higher F-score on the artificial error data than on the real error data. As expected, the real test results are lower than the artificial ones: the real test data contain more unseen error types and therefore provide a more practical and meaningful evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6.3"
},
{
"text": "Some examples of system results tested on real data are shown in Table 8 . On primary cause of deterioration of F 1 -score using the word-based method is that the system wrongly corrected the neighboring words into other words, such as examples 1-4 and examples 6-8 in Table 8, although the system was able to correct Japanese functional expressions. Similarly, the errors occurred when using the characterbased method, such as examples 6 and 8 in Ta-ble8.",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 72,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "6.4"
},
{
"text": "Additionally, the failure of detecting grammatical errors also caused errors, such as the example 5 when using the word-based method and example 7 when using the character-based method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "6.4"
},
{
"text": "In this paper, we define a new task of correcting spelling and grammatical errors on Japanese functional expressions. Our BiLSTM-CRF model can precisely recognize Japanese functional expressions and their neighboring words as the correction targets. Considering the real error data is insufficient, we generated artificial error data via substitution, injection, deletion of characters in correct data. To do error correction, we utilized neural machine translation, to train a wordbased sequence-to-sequence model and a character-based sequence-to-sequence model, respectively. Experimental results indicated that the character-based method achieved much higher Fscore than the word-based method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "In the future, we plan to extract more neighboring words of Japanese functional expressions to correct more errors, especially Japanese functional expressions with two or more meanings and usages, which we did not handle in this paper. Moreover, we want to apply the artificial error data to generate multiple-choice questions for JSL learners in a Japanese functional expression learning system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "https://tatoeba.org/eng/ 3 http://www.hiraganatimes.com/ 4 http://pj.ninjal.ac.jp/corpus_center/bccwj/en/ pers, business reports, blogs, internet forums, textbooks, and legal documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "32nd Pacific Asia Conference on Language, Information and Computation Hong Kong, 1-3 December 2018Copyright 2018 by the authors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/Determined22/zh-NER-TF 6 http://taku910.github.io/mecab/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the reviewers for their valuable comments and suggestions that further improved our work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Correcting ESL errors using phrasal SMT techniques",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "William",
"middle": [
"B"
],
"last": "Dolan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Gamon",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "249--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Brockett, William B. Dolan, and Michael Gamon. 2006. Correcting ESL errors using phrasal SMT techniques. In Proceedings of the 21st International Conference on Compu- tational Linguistics and 44th Annual Meeting of the ACL, pages 249-256.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A study of language modeling for Chinese spelling check",
"authors": [
{
"first": "Hung-Shin",
"middle": [],
"last": "Kuan-Yu Chen",
"suffix": ""
},
{
"first": "Chung-Han",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Hsin-Min",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Hsin-Hsi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 6th International Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kuan-Yu Chen, Hung-Shin Lee, Chung-Han Lee, Hsin-Min Wang and Hsin-Hsi Chen. 2013. A study of language modeling for Chi- nese spelling check. In Proceedings of the 6th International Conference on Natural Lan- guage Processing.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation",
"authors": [
{
"first": "Kyunghyum",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyum Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724- 1734.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Neural Network Translation Models for Grammatical Error Correction",
"authors": [
{
"first": "Shamil",
"middle": [],
"last": "Chollampatt",
"suffix": ""
},
{
"first": "Kaveh",
"middle": [],
"last": "Taghipour",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceeding of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16)",
"volume": "",
"issue": "",
"pages": "2768--2774",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shamil Chollampatt, Kaveh Taghipour, and Hwee Tou Ng. 2016. Neural Network Transla- tion Models for Grammatical Error Correc- tion. In Proceeding of the Twenty-Fifth Inter- national Joint Conference on Artificial Intelli- gence (IJCAI-16), pages 2768-2774.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Helping Our Own: The HOO 2011 Pilot Shared Task",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Dale",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Kilgarriff",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Generation Challenges Session at the 13th European Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "242--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Dale and Adam Kilgarriff. 2011. Helping Our Own: The HOO 2011 Pilot Shared Task. In Proceedings of the Generation Challenges Session at the 13th European Workshop on Natural Language Generation, pages 242-249.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "HOO 2012: A Report on the Preposition and Determiner Error Correction Shared Task",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Dale",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Anisimoff",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Narroway",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 7th Workshop on the Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "54--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Dale, Ilya Anisimoff, and George Nar- roway. 2012. HOO 2012: A Report on the Preposition and Determiner Error Correction Shared Task. In Proceedings of the 7th Work- shop on the Innovative Use of NLP for Build- ing Educational Applications, pages 54-62.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Tree-to-Sequence Attentional Neural Machine Translation",
"authors": [
{
"first": "Akiko",
"middle": [],
"last": "Eriguchi",
"suffix": ""
},
{
"first": "Kazuma",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Yoshisama",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL'2016)",
"volume": "",
"issue": "",
"pages": "823--833",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akiko Eriguchi, Kazuma Hashimoto and Yo- shisama Tsuruoka. 2016. Tree-to-Sequence Attentional Neural Machine Translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL'2016), pages 823-833.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Pacific Asia Conference on Language, Information and Computation Hong Kong",
"authors": [],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pacific Asia Conference on Language, Information and Computation Hong Kong, 1-3 December 2018 Copyright 2018 by the authors",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Convolutional Encoder Model for Neural Machine Translation",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Yann",
"middle": [
"N"
],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL'2017)",
"volume": "",
"issue": "",
"pages": "123--125",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier and Yann N. Dauphin. 2017. A Convolutional Encoder Model for Neural Machine Transla- tion. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Lin- guistics (ACL'2017), pages 123-125.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Bidirectional lstm-crf models for sequence tagging",
"authors": [
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.01991"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bi- directional lstm-crf models for sequence tag- ging. arXiv preprint arXiv:1508.01991.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Particle Error Correction from Small Error Data for Japanese Learners",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Imamura",
"suffix": ""
},
{
"first": "Kuniko",
"middle": [],
"last": "Saito",
"suffix": ""
},
{
"first": "Kugatsu",
"middle": [],
"last": "Sadamitsu",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Nishikawa",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Natural Language Processing",
"volume": "21",
"issue": "4",
"pages": "941--963",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Imamura, Kuniko Saito, Kugatsu Sadam- itsu and Hitoshi Nishikawa. 2014. Particle Er- ror Correction from Small Error Data for Jap- anese Learners. Journal of Natural Language Processing 21(4), pages 941-963.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Generating Artificial Error Data for Indonesian Preposition Error Corrections",
"authors": [
{
"first": "Budi",
"middle": [],
"last": "Irmawati",
"suffix": ""
},
{
"first": "Hiroyuki",
"middle": [],
"last": "Shindo",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2017,
"venue": "International Journal of Technology",
"volume": "3",
"issue": "",
"pages": "549--558",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Budi Irmawati, Hiroyuki Shindo and Yuji Matsumoto. 2017. Generating Artificial Error Data for Indonesian Preposition Error Correc- tions. International Journal of Technology (2017)3, pages:549-558.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Chubunban Nihongo Kukei Jiten-Nihongo Bunkei Jiten",
"authors": [
{
"first": "Group",
"middle": [],
"last": "Jamashi",
"suffix": ""
},
{
"first": "Yiping",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Group Jamashi and Yiping Xu. 2001. Chubunban Nihongo Kukei Jiten-Nihongo Bunkei Jiten (in Chinese and Japanese). Tokyo: Kurosio Publishers.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Charner: Character-level named entity recognition",
"authors": [
{
"first": "Onur",
"middle": [],
"last": "Kuru",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Arkan Ozan Can",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 2016,
"venue": "proceeding of the 26th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "911--921",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Onur Kuru, Arkan Ozan Can, and Deniz Yuret. 2016. Charner: Character-level named entity recognition. In proceeding of the 26th Interna- tional Conference on Computational Linguis- tics, pages 911-921.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Shared Task for Chinese Grammatical Error Diagnosis",
"authors": [],
"year": null,
"venue": "Proceedings of the Workshop on Natural Language Processing Techniques for Educational Applications (NLP-TEA'16)",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shared Task for Chinese Grammatical Error Diagnosis. In Proceedings of the Workshop on Natural Language Processing Techniques for Educational Applications (NLP-TEA'16), pag- es1-6.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Overview of the NLP-TEA2015 shared task for Chinese grammatical error diagnosis",
"authors": [
{
"first": "Lung-Hao",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Liang-Chih",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Li-Ping",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2nd Workshop on Natural Language Processing Techniquesfor Educational Applications (NLP-TEA'15)",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lung-Hao Lee, Liang-Chih Yu, and Li-Ping Chang. 2015. Overview of the NLP-TEA2015 shared task for Chinese grammatical error di- agnosis. In Proceedings of the 2nd Workshop on Natural Language Processing Tech- niquesfor Educational Applications (NLP- TEA'15), pages 1-6.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A Hybrid Ranking Approach to Chinese Spelling Check",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2015,
"venue": "ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP)",
"volume": "14",
"issue": "14",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaodong Liu, Fei Cheng, Kevin Duh and Yuji Matsumoto. 2015. A Hybrid Ranking Ap- proach to Chinese Spelling Check. ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP), Vol 14, No. 14, Article 16.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A hybrid Chinese spelling correction system using language model and statistical machine translation with remarking",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Yanyan",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 6th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaodong Liu, Fei Cheng, Yanyan Luo, Kevin Duh and Yuji Matsumoto. 2013. A hybrid Chinese spelling correction system using lan- guage model and statistical machine transla- tion with remarking. In Proceedings of the 6th International Joint Conference on Natural Language Processing, pages 54.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Addressing the Rare Word Problem in Neural Machine Translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "11--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Ad- dressing the Rare Word Problem in Neural Machine Translation. In Proceedings of the ACL-IJCNLP, pages 11-19.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Characterbased Bidirectional LSTM-CRF with words and characters for Japanese Named Entity Recognition",
"authors": [
{
"first": "Shotaro",
"middle": [],
"last": "Misawa",
"suffix": ""
},
{
"first": "Motoki",
"middle": [],
"last": "Taniguchi",
"suffix": ""
},
{
"first": "Yasuhide",
"middle": [],
"last": "Miura",
"suffix": ""
},
{
"first": "Tomoko",
"middle": [],
"last": "Ohkuma",
"suffix": ""
}
],
"year": 2017,
"venue": "proceeding of the First Workshop on Subword and Character Level Models in NLP",
"volume": "",
"issue": "",
"pages": "97--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shotaro Misawa, Motoki Taniguchi, Yasuhide Miura and Tomoko Ohkuma. 2017. Character- based Bidirectional LSTM-CRF with words and characters for Japanese Named Entity Recognition. In proceeding of the First Work- shop on Subword and Character Level Models in NLP, pages 97-102.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Mining revision log of language learning SNS for automated Japanese error correction of second language learners",
"authors": [
{
"first": "Tomoya",
"middle": [],
"last": "Mizumoto",
"suffix": ""
},
{
"first": "Mamoru",
"middle": [],
"last": "Komachi",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 5th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "147--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomoya Mizumoto, Mamoru Komachi, Masaaki Nagata, and Yuji Matsumoto. 2011. Mining revision log of language learning SNS for au- tomated Japanese error correction of second language learners. In Proceedings of the 5th International Joint Conference on Natural Language Processing, pages 147-155.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Grammatical Error Correction Considering Multi-Word Expressions",
"authors": [
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2nd Workshop on Natural Language Processing Techniques for Educational Applications",
"volume": "",
"issue": "",
"pages": "82--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuji Matsu- moto. 2015. Grammatical Error Correction Considering Multi-Word Expressions. In Pro- ceedings of the 2nd Workshop on Natural Language Processing Techniques for Educa- tional Applications. pages 82-86.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The CoNLL-2014 shared task on grammatical error correction",
"authors": [
{
"first": "",
"middle": [],
"last": "Hwee Tou Ng",
"suffix": ""
},
{
"first": "Mei",
"middle": [],
"last": "Siew",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"Hendy"
],
"last": "Hadiwinoto",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Susanto",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bryant",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 18th Conference on Computational Natural Language Learning: Shared Task",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Su- santo, and Christopher Bryant. 2014. The CoNLL-2014 shared task on grammatical er- ror correction. In Proceedings of the 18th Conference on Computational Natural Lan- guage Learning: Shared Task, pages 1-14.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The CoNLL-2013 shared task on grammatical error correction",
"authors": [
{
"first": "",
"middle": [],
"last": "Hwee Tou Ng",
"suffix": ""
},
{
"first": "Mei",
"middle": [],
"last": "Siew",
"suffix": ""
},
{
"first": "Yuanbin",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Hadiwinoto",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tetreault",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning: Shared Task",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hwee Tou Ng, Siew Mei Wu, Yuanbin Wu, Christian Hadiwinoto, and Joel Tetreault. 2013. The CoNLL-2013 shared task on grammatical error correction. In Proceedings of the Seventeenth Conference on Computa- tional Natural Language Learning: Shared Task, pages 1-12.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Judgment of Incorrect Usage of Particles in Documents for System Development Written by Non-native Japanese Speakers",
"authors": [
{
"first": "Megumi",
"middle": [],
"last": "Ohki",
"suffix": ""
},
{
"first": "Hiromi",
"middle": [],
"last": "Oyama",
"suffix": ""
},
{
"first": "Kei",
"middle": [],
"last": "Kitauchi",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Suenaga",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 17th Annual Meeting of the Association for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1047--1050",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Megumi Ohki, Hiromi Oyama., Kei Kitauchi, Takashi Suenaga, and Yuji Matsumoto. 2011. Judgment of Incorrect Usage of Particles in Documents for System Development Written by Non-native Japanese Speakers. In Proceed- ings of the 17th Annual Meeting of the Associ- ation for Natural Language Processing, pages 1047-1050.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Pacific Asia Conference on Language, Information and Computation Hong Kong",
"authors": [],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pacific Asia Conference on Language, Information and Computation Hong Kong, 1-3 December 2018 Copyright 2018 by the authors",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Automatic Error Detection Method for Japanese Particles",
"authors": [
{
"first": "Horomi",
"middle": [],
"last": "Oyama",
"suffix": ""
}
],
"year": 2010,
"venue": "Polyglossia",
"volume": "1",
"issue": "",
"pages": "55--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Horomi Oyama. 2010. Automatic Error Detec- tion Method for Japanese Particles. Poly- glossia Vol.1,55-63, pages 55-63.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Overview of NLPTEA-2018 Share Task Chinese Grammatical Error Diagnosis",
"authors": [
{
"first": "Gaoqi",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Baolin",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Endong",
"middle": [],
"last": "Xun",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceeding of the 5th Workshop on Natural Language Processing Techniques for Educational Applications",
"volume": "",
"issue": "",
"pages": "42--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gaoqi Rao, Qi Gong, Baolin Zhang, Endong Xun. 2018. Overview of NLPTEA-2018 Share Task Chinese Grammatical Error Diagnosis. In Proceeding of the 5th Workshop on Natural Language Processing Techniques for Educa- tional Applications, pages 42-51.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Chinese Grammatical Error Diagnosis",
"authors": [
{
"first": "Gaoqi",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Baolin",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Endong",
"middle": [],
"last": "Xun",
"suffix": ""
},
{
"first": "Lung-Hao",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2017,
"venue": "The 8th International Joint Conference on Natural Language Processing (IJCNLP 2017) Shared tasks",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gaoqi Rao, Baolin Zhang, Endong Xun, Lung- Hao Lee. 2017. Chinese Grammatical Error Diagnosis. In Proceedings of the Workshop on NLP Techniques for Educational Applications, The 8th International Joint Conference on Natural Language Processing (IJCNLP 2017) Shared tasks, pages 1-8.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Artificial Error Generation with Machine Translation and Syntactic Patterns",
"authors": [
{
"first": "Marek",
"middle": [],
"last": "Rei",
"suffix": ""
},
{
"first": "Mariano",
"middle": [],
"last": "Felice",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "287--292",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marek Rei, Mariano Felice, Zheng Yuan, Ted Briscoe. 2017. Artificial Error Generation with Machine Translation and Syntactic Patterns. In Proceedings of the 12th Workshop on Innova- tive Use of NLP for Building Educational Ap- plications, pages 287-292.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Automatic Detection of Japanese Compound Functional Expressions and its Application to Statistical Dependency Analysis",
"authors": [
{
"first": "Takao",
"middle": [],
"last": "Shime",
"suffix": ""
},
{
"first": "Masatoshi",
"middle": [],
"last": "Tsuchiya",
"suffix": ""
},
{
"first": "Suguru",
"middle": [],
"last": "Matsuyoshi",
"suffix": ""
},
{
"first": "Takehito",
"middle": [],
"last": "Utsuro",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sato",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of Natural Language Processing",
"volume": "",
"issue": "14",
"pages": "167--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takao Shime, Masatoshi Tsuchiya, Suguru Ma- tsuyoshi, Takehito Utsuro, Satoshi Sato. 2007. Automatic Detection of Japanese Compound Functional Expressions and its Application to Statistical Dependency Analysis, Journal of Natural Language Processing, Vol (14). No.5, pages 167-196.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Learning phrase-based spelling error models from click through data",
"authors": [
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Micol",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
}
],
"year": 2010,
"venue": "proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL'10)",
"volume": "",
"issue": "",
"pages": "266--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu Sun, Jianfeng Gao, Daniel Micol and Chris Quirk. 2010. Learning phrase-based spelling error models from click through data. In pro- ceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL'10), pages 266-274.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Sequence to Sequence Learning with Neural Networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to Sequence Learning with Neural Networks. In Advances in Neural In- formation Processing Systems, pages 3104- 3112.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Detecting Japanese Compound Functional Expressions using Canonical/Derivational Relation",
"authors": [
{
"first": "Takafumi",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Abe",
"suffix": ""
},
{
"first": "Itsuki",
"middle": [],
"last": "Toyota",
"suffix": ""
},
{
"first": "Takehito",
"middle": [],
"last": "Utsuro",
"suffix": ""
},
{
"first": "Suguru",
"middle": [],
"last": "Matsuyoshi",
"suffix": ""
},
{
"first": "Masatoshi",
"middle": [],
"last": "Tsuchiya",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takafumi Suzuki, Yusuke Abe, Itsuki Toyota, Takehito Utsuro, Suguru Matsuyoshi, Ma- satoshi Tsuchiya. 2012. Detecting Japanese Compound Functional Expressions using Ca- nonical/Derivational Relation, In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC- 2012).",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Chunking Japanese compound functional expressions by machine learning",
"authors": [
{
"first": "Masatoshi",
"middle": [],
"last": "Tsuchiya",
"suffix": ""
},
{
"first": "Takao",
"middle": [],
"last": "Shime",
"suffix": ""
},
{
"first": "Toshihiro",
"middle": [],
"last": "Takagi",
"suffix": ""
},
{
"first": "Takehito",
"middle": [],
"last": "Utsuro",
"suffix": ""
},
{
"first": "Kiyotaka",
"middle": [],
"last": "Uchimoto",
"suffix": ""
},
{
"first": "Suguru",
"middle": [],
"last": "Matsuyoshi",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sato",
"suffix": ""
},
{
"first": "Seiichi",
"middle": [],
"last": "Nakagawa",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Workshop on Multi-word-expressions in a Multilingual Context",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masatoshi Tsuchiya, Takao Shime, Toshihiro Takagi, Takehito Utsuro, Kiyotaka Uchimoto, Suguru Matsuyoshi, Satoshi Sato, Seiichi Nakagawa. 2006. Chunking Japanese com- pound functional expressions by machine learning, In Proceedings of the Workshop on Multi-word-expressions in a Multilingual Context. pages 25-32.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Detailed introduction of the New JLPT N1-N5 grammar",
"authors": [
{
"first": "Xiaoming",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Reika",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoming Xu and Reika. 2013. Detailed intro- duction of the New JLPT N1-N5 grammar. East China University of Science and Tech- nology Press.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Overview of grammatical error diagnosis for learning Chinese as foreign language",
"authors": [
{
"first": "Liang-Chih",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Lung-Hao",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Li-Ping",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 1st Workshop on Natural Language Processing Techniques for Educational Applications (NLP-TEA'14)",
"volume": "",
"issue": "",
"pages": "42--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang-Chih Yu, Lung-Hao Lee, and Li-Ping Chang. 2014. Overview of grammatical error diagnosis for learning Chinese as foreign lan- guage. In Proceedings of the 1st Workshop on Natural Language Processing Techniques for Educational Applications (NLP-TEA'14), pag- es 42-47.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Grammatical error correction using neural machine translation",
"authors": [
{
"first": "Zheng",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceeding of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "380--418",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zheng Yuan and Ted Briscoe. 2016. Grammati- cal error correction using neural machine translation. In Proceeding of NAACL-HLT, 2016. pages 380-38.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "sentence to character level by attaching position labels from a tag set: {B, I, E, O, B-SP, I-SP, E-SP, O-SP}. Here, we have tag 'B' indicating the beginning position of a word, 'I' indicating the middle position of a word, 'E' indicating the end position of a word, 'O' indicating a single character word, 'BSP' indicating the beginning position of a Japanese functional expression, 'I-SP' indicating the middle position of a Japanese functional expression, 'E-SP' indicating the end position of a Japanese functional expression, 'O-SP' indicating a Japanese functional expression with a single character word.",
"num": null,
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"text": "The steps in artificial error generation",
"num": null,
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"text": "The basic architecture of sequence-to-sequence model",
"num": null,
"uris": null
},
"TABREF2": {
"content": "<table><tr><td>=</td><td/><td/><td/><td/></tr><tr><td>=</td><td/><td/><td/><td/></tr><tr><td>5 \u2212</td><td>=</td><td>2 *</td><td>+</td><td>*</td></tr><tr><td>Precision</td><td/><td>Recall</td><td/><td>F 1 -score</td></tr><tr><td>88.38%</td><td/><td>90.34%</td><td/><td>89.35%</td></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": "demonstrates certain examples of Japanese functional expressions. For example, Japanese functional expressions in sentences No.1 and No.2 were correctly identified, while the system wrongly identified content words as a Japanese functional expression in sentence No.3. The final experimental result is shown inTable 4."
},
"TABREF3": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Experimental result of Detection of Japanese functional expressions"
},
"TABREF5": {
"content": "<table><tr><td>PACLIC 32</td><td/><td/></tr><tr><td/><td>Native corpus</td><td/></tr><tr><td/><td>+</td><td/></tr><tr><td/><td>Learner corpus</td><td/></tr><tr><td/><td colspan=\"2\">BiLSTM-CRF</td></tr><tr><td/><td>Model</td><td/></tr><tr><td>\u30fb\u30fb\u30fb</td><td>Word + Japanese functional expression + Word</td><td>\u30fb\u30fb\u30fb</td></tr><tr><td/><td>Substitution</td><td/></tr><tr><td/><td>Injection</td><td/></tr><tr><td/><td>Deletion</td><td/></tr><tr><td/><td>Artificial Error Data</td><td/></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": "Extraction results of phrases of Japanese functional expressions. In the sentences, Japanese functional expressions are in bold."
}
}
}
}