{
"paper_id": "Y18-1042",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:35:42.237785Z"
},
"title": "Chinese Spelling Check based on Neural Machine Translation",
"authors": [
{
"first": "Chiao-Wen",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing Hua University",
"location": {}
},
"email": "chiaowen@nlplab.cc"
},
{
"first": "Jhih-Jie",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing Hua University",
"location": {}
},
"email": ""
},
{
"first": "Jason",
"middle": [
"S"
],
"last": "Chang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Tsing Hua University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a method for Chinese spelling check that automatically learns to correct a sentence with potential spelling errors. In our approach, a character-based neural machine translation (NMT) model is trained to translate a potentially misspelled sentence into a correct one, using right-and-wrong sentence pairs from newspaper edit logs and artificially generated data. The method involves extracting sentences containing spelling-correction edits from edit logs, using commonly confused right-and-wrong word pairs to generate artificial right-and-wrong sentence pairs in order to expand our training data, and training the NMT model. The evaluation on the United Daily News (UDN) Edit Logs and the SIGHAN-7 Shared Task shows that adding artificial error data significantly improves the performance of a Chinese spelling check system.",
"pdf_parse": {
"paper_id": "Y18-1042",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a method for Chinese spelling check that automatically learns to correct a sentence with potential spelling errors. In our approach, a character-based neural machine translation (NMT) model is trained to translate a potentially misspelled sentence into a correct one, using right-and-wrong sentence pairs from newspaper edit logs and artificially generated data. The method involves extracting sentences containing spelling-correction edits from edit logs, using commonly confused right-and-wrong word pairs to generate artificial right-and-wrong sentence pairs in order to expand our training data, and training the NMT model. The evaluation on the United Daily News (UDN) Edit Logs and the SIGHAN-7 Shared Task shows that adding artificial error data significantly improves the performance of a Chinese spelling check system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Spelling check is a common yet important task in natural language processing. It plays an important role in a wide range of applications such as word processors, assisted writing systems, and search engines. For example, Web search engines such as Google and Bing typically perform spelling check on queries, in order to retrieve documents better meeting the user's information need. In contrast to Web search engines, while Microsoft Word has a very effective spelling checker for English, there is still considerable room to improve one for Chinese. Compared to Western languages (e.g., English and German), relatively little work has been done on Chinese spelling check because Chinese poses more challenges, such as unclear word boundaries and a massive character set. Moreover, a lack of training data hinders the development of Chinese spelling check.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Consider the sentence \"\u4ed6\u5728\u6587\u5b78\u65b9\u9762\u6709\u5f88\u9ad8\u7684\u9020\u65e8\u3002\", in which the character \"\u65e8\" is a typo. For another sentence \"\u4ed6\u5728\u6587\u5b78\u65b9\u9762\u6709\u5f88\u9ad8\u7684\u9020\u85dd\u3002\", the character \"\u85dd\" is also a typo. The correct character for both typos should be \"\u8a63\". Chinese spelling errors often stem from two main causes: one is similar sound (e.g., *\u85dd and \u8a63) and the other is similar shape (e.g., *\u65e8 and \u8a63), as pointed out by Liu et al. (2011).",
"cite_spans": [
{
"start": 372,
"end": 389,
"text": "Liu et al. (2011)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Intuitively, a spelling error can be automatically corrected more precisely with machine learning models trained on more data. Unfortunately, there is only limited training data for spelling correction in Chinese, and thus it is not easy to train an NMT model that achieves good performance on Chinese spelling check. Therefore, we aim to improve the spelling check model by generating artificial errors to increase the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present a new system, AccuSpell, that automatically learns to generate the corrected sentence for a potentially misspelled sentence using a neural machine translation (NMT) model. An example of AccuSpell checking the sentence \"\u5728\u6211\u5011\u7684\u751f\u547d\u4e2d\uff0c\u5e38\u6703\u78b0\u5230\u4e00\u4e9b\u63aa\u6298\u548c\u5931\u6557\u3002\" is shown in Figure 1 . AccuSpell learns how to effectively correct a given sentence by training on more data, including real edit logs and artificially generated data. We describe how to generate artificial data and the training process in detail in Section 3. The rest of the article is organized as follows. We review related work in the next section. Then we describe how to extract misspelled sentences from edit logs and how to generate artificial sentences with typos in Section 3. We also present our method for automatically learning to correct typos in a given sentence. Section 4 describes the resources and datasets used in the experiments. In Section 5, we compare the performance of several models on two test sets. Finally, we summarize and point out future work in Section 6.",
"cite_spans": [],
"ref_spans": [
{
"start": 282,
"end": 290,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Error correction has been an area of active research, which involves Grammatical Error Correction (GEC) and Spelling Error Correction (SEC). Recently, researchers have begun applying neural machine translation models to both GEC and SEC, and gained significant improvements (e.g., Yuan and Briscoe (2016) and Xie et al. (2016) ). However, compared to English, relatively little work has been done on Chinese error correction. In our work, we address the Chinese spelling error correction task on text written by native speakers, and improve the model by generating artificial typos.",
"cite_spans": [
{
"start": 280,
"end": 303,
"text": "Yuan and Briscoe (2016)",
"ref_id": "BIBREF16"
},
{
"start": 308,
"end": 325,
"text": "Xie et al. (2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Early work on Chinese spelling check typically focused on rule-based and statistical approaches. Rule-based approaches usually use a dictionary to identify typos and a confusion set to find possible corrections, while statistical methods use the noisy channel model to find correction candidates for a typo, and a language model to calculate the likelihood of the corrected sentences. Chang (1995) proposed an approach that integrates both rule-based and statistical methods to automatically correct Chinese spelling errors. The approach involves a confusing character substitution mechanism and a bigram language model. Later, Zhang et al. (2000) pointed out that the method proposed by Chang (1995) only addresses character substitution errors; other kinds of errors, such as deletion and insertion, cannot be handled. They proposed a similar approach using confusing word substitution and a trigram language model to extend the method of Chang (1995) .",
"cite_spans": [
{
"start": 383,
"end": 395,
"text": "Chang (1995)",
"ref_id": "BIBREF1"
},
{
"start": 631,
"end": 650,
"text": "Zhang et al. (2000)",
"ref_id": "BIBREF17"
},
{
"start": 691,
"end": 703,
"text": "Chang (1995)",
"ref_id": "BIBREF1"
},
{
"start": 948,
"end": 960,
"text": "Chang (1995)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In recent years, Statistical Machine Translation (SMT) has been applied to Chinese spelling check. Wu et al. (2010) presented a system using a new error model and a common error template generation method to detect and correct Chinese character errors, which reduces the false alarm rate significantly. The idea of the error model is adopted from the noisy channel model, a framework of SMT, which is used in many NLP tasks such as spelling check and machine translation. Chiu et al. (2013) proposed a data-driven method that detects and corrects Chinese errors based on a phrasal statistical machine translation framework. They used word segmentation and a dictionary to detect possible spelling errors, and corrected the errors using an SMT model built from a large corpus.",
"cite_spans": [
{
"start": 99,
"end": 115,
"text": "Wu et al. (2010)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "More recently, Neural Machine Translation (NMT) has been adopted for the error correction task and achieved state-of-the-art performance. Yuan and Briscoe (2016) presented the very first NMT model for grammatical error correction of English. However, word-based NMT models usually suffer from the rare-word problem, and infrequent words are replaced with a \"UNK\" token. A character-based NMT approach was then proposed by Xie et al. (2016) to avoid the problem of out-of-vocabulary words. Subsequently, Chollampatt and Ng (2018) proposed a multilayer convolutional encoder-decoder neural network to correct grammatical, orthographic, and collocation errors. Until now, most work on error correction using NMT models has aimed at correcting English text. In contrast, we focus on correcting Chinese spelling errors.",
"cite_spans": [
{
"start": 133,
"end": 156,
"text": "Yuan and Briscoe (2016)",
"ref_id": "BIBREF16"
},
{
"start": 415,
"end": 432,
"text": "Xie et al. (2016)",
"ref_id": "BIBREF15"
},
{
"start": 496,
"end": 521,
"text": "Chollampatt and Ng (2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Building an error correction system using machine learning techniques typically requires a considerable amount of error-annotated data. Unfortunately, the limited availability of error-annotated data is holding back progress in the area of automatic error correction. Felice and Yuan (2014) presented a method of generating artificial errors for training, and improved NMT models for correcting mistakes made by learners of English as a second language. Rei et al. (2017) investigated two alternative approaches for artificially generating all types of writing errors. They extracted error patterns from an annotated corpus and transplanted them into error-free text. In addition, they built a phrase-based SMT error generator to translate grammatically correct text into incorrect text.",
"cite_spans": [
{
"start": 265,
"end": 287,
"text": "Felice and Yuan (2014)",
"ref_id": "BIBREF4"
},
{
"start": 439,
"end": 456,
"text": "Rei et al. (2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In a study closer to our work, Gu and Lang (2017) applied a sequence-to-sequence (seq2seq) model to construct a word-based Chinese spelling error corrector. They established their own error corpus for training and evaluation by transplanting errors into an error-free news corpus. Compared with traditional methods, their model can correct errors more effectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In contrast to previous research on Chinese spelling check, we present a system that uses newspaper edit logs to train an NMT model for correcting typos in Chinese text. We also propose a method to generate artificial error data to enhance the NMT model. Additionally, to avoid the rare-word problem, our NMT model is trained at the character level. The experimental results show that our model achieves significantly better performance, especially at an extremely low false alarm rate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We focus on correcting Chinese spelling errors in a given sentence by formulating spelling check as a machine translation problem. A sentence with typos is treated as the source sentence, which is translated into a target sentence with errors corrected. Thus, we train a neural machine translation (NMT) model on right-and-wrong sentence pairs extracted from newspaper edit logs. Unfortunately, there are too few sentence pairs from newspaper edit logs to train a good NMT model. To develop a more effective Chinese spelling check system, a promising approach is to automatically generate errors in presumably correct sentences to expand the training data (Felice, 2016) , helping the system cope with a wider variety of errors and contexts.",
"cite_spans": [
{
"start": 664,
"end": 678,
"text": "(Felice, 2016)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "In our approach, we first extract the sentences with spelling errors from edit logs (Section 3.1) and generate artificial misspelled sentences from a set of error-free sentences (Section 3.2). We then use these data to train the NMT model (Section 3.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "In this stage, we extract a set of sentences with spelling errors annotated by simple edit tags (i.e., \"[--]\" for deletion and \"{+ +}\" for insertion) from edit logs. For example, the sentence \"\u5e0c\u671b\u672a\u4f86\u4e3b\u8981\u5cf6\u5dbc\u90fd\u6709\u5b8c\u5584\u7684[-\u99ac-]{+\u78bc+}\u982d\uff0c\" contains the edit tags \"[-\u99ac-]{+\u78bc+}\", which indicate that the original character \"\u99ac\" was replaced with \"\u78bc\". The input to this stage is a set of edit logs in HTML format, containing the name of the editor, the edit action (1 is insertion and 3 is deletion), the target content, and some CSS attributes, as shown in Figure 2 . We first convert the HTML files to plain text files by removing HTML tags and using the simple edit tags \"{+ +}\" and \"[--]\" to represent insertions and deletions respectively. For example, the sentence in HTML format \"\u5916 \u8cc7 \u4e5f \u4e0d \u6025 \u8457<FONT style=\"TEXT-DECORATION:",
"cite_spans": [],
"ref_spans": [
{
"start": 522,
"end": 530,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Extracting Misspelled Sentences from Edit Logs",
"sec_num": "3.1"
},
{
"text": "line- through\" class=3 title=XXX\u522a \u9664, color=#555588>\u4f48</FONT> <FONT class=1 title=XXX\u65b0 \u589e, color=#265e8a>\u5e03</FONT>\u5c40\u660e\u5e74\uff0c\" is converted to \"\u5916 \u8cc7 \u4e5f \u4e0d \u6025 \u8457[-\u4f48-]{+\u5e03+}\u5c40 \u660e \u5e74\uff0c\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Misspelled Sentences from Edit Logs",
"sec_num": "3.1"
},
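The HTML-to-edit-tag conversion above can be sketched in Python. This is a minimal illustration under the assumption that deletion and insertion spans are marked by FONT tags with class=3 and class=1 (as in the example); the function name and the simplified regexes are our own, not the authors' code.

```python
import re

# Map <FONT ... class=3 ...>x</FONT> (deletion) and <FONT ... class=1 ...>x</FONT>
# (insertion) spans onto the simple edit tags [-x-] and {+x+}.
# The regexes are deliberately simplified; real UDN HTML is messier.
DEL_RE = re.compile(r'<FONT[^>]*class=3[^>]*>(.*?)</FONT>', re.S)
INS_RE = re.compile(r'<FONT[^>]*class=1[^>]*>(.*?)</FONT>', re.S)

def html_to_edit_tags(html: str) -> str:
    text = DEL_RE.sub(r'[-\1-]', html)
    text = INS_RE.sub(r'{+\1+}', text)
    return re.sub(r'<[^>]+>', '', text)  # drop any remaining tags
```

Applied to the example above, the class=3 span becomes "[-佈-]" and the class=1 span becomes "{+布+}".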
{
"text": "After that, we attempt to extract the sentences that contain at least one typo. The edit logs could contain many kinds of edits, including spelling corrections, content changes, and style modifications (such as synonym replacement). Among these edits, we are only concerned with spelling corrections. However, the lack of edit type annotation makes it difficult to directly identify spelling corrections. Thus, we consider consecutive single-character edit pairs of deletion and insertion (e.g., \"[-\u4f48-]{+\u5e03+}\" or \"{+\u5e03+}[-\u4f48-]\") as spelling corrections, and extract the sentences containing such edit pairs. Finally, we obtain a set of sentences with spelling errors annotated using simple edit tags, as shown in Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 704,
"end": 712,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Extracting Misspelled Sentences from Edit Logs",
"sec_num": "3.1"
},
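The filtering and pairing step above can be sketched as follows; the function and pattern are an illustrative reconstruction of the described heuristic, not the authors' code.

```python
import re

# A consecutive single-character deletion+insertion pair, in either order
# (e.g. "[-佈-]{+布+}" or "{+布+}[-佈-]"), is treated as a spelling correction.
PAIR_RE = re.compile(r'\[-(.)-\]\{\+(.)\+\}|\{\+(.)\+\}\[-(.)-\]')

def to_sentence_pair(annotated: str):
    """Return a (wrong, right) sentence pair if the annotated sentence
    contains at least one spelling-correction edit pair, else None."""
    if not PAIR_RE.search(annotated):
        return None
    def wrong(m):  # keep the deleted (original, misspelled) character
        return m.group(1) or m.group(4)
    def right(m):  # keep the inserted (corrected) character
        return m.group(2) or m.group(3)
    return PAIR_RE.sub(wrong, annotated), PAIR_RE.sub(right, annotated)
```

Each surviving sentence thus yields one source (wrong) and one target (right) training sentence.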
{
"text": "To make our Chinese spelling check system more effective, we create a set of artificially misspelled sentences to expand our training data. The input to this stage is a set of presumably error-free sentences from news articles, with word segmentation done using the word segmentation tool provided by the CKIP Project (Ma and Chen, 2003) . Artificially misspelled sentences are generated by injecting errors into these error-free sentences. Although a correct word could be misspelled as any other Chinese word, some right-and-wrong word pairs are more likely to occur than others. In order to generate realistic spelling errors, we use a confusion set consisting of commonly confused right-and-wrong word pairs (see Table 1 ). The wrong words in the confusion set are used to replace the corresponding correct words in the sentences. For example, we use the error-free sentence \"\u4e5f \u8ddf \u60a3\u8005 \u8ce0\u7f6a \u4e86 \u5341 \u5206\u9418\" to generate three misspelled sentences, as shown in Table 2 . The output of this stage is a set of right-and-wrong sentence pairs. Artificial Misspelled Sentence | Wrong Word",
"cite_spans": [
{
"start": 326,
"end": 337,
"text": "Chen, 2003)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 717,
"end": 724,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 936,
"end": 943,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Generating Artificially Misspelled Sentences",
"sec_num": "3.2"
},
{
"text": "\u4e5f \u8ddf \u60a3\u8005 \u57f9\u7f6a \u4e86 \u5341 \u5206\u9418 | \u57f9\u7f6a / \u4e5f \u8ddf \u60a3\u8005 \u966a\u7f6a \u4e86 \u5341 \u5206\u9418 | \u966a\u7f6a / \u4e5f \u8ddf \u60a3\u8005 \u8ce0\u7f6a \u4e86 \u5341 \u5206\u937e | \u5206\u937e",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Artificially Misspelled Sentences",
"sec_num": "3.2"
},
{
"text": "The confusion set plays an important role in this stage, so it is critical to decide what kind of confusion set to use. There are several available word-level and character-level confusion sets. However, compared to the word level, a Chinese character can be confused with many more characters based on shape and sound similarity. For example, the character \"\u8ce0\" is confused with 23 characters with similar shape and 21 characters with similar sound in a character-level confusion set, while the word \"\u8ce0\u7f6a\" is confused with only two words in a word-level confusion set. Moreover, a typo might involve not only the character itself but also the context. If we used a character-level confusion set, an error-free sentence would produce numerous and probably unrealistic artificially misspelled sentences. Therefore, we decided to use word-level confusion sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Artificially Misspelled Sentences",
"sec_num": "3.2"
},
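The substitution procedure above can be sketched as follows; the tiny confusion set in the sketch is a made-up illustration of word-level right-and-wrong pairs, not an excerpt from the real resources.

```python
# For each word of a segmented, presumably error-free sentence, substitute
# each commonly confused wrong word to produce one artificial error per output.
CONFUSION = {"賠罪": ["培罪", "陪罪"], "分鐘": ["分鍾"]}  # illustrative only

def generate_misspelled(segmented):
    """segmented: list of words; yields (misspelled sentence, wrong word) pairs."""
    for i, word in enumerate(segmented):
        for wrong in CONFUSION.get(word, []):
            yield "".join(segmented[:i] + [wrong] + segmented[i + 1:]), wrong
```

On the segmented sentence "也 跟 患者 賠罪 了 十 分鐘" this yields the three misspelled variants shown in Table 2, each containing exactly one injected error.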
{
"text": "We train a character-based neural machine translation (NMT) model for developing a Chinese spelling checker, which translates a potentially misspelled sentence into a correct one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.3"
},
{
"text": "The architecture of an NMT model typically consists of an encoder and a decoder. The encoder consumes the source sentence X = [x 1 , x 2 , . . . , x I ] and the decoder generates the translated target sentence Y = [y 1 , y 2 , . . . , y J ]. For the task of correcting spelling errors, a potentially misspelled sentence is treated as the source sentence X, which is translated into the target sentence Y with errors corrected. To train the NMT model, we use a set of right-and-wrong sentence pairs from edit logs (Section 3.1) and artificially generated data (Section 3.2) as target-and-source training sentence pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.3"
},
{
"text": "In the training phase, the model is given (X, Y ) pairs. At encoding time, the encoder reads and transforms a source sentence X, which is projected to a sequence of embedding vectors e = [e 1 , e 2 , . . . , e I ], into a context vector c:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c = q(h 1 , h 2 , ..., h I )",
"eq_num": "(1)"
}
],
"section": "Neural Machine Translation Model",
"sec_num": "3.3"
},
{
"text": "where q is some nonlinear function. We use a bidirectional recurrent neural network (RNN) encoder to compute a sequence of hidden state vectors h = [h 1 , h 2 , ..., h I ]. The bidirectional RNN encoder consists of two independent encoders: a forward and a backward RNN. The forward RNN encodes the normal sequence, and the backward RNN encodes the reversed sequence. A hidden state vector h i at time i is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "fh_i = ForwardRNN(h_{i-1}, e_i) (2); bh_i = BackwardRNN(h_{i+1}, e_i) (3); h_i = [fh_i || bh_i]",
"eq_num": "(4)"
}
],
"section": "Neural Machine Translation Model",
"sec_num": "3.3"
},
{
"text": "where || denotes the vector concatenation operator. At decoding time, the decoder is trained to output a target sentence Y by predicting the next character y j based on the context vector c and all the previously predicted characters {y 1 , y 2 , . . . , y j\u22121 }:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(Y|X) = \\prod_{j=1}^{J} p(y_j | y_1, y_2, \\ldots, y_{j-1}; c)",
"eq_num": "(5)"
}
],
"section": "Neural Machine Translation Model",
"sec_num": "3.3"
},
{
"text": "The conditional probability is modeled as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.3"
},
{
"text": "p(y_j | y_1, y_2, ..., y_{j-1}; c) = g(y_{j-1}, h_j, c) (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.3"
},
{
"text": "where g is a nonlinear function, and h j is the hidden state vector of the RNN decoder at time j.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.3"
},
{
"text": "We use an attention-based RNN decoder that focuses on the most relevant information in the source sentence rather than the entire source sentence. Thus, the conditional probability in Equation 5 is redefined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(y_j | y_1, y_2, ..., y_{j-1}; e) = g(y_{j-1}, h_j, c_j)",
"eq_num": "(7)"
}
],
"section": "Neural Machine Translation Model",
"sec_num": "3.3"
},
{
"text": "where the hidden state vector h j is computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h_j = f(y_{j-1}, h_{j-1}, c_j) (8); c_j = \\sum_{i=1}^{I} a_{ji} h_i (9); a_{ji} = \\exp(score(h_j, h_i)) / \\sum_{i'=1}^{I} \\exp(score(h_j, h_{i'}))",
"eq_num": "(10)"
}
],
"section": "Neural Machine Translation Model",
"sec_num": "3.3"
},
{
"text": "Unlike Equation 6, here the probability is conditioned on a different context vector c j for each target character y j . The context vector c j follows the same computation as in Bahdanau et al. (2014) . We use the global attention approach with general score function to compute the attention weight a ji :",
"cite_spans": [
{
"start": 179,
"end": 201,
"text": "Bahdanau et al. (2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "score(h_j, h_i) = h_j^T W_a h_i",
"eq_num": "(11)"
}
],
"section": "Neural Machine Translation Model",
"sec_num": "3.3"
},
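Equations (8)-(11) can be illustrated numerically. The sketch below is a minimal NumPy rendering of the global attention step with the general score function; the dimensions and weight values are arbitrary stand-ins, not the trained model.

```python
import numpy as np

def attention(h_dec, H_enc, W_a):
    """Global attention with the general score: score(h_j, h_i) = h_j^T W_a h_i.

    h_dec: decoder hidden state h_j, shape (d,)
    H_enc: encoder hidden states [h_1 .. h_I], shape (I, d)
    W_a:   learned weight matrix, shape (d, d)
    Returns the attention weights a_j (length I) and the context vector c_j.
    """
    scores = H_enc @ W_a.T @ h_dec          # score(h_j, h_i) for each i (Eq. 11)
    weights = np.exp(scores - scores.max()) # softmax over i (Eq. 10), stabilized
    weights /= weights.sum()
    context = weights @ H_enc               # c_j = sum_i a_ji h_i (Eq. 9)
    return weights, context
```

The weights sum to 1, so the context vector is a convex combination of the encoder states, concentrating on the source characters most relevant to the current target character.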
{
"text": "Instead of implementing an NMT model from scratch, we use OpenNMT (Klein et al., 2017) , an open source toolkit for neural machine translation and sequence modeling, to train the model. The training details and hyper-parameters of our model are described in Section 4.2.",
"cite_spans": [
{
"start": 66,
"end": 86,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Model",
"sec_num": "3.3"
},
{
"text": "In this section, we first give a brief description of the datasets used in the experiments in Section 4.1, and describe the hyper-parameters of the NMT model in Section 4.2. Then, several NMT models with different experimental settings for comparing performance are described in Section 4.3. Finally, in Section 4.4, we introduce the evaluation metrics for evaluating the performance of these models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "4"
},
{
"text": "United Daily News (UDN) Edit Logs: UDN Edit Logs was provided to us by UDN Digital. This dataset records the editing actions of daily UDN news from June 2016 to January 2017. There are 1.07 million HTML files with more than 30 million edits of various types: approximately 11 million insertions and 20 million deletions. We extracted a set of annotated sentences involving spelling error correction from these edit logs using the approach described in Section 3.1. To train an NMT model, we transformed every annotated sentence into a source-and-target parallel sentence pair. 32nd Pacific Asia Conference on Language, Information and Computation, Hong Kong, 1-3 December 2018",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "Copyright 2018 by the authors. For example, the annotated sentence \"\u5916\u8cc7\u4e5f\u4e0d\u6025\u8457[-\u4f48-]{+\u5e03+}\u5c40\u660e\u5e74\uff0c\" is transformed into a source sentence \"\u5916\u8cc7\u4e5f\u4e0d\u6025\u8457\u4f48\u5c40\u660e\u5e74\uff0c\" and a target sentence \"\u5916\u8cc7\u4e5f\u4e0d\u6025\u8457\u5e03\u5c40\u660e\u5e74\uff0c\". In total, there are 238,585 sentences extracted from UDN Edit Logs, and each sentence contains only edits related to spelling errors. We divided these extracted sentences into two parts: one (226,913 sentences) for training NMT models, and the other (11,943 sentences) for evaluation in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "United Daily News (UDN): The UDN news dataset was also provided by UDN Digital. This dataset consists of published newswire data from 2004 to 2017, which contains approximately 1.8 million news articles with over 530 million words. Unlike UDN Edit Logs, UDN is composed of news articles that had been edited and published. We used the presumably error-free sentences in this dataset to generate artificially misspelled sentences, as described in Section 3.2. Confusion Set: We collected five confusion sets from online and print publications: \u806f\u5408\u5831\u7d71\u4e00\u7528\u5b57, \u6771\u6771\u932f\u5225\u5b57, \u65b0\u7de8\u5e38\u7528\u932f\u5225\u5b57\u9580\u8a3a (\u8521\u6709\u79e9, 2003) , \u5e38\u898b\u932f\u5225\u5b57\u8fa8\u6b63\u8fad\u5178 (\u8521\u69ae\u5733, 2012), and \u570b\u4e2d\u932f\u5b57\u8868. The confused word pairs of the five confusion sets (see Table 3 ) are combined into a collection with over 40,000 word pairs. However, for a given confused word pair, the judgments in different confusion sets might be inconsistent. Consider the confused word pair [\"\u9418\u9336\", \"\u9418\u8868\"]: \"\u9418\u9336\" is right and \"\u9418\u8868\" is wrong in \u6771\u6771\u932f\u5225\u5b57, while \"\u9418\u8868\" is adopted and \"\u9418\u9336\" is not recommended in \u806f\u5408\u5831\u7d71\u4e00\u7528\u5b57. Furthermore, the confusion sets are not guaranteed to be absolutely correct. To resolve these problems, we used the Chinese dictionary published by the Ministry of Education of Taiwan as the gold standard. After filtering out the invalid word pairs, 33,551 distinct commonly confused word pairs were obtained. Test Data: We used two test sets for evaluation:",
"cite_spans": [
{
"start": 570,
"end": 581,
"text": "(\u8521\u6709\u79e9, 2003)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 672,
"end": 679,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "\u2022 UDN Edit Logs: As mentioned earlier, UDN Edit Logs were partitioned into two independent parts, for training and testing respectively. The test part contains 11,943 sentences; we used only 1,175 sentences for evaluation, 919 of which contain at least one error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "\u2022 SIGHAN-7: We also used the dataset provided by the SIGHAN-7 Bake-off 2013 shared task. This dataset contains two subtasks: Subtask 1 is for error detection and Subtask 2 is for error correction. In our work, we focus on evaluating error correction, so we used Subtask 2 as an additional test set. There are 1,000 sentences with spelling errors in Subtask 2, and the average length of the sentences is approximately 70 characters. To evaluate the false alarm rate of our system, we segmented these sentences into 6,101 clauses, 1,222 of which contain at least one error; the remainder are error-free.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "We trained several models using the same hyper-parameters in our experiments. For all models, the source and target vocabulary sizes are limited to 10K since the models are trained at the character level. For source and target characters, the character embedding vector size is set to 500. We trained the models with sequence lengths of up to 50 characters for both source and target sentences. The encoder is a 2-layer bidirectional long short-term memory (LSTM) network, which consists of a forward LSTM and a backward LSTM, and the decoder is also a 2-layer LSTM. Both the encoder and the decoder have 500 hidden units. We use the Adam algorithm (Kingma and Ba, 2014) as the optimization method to train our models, with a learning rate of 0.001, and the maximum gradient norm is set to 5. Once a model is trained, beam search with a beam size of 5 is used to find a translation that approximately maximizes the probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyper-parameters of NMT Model",
"sec_num": "4.2"
},
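Since the paper builds on OpenNMT (Klein et al., 2017), the hyper-parameters above correspond roughly to the following legacy OpenNMT-py invocation. This is a sketch under the assumption of OpenNMT-py 0.x flag names and hypothetical data paths, not the authors' exact configuration:

```shell
# Build character-level vocabularies capped at 10K, sequences up to 50 characters.
python preprocess.py -train_src train.src.char -train_tgt train.tgt.char \
    -valid_src valid.src.char -valid_tgt valid.tgt.char \
    -src_vocab_size 10000 -tgt_vocab_size 10000 \
    -src_seq_length 50 -tgt_seq_length 50 -save_data data/csc

# 2-layer bidirectional LSTM encoder / 2-layer LSTM decoder, 500 hidden units,
# 500-dim character embeddings, Adam with lr 0.001, gradient norm clipped at 5.
python train.py -data data/csc -save_model models/csc \
    -encoder_type brnn -layers 2 -rnn_size 500 -word_vec_size 500 \
    -optim adam -learning_rate 0.001 -max_grad_norm 5

# Decode with beam search, beam size 5.
python translate.py -model models/csc_best.pt -src test.src.char \
    -beam_size 5 -output pred.txt
```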
{
"text": "We use the training part of UDN Edit Logs and the artificially generated misspelled sentences as the training data. To investigate whether the artificially generated data improves the performance of our Chinese spelling check model, we compared the results produced by models trained on different combinations of UDN Edit Logs and artificially generated data. In addition, we use the pronunciation and shape of a character as additional features for both the source and target sides to train another model. For example, for the character \"\u8a63\", the pronunciation feature is \"\u3127\" (without considering the tone) and the shape features are \"\u8a00\" and \"\u65e8\". There are totally seven models trained for comparing, and only last one was trained with features, as shown in Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 758,
"end": 765,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Models Compared",
"sec_num": "4.3"
},
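The feature annotation described above can be sketched as follows. The separator, the placeholder symbol, and the tiny lookup tables are illustrative assumptions; a real system would use full zhuyin and glyph-decomposition dictionaries:

```python
# Hypothetical lookup tables for the worked example 詣 -> ㄧ (tone ignored), 言 + 旨.
PRONUNCIATION = {"\u8a63": "\u3127"}
COMPONENTS = {"\u8a63": ["\u8a00", "\u65e8"]}

def annotate(sentence, sep="|"):
    """Attach pronunciation and shape features to each character.

    Produces factored tokens such as 詣|ㄧ|言旨. Characters missing from the
    tables get a placeholder feature so every token has the same arity.
    """
    tokens = []
    for ch in sentence:
        pron = PRONUNCIATION.get(ch, "<unk>")
        shape = "".join(COMPONENTS.get(ch, ["<unk>"]))
        tokens.append(sep.join([ch, pron, shape]))
    return " ".join(tokens)
```

Both the source and target sides would be annotated this way before training the factored model.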
{
"text": "We use the metrics provided by SIGHAN-8 Bakeoff 2015 for Chinese spelling check shared task (Tseng et al., 2015) , which include False Positive Rate (FPR), Accuracy, Precision, Recall, and F1, to evaluate our systems. Table 5 shows the evaluation results of the two test sets we used. For UDN Edit Logs test set, as we can see, all models trained on edit logs plus artificially generated data perform better than the one trained on only edit logs. Moreover, UDN-only performs slightly worse, while ART-only performs the worst on all metrics. Though the model trained with sound and shape features has a relatively bad FPR, it has the best performance on accuracy, precision, recall, and F1 score. For the other test set, SIGHAN-7, UDN+ART (1:4) performs substantially better than the other models, noticeably improving on all metrics. Interestingly, in contrast to the results of UDN Edit Logs, the model trained on only edit logs has the worst performance, while the model trained on only artificially generated data performs reasonably well. We note that there is no obvious improvement in the performance of the model trained with sound and shape features except the recall.",
"cite_spans": [
{
"start": 92,
"end": 112,
"text": "(Tseng et al., 2015)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 218,
"end": 225,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.4"
},
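A minimal sketch of the sentence-level metrics, assuming the simple convention that a sentence counts as corrected only when the system output matches the reference exactly (the official SIGHAN scorer differs in some details):

```python
def evaluate(outputs, sources, references):
    """Sentence-level FPR, Accuracy, Precision, Recall, and F1.

    FPR is the fraction of error-free sentences the system changed;
    precision is computed over all sentences the system changed, and
    recall over all sentences that actually contain an error.
    """
    n = len(sources)
    n_err = sum(s != r for s, r in zip(sources, references))
    n_ok = n - n_err
    n_changed = sum(o != s for o, s in zip(outputs, sources))
    # True positives: erroneous sentences fully corrected.
    tp = sum(o == r and s != r for o, s, r in zip(outputs, sources, references))
    # False alarms: error-free sentences the system altered.
    fa = sum(o != s and s == r for o, s, r in zip(outputs, sources, references))
    fpr = fa / n_ok if n_ok else 0.0
    accuracy = sum(o == r for o, r in zip(outputs, references)) / n
    precision = tp / n_changed if n_changed else 0.0
    recall = tp / n_err if n_err else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"fpr": fpr, "accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```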
{
"text": "In general, our systems obtain lower average FPRs on the two test sets. There are two phenomena worth mentioning. First, the model trained on only edit logs (UDN-only) performs well on UDN Edit Logs but very poorly on SIGHAN-7. In contrast, the model trained on only artificially generated data (ART-only) has worst performance on UDN Edit Logs but acceptable performance on SIGHAN-7 Second, it is worth noting that the model trained with sound and shape features has significantly better accuracy, recall, and F1 score on UDN Edit Logs. However, on SIGHAN-7, only the recall is a little better than the model trained without using features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "Besides the test data, we also found that the model trained with additional features could correct some new and unseen errors. For example, the sentence \"\u4ed6\u5728\u6587\u5b78\u65b9\u9762\u6709\u5f88\u9ad8\u7684\u9020\u916f\u3002\" with a typo \"\u916f\", which is not corrected by a model trained without features probably because of the non-existence in the training data. However, the sentence is translated correctly into \"\u4ed6\u5728\u6587\u5b78\u65b9\u9762\u6709\u5f88\u9ad8\u7684\u9020\u8a63\u3002\" by the model trained with additional sound and shape features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "Moreover, to prove that NMT-based method performs better than traditional methods, we compare the evaluation results of our NMT models with dictionary-based models, as shown in Table 6 . The UDN dictionary contains a set of right-and-wrong word pairs from the training part of UDN Edit Logs, and the CONF dictionary is the confusion set we used to generate artificial error data. We use the dictionaries to correct errors directly. Specifically, we search errors in text and replace them with counter- parts in the dictionaries. As we can see, the NMTbased models have higher recall than the dictionarybased models.",
"cite_spans": [],
"ref_spans": [
{
"start": 177,
"end": 184,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
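The dictionary-based baseline can be sketched as a direct search-and-replace. The longest-match-first order is our assumption, added to avoid overlapping replacements:

```python
def dictionary_correct(sentence, wrong_to_right):
    """Baseline corrector: replace any listed wrong word with its correct
    counterpart, trying longer wrong words first so that a short entry
    cannot clobber part of a longer match.
    """
    for wrong in sorted(wrong_to_right, key=len, reverse=True):
        if wrong in sentence:
            sentence = sentence.replace(wrong, wrong_to_right[wrong])
    return sentence
```

Because such a baseline can only fix errors that appear verbatim in the dictionary, its recall is bounded by dictionary coverage, which is consistent with the NMT models' higher recall in Table 6.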
{
"text": "In summary, we have proposed a novel method for learning to correct typos in Chinese text. The method involves combining real edit logs and artificially generated errors to train a NMT model that translates a potentially erroneous sentence into correct one. The results prove that adding artificially generated data successfully improves the overall performance of error correction. We also found that some unseen errors might be corrected using NMT model. Many avenues exist for future research and improvement of our system. For example, the method for extracting misspelled sentences from newspaper edit logs could be improved. When extracting, we only consider the sentences contain consecutive single-character edit pairs. However, twocharacter edit pairs could also involve spelling correction. Moreover, we could investigate how to use character-level confusion sets to expand the scale of confused word pairs. If we have more possibly confused word pairs, we could generate more comprehensive artificial error data. Additionally, an interesting direction to explore is expanding the scope of error correction to include grammatical errors. Yet another direction of research would be to consider focusing on implementing the neural machine translation model for Chinese spelling check. In our work, we pay more attention to the aspect of data, so relatively less experiments were done for tuning parameters of model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "32nd Pacific Asia Conference on Language, Information and Computation Hong Kong, 1-3 December 2018Copyright 2018 by the authors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "32nd Pacific Asia Conference on Language, Information and Computation Hong Kong, 1-3 December 2018Copyright 2018 by the authors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A new approach for automatic chinese spelling correction",
"authors": [
{
"first": "Chao-Huang",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of Natural Language Processing Pacific Rim Symposium",
"volume": "95",
"issue": "",
"pages": "278--283",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chao-Huang Chang. 1995. A new approach for auto- matic chinese spelling correction. In Proceedings of Natural Language Processing Pacific Rim Symposium, volume 95, pages 278-283. Citeseer.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Chinese spelling checker based on statistical machine translation",
"authors": [
{
"first": "Jian-Cheng",
"middle": [],
"last": "Hsun-Wen Chiu",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"S"
],
"last": "Wu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "49--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hsun-wen Chiu, Jian-cheng Wu, and Jason S Chang. 2013. Chinese spelling checker based on statistical machine translation. In Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing, pages 49-53.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A multilayer convolutional encoder-decoder neural network for grammatical error correction",
"authors": [
{
"first": "Shamil",
"middle": [],
"last": "Chollampatt",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1801.08831"
]
},
"num": null,
"urls": [],
"raw_text": "Shamil Chollampatt and Hwee Tou Ng. 2018. A multilayer convolutional encoder-decoder neural net- work for grammatical error correction. arXiv preprint arXiv:1801.08831.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Generating artificial errors for grammatical error correction",
"authors": [
{
"first": "Mariano",
"middle": [],
"last": "Felice",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Yuan",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Student Research Workshop at the 14th Conference of the EACL",
"volume": "",
"issue": "",
"pages": "116--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mariano Felice and Zheng Yuan. 2014. Generating arti- ficial errors for grammatical error correction. In Pro- ceedings of the Student Research Workshop at the 14th Conference of the EACL, pages 116-126.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Artificial error generation for translation-based grammatical error correction",
"authors": [
{
"first": "Mariano",
"middle": [
"Felice"
],
"last": "",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mariano Felice. 2016. Artificial error generation for translation-based grammatical error correction. Tech- nical report, University of Cambridge, Computer Lab- oratory.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A chinese text corrector based on seq2seq model",
"authors": [
{
"first": "Sunyan",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Lang",
"suffix": ""
}
],
"year": 2017,
"venue": "Cyber-Enabled Distributed Computing and Knowledge Discovery",
"volume": "",
"issue": "",
"pages": "322--325",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sunyan Gu and Fei Lang. 2017. A chinese text correc- tor based on seq2seq model. In Cyber-Enabled Dis- tributed Computing and Knowledge Discovery (Cy- berC), 2017 International Conference on, pages 322- 325. IEEE.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Opennmt: Opensource toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "System Demonstrations",
"volume": "",
"issue": "",
"pages": "67--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander Rush. 2017. Opennmt: Open- source toolkit for neural machine translation. Pro- ceedings of ACL 2017, System Demonstrations, pages 67-72.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Visually and phonologically similar characters in incorrect chinese words: Analyses, identification, and applications",
"authors": [
{
"first": "C-L",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "M-H",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "K-W",
"middle": [],
"last": "Tien",
"suffix": ""
},
{
"first": "Y-H",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "S-H",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "C-Y",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2011,
"venue": "ACM Transactions on Asian Language Information Processing (TALIP)",
"volume": "10",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C-L Liu, M-H Lai, K-W Tien, Y-H Chuang, S-H Wu, and C-Y Lee. 2011. Visually and phonologically sim- ilar characters in incorrect chinese words: Analyses, identification, and applications. ACM Transactions on Asian Language Information Processing (TALIP), 10(2):10.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Introduction to ckip chinese word segmentation system for the first international chinese word segmentation bakeoff",
"authors": [
{
"first": "Wei-",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Yun",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Keh-Jiann",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2nd SIGHAN on CLP",
"volume": "",
"issue": "",
"pages": "168--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei-Yun Ma and Keh-Jiann Chen. 2003. Introduction to ckip chinese word segmentation system for the first international chinese word segmentation bakeoff. In Proceedings of the 2nd SIGHAN on CLP, pages 168- 171.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Artificial error generation with machine translation and syntactic patterns",
"authors": [
{
"first": "Marek",
"middle": [],
"last": "Rei",
"suffix": ""
},
{
"first": "Mariano",
"middle": [],
"last": "Felice",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "287--292",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marek Rei, Mariano Felice, Zheng Yuan, and Ted Briscoe. 2017. Artificial error generation with ma- chine translation and syntactic patterns. In Proceed- ings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 287- 292.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Introduction to sighan 2015 bake-off for chinese spelling check",
"authors": [
{
"first": "",
"middle": [],
"last": "Yuen-Hsien",
"suffix": ""
},
{
"first": "Lung-Hao",
"middle": [],
"last": "Tseng",
"suffix": ""
},
{
"first": "Li-Ping",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Hsin-Hsi",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 8th SIGHAN Workshop on CLP",
"volume": "",
"issue": "",
"pages": "32--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuen-Hsien Tseng, Lung-Hao Lee, Li-Ping Chang, and Hsin-Hsi Chen. 2015. Introduction to sighan 2015 bake-off for chinese spelling check. In Proceedings of the 8th SIGHAN Workshop on CLP, pages 32-37.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Reducing the false alarm rate of chinese character error detection and correction",
"authors": [
{
"first": "Shih-Hung",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yong-Zhi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ping-Che",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Tsun",
"middle": [],
"last": "Ku",
"suffix": ""
},
{
"first": "Chao-Lin",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2010,
"venue": "CIPS-SIGHAN Joint Conference on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shih-Hung Wu, Yong-Zhi Chen, Ping-Che Yang, Tsun Ku, and Chao-Lin Liu. 2010. Reducing the false alarm rate of chinese character error detection and cor- rection. In CIPS-SIGHAN Joint Conference on Chi- nese Language Processing.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Chinese spelling check evaluation at sighan bake-off 2013",
"authors": [
{
"first": "Shih-Hung",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Chao-Lin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lung-Hao",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "35--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shih-Hung Wu, Chao-Lin Liu, and Lung-Hao Lee. 2013. Chinese spelling check evaluation at sighan bake-off 2013. In Proceedings of the Seventh SIGHAN Work- shop on Chinese Language Processing, pages 35-42.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Neural language correction with character-based attention",
"authors": [
{
"first": "Ziang",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Anand",
"middle": [],
"last": "Avati",
"suffix": ""
},
{
"first": "Naveen",
"middle": [],
"last": "Arivazhagan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Andrew Y",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1603.09727"
]
},
"num": null,
"urls": [],
"raw_text": "Ziang Xie, Anand Avati, Naveen Arivazhagan, Dan Juraf- sky, and Andrew Y Ng. 2016. Neural language cor- rection with character-based attention. arXiv preprint arXiv:1603.09727.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Grammatical error correction using neural machine translation",
"authors": [
{
"first": "Zheng",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 NAACL-HLT",
"volume": "",
"issue": "",
"pages": "380--386",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zheng Yuan and Ted Briscoe. 2016. Grammatical error correction using neural machine translation. In Pro- ceedings of the 2016 NAACL-HLT, pages 380-386.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Automatic detecting/correcting errors in chinese text by an approximate word-matching algorithm",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Changning",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Haihua",
"middle": [],
"last": "Pan",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 38th Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "248--254",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Zhang, Changning Huang, Ming Zhou, and Haihua Pan. 2000. Automatic detecting/correcting errors in chinese text by an approximate word-matching algo- rithm. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, pages 248-254. Association for Computational Linguistics. \u8521\u6709\u79e9. 2003. \u65b0\u7de8\u932f\u5225\u5b57\u9580\u8a3a. \u8a9e\u6587\u8a13\u7df4\u53e2\u66f8. \u87a2\u706b \u87f2.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "A screenshot of the system AccuSpell",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "An example of edit logs in HTML format",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "Example outputs for the stage of extracting misspelled sentences",
"num": null
},
"TABREF0": {
"type_str": "table",
"html": null,
"content": "<table/>",
"text": "Examples of confusion setCorrect Word Wrong Words \u90e8\u7f72 \u5e03\u7f72, \u90e8\u8655, \u4f48\u7f72, \u6b65\u7f72 \u8ce0\u7f6a \u57f9\u7f6a, \u966a\u7f6a",
"num": null
},
"TABREF1": {
"type_str": "table",
"html": null,
"content": "<table/>",
"text": "Artificial misspelled sentences for \"\u4e5f \u8ddf \u60a3\u8005 \u8ce0\u7f6a \u4e86 \u5341 \u5206\u9418\"",
"num": null
},
"TABREF2": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>Confusion Set</td><td>Number of confused word pairs</td></tr><tr><td>\u806f\u5408\u5831\u7d71\u4e00\u7528\u5b57</td><td>1,056</td></tr><tr><td>\u6771\u6771\u932f\u5225\u5b57</td><td>38,125</td></tr><tr><td>\u65b0\u7de8\u5e38\u7528\u932f\u5225\u5b57\u9580\u8a3a</td><td>492</td></tr><tr><td>\u5e38\u898b\u932f\u5225\u5b57\u8fa8\u6b63\u8fad\u5178</td><td>601</td></tr><tr><td>\u570b\u4e2d\u932f\u5b57\u8868</td><td>1,460</td></tr><tr><td>\u4e5f\u4e0d\u6025\u8457[</td><td/></tr></table>",
"text": "Number of word pairs of five confusion sets",
"num": null
},
"TABREF3": {
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td colspan=\"2\">UDN Artificially</td></tr><tr><td>Model</td><td colspan=\"2\">Edit Generated</td></tr><tr><td/><td>Logs</td><td>Data</td></tr><tr><td>UDN-only</td><td>226,913</td><td>-</td></tr><tr><td>UDN+ART (1:1)</td><td>226,913</td><td>225,985</td></tr><tr><td>UDN+ART (1:2)</td><td>226,913</td><td>440,143</td></tr><tr><td>UDN+ART (1:3)</td><td>226,913</td><td>673,006</td></tr><tr><td>UDN+ART (1:4)</td><td>226,913</td><td>899,385</td></tr><tr><td>ART-only</td><td>-</td><td>899,385</td></tr><tr><td>FEAT-Sound&amp;Shape</td><td>226,913</td><td>673,006</td></tr><tr><td colspan=\"3\">* FEAT-Sound&amp;Shape is trained on the same data in</td></tr><tr><td>UDN+ART (1:3)</td><td/><td/></tr></table>",
"text": "The number of training sentences of the 7 models",
"num": null
},
"TABREF4": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>Test Set</td><td>Model</td><td colspan=\"4\">FPR Accuracy Precision Recall F1</td></tr><tr><td/><td>UDN-only</td><td>.066</td><td>.64</td><td>.80</td><td>.64 .71</td></tr><tr><td/><td>UDN+ART (1:1)</td><td>.090</td><td>.69</td><td>.84</td><td>.69 .76</td></tr><tr><td/><td>UDN+ART (1:2)</td><td>.063</td><td>.71</td><td>.86</td><td>.72 .78</td></tr><tr><td>UDN Edit Logs</td><td>UDN+ART (1:3)</td><td>.066</td><td>.70</td><td>.86</td><td>.69 .76</td></tr><tr><td/><td>UDN+ART (1:4)</td><td>.059</td><td>.71</td><td>.87</td><td>.71 .78</td></tr><tr><td/><td>ART-only</td><td>.137</td><td>.35</td><td>.43</td><td>.26 .33</td></tr><tr><td/><td>FEAT-Sound&amp;Shape</td><td>.098</td><td>.72</td><td>.88</td><td>.72 .79</td></tr><tr><td/><td>UDN-only</td><td>.109</td><td>.74</td><td>.19</td><td>.17 .18</td></tr><tr><td/><td>UDN+ART (1:1)</td><td>.089</td><td>.83</td><td>.50</td><td>.59 .54</td></tr><tr><td/><td>UDN+ART (1:2)</td><td>.081</td><td>.84</td><td>.54</td><td>.61 .57</td></tr><tr><td>SIGHAN-7</td><td>UDN+ART (1:3)</td><td>.078</td><td>.85</td><td>.56</td><td>.62 .58</td></tr><tr><td/><td>UDN+ART (1:4)</td><td>.073</td><td>.85</td><td>.58</td><td>.63 .61</td></tr><tr><td/><td>ART-only</td><td>.079</td><td>.84</td><td>.53</td><td>.58 .56</td></tr><tr><td/><td>FEAT-Sound&amp;Shape</td><td>.097</td><td>.83</td><td>.51</td><td>.64 .57</td></tr></table>",
"text": "Evaluation on UDN Edit Logs and SIGHAN-7 test set",
"num": null
},
"TABREF5": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>Model</td><td colspan=\"2\">UDN Edit Logs SIGHAN-7</td></tr><tr><td>Dictionary-based U DN</td><td>.13</td><td>.04</td></tr><tr><td>UDN-only</td><td>.64</td><td>.17</td></tr><tr><td>Dictionary-based CON F</td><td>.19</td><td>.49</td></tr><tr><td>ART-only</td><td>.26</td><td>.58</td></tr><tr><td>Dictionary-based U DN +CON F</td><td>.19</td><td>.38</td></tr><tr><td>UDN+ART (1:4)</td><td>.71</td><td>.63</td></tr></table>",
"text": "Evaluation results (recall rate) of dictionary-based approach",
"num": null
}
}
}
}