{
"paper_id": "2019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:54:24.129976Z"
},
"title": "Sequence to Sequence Convolutional Neural Network for Automatic Spelling Correction",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Hl\u00e1dek",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Communications Technical University of Ko\u0161ice",
"location": {
"country": "Slovakia"
}
},
"email": "daniel.hladek@tuke.sk"
},
{
"first": "Mat\u00fa\u0161",
"middle": [],
"last": "Pleva",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Communications Technical University of Ko\u0161ice",
"location": {
"country": "Slovakia"
}
},
"email": "matus.pleva@tuke.sk"
},
{
"first": "J\u00e1n",
"middle": [],
"last": "Sta\u0161",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Communications Technical University of Ko\u0161ice",
"location": {
"country": "Slovakia"
}
},
"email": "jan.stas@tuke.sk"
},
{
"first": "Yuan-Fu",
"middle": [],
"last": "Liao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Taipei University of Technology",
"location": {}
},
"email": "yfliao@mail.ntut.edu.tw"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The paper proposes a system that compensates most of the noise in a text in natural language caused by technical imperfection of the input device such as keyboard or scanner with optical character recognition, quick typing, or writer incompetence. Correcting the spelling errors in the text improves the performance of the following natural language processing. The incorrect sequence of characters is transcribed into another sequence of correct characters by a neural network with encoder-decoder architecture. Our approach to automatic spelling correction considers characters in an erroneous sentence as words of the source languages. The neural network searches for the best sequence of output characters for the given input. The proposed approach for spelling correction does not require any or minimal amount of training data. Instead, the error model is expressed by a simple component that distorts unannotated data and creates any necessary quantity of training examples for a neural network. The experimental results show that the presented approach significantly improves the distorted data (from 50% WER to 0.09% WER) with distortion lower than 1.5% WER.",
"pdf_parse": {
"paper_id": "2019",
"_pdf_hash": "",
"abstract": [
{
"text": "The paper proposes a system that compensates most of the noise in a text in natural language caused by technical imperfection of the input device such as keyboard or scanner with optical character recognition, quick typing, or writer incompetence. Correcting the spelling errors in the text improves the performance of the following natural language processing. The incorrect sequence of characters is transcribed into another sequence of correct characters by a neural network with encoder-decoder architecture. Our approach to automatic spelling correction considers characters in an erroneous sentence as words of the source languages. The neural network searches for the best sequence of output characters for the given input. The proposed approach for spelling correction does not require any or minimal amount of training data. Instead, the error model is expressed by a simple component that distorts unannotated data and creates any necessary quantity of training examples for a neural network. The experimental results show that the presented approach significantly improves the distorted data (from 50% WER to 0.09% WER) with distortion lower than 1.5% WER.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Written or scanned text is often not in the intended form. Writer or the input device often generate deviations that make it less understandable. The errors are usually not a problem in casual communication but make machine processing more complicated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Removal of spelling errors helps with the following processing of the text in natural language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Automatic spelling correction (ASC) is an essential part of the processing of the documents with noisy data in natural language. ASC helps to recover the intended canonical form of the text and improves the quality of the input data for the following natural language processing (NLP) components. It supports processing of digitized documents, automated proofreaders, or information retrieval systems (e.g TREC-5 confusion track [1] ). The main motivation for this work is an improvement of the training data for language model for speech recognition [2] .",
"cite_spans": [
{
"start": 429,
"end": 432,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 551,
"end": 554,
"text": "[2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The task of automatic error-correction is to generate the most likely correct word-for ms given a misspelled word-form [3] .",
"cite_spans": [
{
"start": 119,
"end": 122,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "State of the art",
"sec_num": "2."
},
{
"text": "Previous approaches to ASC, such as correcting spelling errors in the Chinese language [4] use classical statistical methods, such as the hidden Markov model, n-gram language model, loglinear regression, or forward-backward algorithm [5] .",
"cite_spans": [
{
"start": 87,
"end": 90,
"text": "[4]",
"ref_id": "BIBREF3"
},
{
"start": 234,
"end": 237,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "State of the art",
"sec_num": "2."
},
{
"text": "The usual form of mathematical formalism is a noisy channel proposed by Shannon [6] .",
"cite_spans": [
{
"start": 80,
"end": 83,
"text": "[6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "State of the art",
"sec_num": "2."
},
{
"text": "as finding the best correction candidate from a list of possible correction candidates \u2208 ( is a valid word dictionary) with the best unnormalized probability [7] :",
"cite_spans": [
{
"start": 158,
"end": 161,
"text": "[7]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shannon defines the ASC of a possibly incorrect word",
"sec_num": null
},
{
"text": "= max \u2208 ( ) ( | ( ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shannon defines the ASC of a possibly incorrect word",
"sec_num": null
},
{
"text": "The error model ( | ) estimates the probability of unknown string instead of real word . The error model characterizes the spelling correction problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shannon defines the ASC of a possibly incorrect word",
"sec_num": null
},
{
"text": "The context model ( ) calculated the probability of the correction candidate according to the surrounding words. A finite-state based system, such as Hunspell 1 proposes a list of correction candidates, and a language model helps to choose the best spelling correction candidate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shannon defines the ASC of a possibly incorrect word",
"sec_num": null
},
{
"text": "The task of spelling correction is similar to machine translation (MT). An ASC trans lates input sentence containing spelling errors into another sentence in a \"correct\" lang uage. Machine translation converts a sequence of words in the source language into another sequence of words in the target language. Formally, MT is the search for the best target seq uence T given source sequence S using model P [8] :",
"cite_spans": [
{
"start": 405,
"end": 408,
"text": "[8]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shannon defines the ASC of a possibly incorrect word",
"sec_num": null
},
{
"text": "= max ( | )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shannon defines the ASC of a possibly incorrect word",
"sec_num": null
},
{
"text": "There are a couple of approaches that used statistical MT for ASC before, such as machine translation spelling for historical texts [9] . [10] attempts to character-level spelling correction. Neural networks with encoder/decoder architecture brought significant improvement in the performance of SMT. Current deep neural networks [11] can consider a much bro ader context of words or characters. This ability allows us to use an architecture that is based only on neural networks and considers only characters. A neural model can be used to score any given pair of input and output sequences [12] .",
"cite_spans": [
{
"start": 132,
"end": 135,
"text": "[9]",
"ref_id": "BIBREF8"
},
{
"start": 138,
"end": 142,
"text": "[10]",
"ref_id": "BIBREF9"
},
{
"start": 330,
"end": 334,
"text": "[11]",
"ref_id": "BIBREF10"
},
{
"start": 592,
"end": 596,
"text": "[12]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shannon defines the ASC of a possibly incorrect word",
"sec_num": null
},
{
"text": "Sequence to sequence neural networks architecture transforms a sequence of symbols from the source language to another sequence of symbols in the target language. Sequences can have different lengths. One symbol is encoded into an n-dimensional binary vector with one dimension for each possible character. The embedding layer reduces the dimension of the input vector. The transformed input matrix has dimension equal to the embedding dimension and size of the sequence. The neural network that transcribes one sequence of symbols into another consists of the encoder and the decoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence to sequence spelling correction",
"sec_num": "3."
},
{
"text": "\"The encoder maps a variable-length source sentence to a fixed-length vector, and the decoder maps the vector representation back to a variable-length sentence. The two ne tworks are trained jointly to maximize the conditional probability of the target sequen ce given source sequence.\". [12] Knowing the probability of the next symbol enables the decoder to sample probable sequences of symbols. \"Sequence to sequence\" systems usually use recurrent neural networks (RNN) or convolutional neural networks (CNN) for encoding and decoding.",
"cite_spans": [
{
"start": 288,
"end": 292,
"text": "[12]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence to sequence spelling correction",
"sec_num": "3."
},
{
"text": "\"The dominant approach to date encodes the input sequence with a series of bi-directional recurrent neural networks (RNN) and generates a variable-length output with another set of decoder RNNs, both of which interface via a soft-attention mechanism.\" [13] Recurrent neural networks perform well with tasks with variable-length input and output. ",
"cite_spans": [
{
"start": 252,
"end": 256,
"text": "[13]",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence to sequence spelling correction",
"sec_num": "3."
},
{
"text": "The RNN always maintain a hidden state and updates it with each new item in the input sequence. Compared to RNN, the current state in the input sequence of a convolutional network does not depend on the previous, which makes the computation easier. The processor can compute convolution for the whole sequence at once.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The proposed convolutional network architecture",
"sec_num": "4."
},
{
"text": "\"Convolutions create representations for fixed-size contexts; however, the effective context size of the network can easily be made larger by stacking several layers on top of each other.\" [13] The approach uses a convolutional sequence-to-sequence architecture by [13] . The convolutional architecture uses gated linear units (GLU) [16] with residual connections [17] .",
"cite_spans": [
{
"start": 189,
"end": 193,
"text": "[13]",
"ref_id": "BIBREF12"
},
{
"start": 265,
"end": 269,
"text": "[13]",
"ref_id": "BIBREF12"
},
{
"start": 333,
"end": 337,
"text": "[16]",
"ref_id": "BIBREF15"
},
{
"start": 364,
"end": 368,
"text": "[17]",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The proposed convolutional network architecture",
"sec_num": "4."
},
{
"text": "\"The attention mechanism looks at the input sequence and decides at each step which parts are important.\" The attention mechanism \"writes down\" quintessential keywords from the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The proposed convolutional network architecture",
"sec_num": "4."
},
{
"text": "The attention-mechanism considers several other inputs at the same time and decides which ones are important by attributing different weights to those inputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The proposed convolutional network architecture",
"sec_num": "4."
},
{
"text": "The convolutional architecture was selected because recent results [13] show that they offer superior or comparable performance and higher speed of learning when compared to the moreestablished recurrent networks. ",
"cite_spans": [
{
"start": 67,
"end": 71,
"text": "[13]",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The proposed convolutional network architecture",
"sec_num": "4."
},
{
"text": "The proposed neural network needs a sufficiently large text in a natural language for training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},
{
"text": "We have composed a set of newspaper articles in the Slovak language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},
{
"text": "One training sample for the neural network consists of three words. The \"clean\" text forms the target part of one sample. Table 2 summarizes the size of the text database.",
"cite_spans": [],
"ref_spans": [
{
"start": 122,
"end": 129,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},
{
"text": "The error model distorts characters in the sample and creates the source part. The distorted and original sequence form a training pair. The neural network learns a function that is inverse to the one that generated the training data. shows measure of preliminary distortion of the testing set by the error model without any processing. Figure 1 displays the complete learning curve in CER for each training iteration.",
"cite_spans": [],
"ref_spans": [
{
"start": 337,
"end": 345,
"text": "Figure 1",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},
{
"text": "Performance of the system is improved only slightly after tenth round of training. The second experiment measures how the trained neural network damages useful data. Input of the ASC system is a clean text. The Table 4 shows how the neural network distorts the clean data. The distortion of the clean data is very low (0,0022 CER) and decreases with number of training iterations. Distortion CER is marked in the Table 5 for each training round. It shows clear correlation with the learning curve in Figure 1 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 211,
"end": 218,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 413,
"end": 420,
"text": "Table 5",
"ref_id": "TABREF5"
},
{
"start": 500,
"end": 508,
"text": "Figure 1",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},
{
"text": "The experiments confirm that the proposed approach can remove most of the noise from a text in natural language. An expert can design the artificial error model according to the typical error patterns. It is possible to use statistical estimation with relatively small training data, e.g. a letter confusion matrix ( [5] , [18] ). Processing of the clean data has very low distortion and the proposed neural network can be used without damaging the clean data.",
"cite_spans": [
{
"start": 317,
"end": 320,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 323,
"end": 327,
"text": "[18]",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "http://hunspell.github.io/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The TREC-5 Confusion Track: Comparing Retrieval Methods for Scanned Text",
"authors": [
{
"first": "P",
"middle": [
"B"
],
"last": "Kantor",
"suffix": ""
},
{
"first": "E",
"middle": [
"M"
],
"last": "Voorhees",
"suffix": ""
}
],
"year": 2000,
"venue": "Inf. Retr. Boston",
"volume": "2",
"issue": "2",
"pages": "165--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. B. Kantor and E. M. Voorhees, \"The TREC-5 Confusion Track: Comparing Retrieval Methods for Scanned Text,\" Inf. Retr. Boston., vol. 2, no. 2, pp. 165-76, 2000.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Advances in the Slovak Judicial domain dictation system",
"authors": [
{
"first": "M",
"middle": [],
"last": "Rusko",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "9561",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Rusko et al., Advances in the Slovak Judicial domain dictation system, vol. 9561. 2016.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "State-of-the-art in weighted finite-state spell-checking",
"authors": [
{
"first": "T",
"middle": [
"A"
],
"last": "Pirinen",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lind\u00e9n",
"suffix": ""
}
],
"year": 2014,
"venue": "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
"volume": "8404",
"issue": "",
"pages": "519--532",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. A. Pirinen and K. Lind\u00e9n, \"State-of-the-art in weighted finite-state spell-checking,\" in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2014, vol. 8404 LNCS, no. PART 2, pp. 519-532.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Discriminative Reranking for Spelling Correction",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. 20th Pacific Asia Conf. Lang. Inf. Comput",
"volume": "",
"issue": "",
"pages": "64--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Zhang, P. He, W. Xiang, and M. Li, \"Discriminative Reranking for Spelling Correction,\" Proc. 20th Pacific Asia Conf. Lang. Inf. Comput., pp. 64-71, 2007.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning string distance with smoothing for OCR spelling correction",
"authors": [
{
"first": "D",
"middle": [],
"last": "Hl\u00e1dek",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Sta\u0161",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ond\u00e1\u0161",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Juh\u00e1r",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Kov\u00e1cs",
"suffix": ""
}
],
"year": 2016,
"venue": "Multimed. Tools Appl",
"volume": "",
"issue": "",
"pages": "1--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Hl\u00e1dek, J. Sta\u0161, S. Ond\u00e1\u0161, J. Juh\u00e1r, and L. Kov\u00e1cs, \"Learning string distance with smoothing for OCR spelling correction,\" Multimed. Tools Appl., pp. 1-19, 2016.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A mathematical theory of communication",
"authors": [
{
"first": "C",
"middle": [
"E"
],
"last": "Shannon",
"suffix": ""
}
],
"year": 1948,
"venue": "Bell Syst. Tech. J",
"volume": "27",
"issue": "4",
"pages": "623--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. E. Shannon, \"A mathematical theory of communication,\" Bell Syst. Tech. J., vol. 27, no. 4, pp. 623-56, Oct. 1948.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An improved error model for noisy channel spelling correction",
"authors": [
{
"first": "E",
"middle": [],
"last": "Brill",
"suffix": ""
},
{
"first": "R",
"middle": [
"C"
],
"last": "Moore",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 38th Annual Meeting on Association for Computational Linguistics ACL 00",
"volume": "",
"issue": "",
"pages": "286--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Brill and R. C. Moore, \"An improved error model for noisy channel spelling correction,\" in Proceedings of the 38th Annual Meeting on Association for Computational Linguistics ACL 00, 2000, pp. 286-93.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Sequence to Sequence Learning with Neural Networks",
"authors": [
{
"first": "I",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Q",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Adv. Neural Inf. Process. Syst",
"volume": "27",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Sutskever, O. Vinyals, and Q. V. Le, \"Sequence to Sequence Learning with Neural Networks,\" Adv. Neural Inf. Process. Syst. 27 (NIPS 2014), pp. 3104-3112, 2014.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An Approach to Unsupervised Historical Text Normalisation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Mitankin",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Gerdjikov",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Mihov",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the First International Conference on Digital Access to Textual Cultural Heritage -DATeCH '14",
"volume": "",
"issue": "",
"pages": "29--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Mitankin, S. Gerdjikov, and S. Mihov, \"An Approach to Unsupervised Historical Text Normalisation,\" in Proceedings of the First International Conference on Digital Access to Textual Cultural Heritage -DATeCH '14, 2014, pp. 29-34.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Diacritics restoration: learning from letters versus learning from words",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "2276",
"issue": "",
"pages": "96--113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Mihalcea, \"Diacritics restoration: learning from letters versus learning from words,\" in CICLing 2002, vol. 2276, A. Gelbukh, Ed. Springer Berlin Heidelberg, 2002, pp. 96-113.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Spelling Correction Using Recurrent Neural Networks and Character Level N-gram",
"authors": [
{
"first": "A",
"middle": [
"C"
],
"last": "Kinaci",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 International Conference on Artificial Intelligence and Data Processing",
"volume": "",
"issue": "",
"pages": "1--4",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. C. Kinaci, \"Spelling Correction Using Recurrent Neural Networks and Character Level N-gram,\" in 2018 International Conference on Artificial Intelligence and Data Processing, IDAP 2018, 2019, pp. 1-4.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Cho et al., \"Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation,\" in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014, pp. 1724-1734.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Convolutional Sequence to Sequence Learning",
"authors": [
{
"first": "J",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Y",
"middle": [
"N"
],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Gehring, M. Auli, D. Grangier, D. Yarats, and Y. N. Dauphin, \"Convolutional Sequence to Sequence Learning,\" May 2017.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling",
"authors": [
{
"first": "J",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "NIPS 2014 Deep Learning and Representation Learning Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, \"Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling,\" in NIPS 2014 Deep Learning and Representation Learning Workshop, 2014.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Long Short-Term Memory",
"authors": [
{
"first": "S",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Comput",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Hochreiter and J. Schmidhuber, \"Long Short-Term Memory,\" Neural Comput., vol. 9, no. 8, pp. 1735-1780, Nov. 1997.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Language Modeling with Gated Convolutional Networks",
"authors": [
{
"first": "Y",
"middle": [
"N"
],
"last": "Dauphin",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Grangier",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. N. Dauphin, A. Fan, M. Auli, and D. Grangier, \"Language Modeling with Gated Convolutional Networks,\" Dec. 2016.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Deep residual learning for image recognition",
"authors": [
{
"first": "K",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. He, X. Zhang, S. Ren, and J. Sun, \"Deep residual learning for image recognition,\" in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2016.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning string-edit distance",
"authors": [
{
"first": "E",
"middle": [
"S"
],
"last": "Ristad",
"suffix": ""
},
{
"first": "P",
"middle": [
"N"
],
"last": "Yianilos",
"suffix": ""
}
],
"year": 1998,
"venue": "IEEE Trans. Pattern Anal. Mach. Intell",
"volume": "20",
"issue": "5",
"pages": "522--554",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. S. Ristad and P. N. Yianilos, \"Learning string-edit distance,\" IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 5, pp. 522-32, May 1998.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "fairseq: A Fast, Extensible Toolkit for Sequence Modeling",
"authors": [
{
"first": "M",
"middle": [],
"last": "Ott",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Ott et al., \"fairseq: A Fast, Extensible Toolkit for Sequence Modeling,\" Apr. 2019.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Statistical machine translation uses classical methods, such as hidden Markov models, n-gram language models, and sentence alignment 2 . SMT systems have weaknesses that prevent to reach better results. The statistical approaches can calculate only with relatively short contexts (three items in the input sequence maximum).",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "Neural networks are sensitive to the amount of training data. Obtaining reasonable precision requires the sufficient size of the training set. Preparation of the data for training of the neural network is difficult, timely, and expensive. Our approach overcomes the problem of data sparsity by rule-based error model that utilizes any unannotated data in the target language and prepares an artificial training set. A sequence of edit operations describes a spelling error. Usually, the error model considers insertion, deletion, and substitution of characters. A statistical error model is estimated from training data that contain the original and the erroneous strings. An artificial error function randomly modifies some characters in the dataset and creates a distorted string. Example of the training set is in the Table 1. The training of the neural network uses the distorted string as input and the original string as the output. Training of the network estimates the reverse function and the network can guess the intended form of a distorted string. The error function can generate any amount of training data from a text in natural language.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "CER Learning Curve",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF1": {
"content": "<table><tr><td>of the training data</td><td/></tr><tr><td>Distorted input</td><td>Correct input</td></tr><tr><td>faktom vshak o\u010ftava</td><td>faktom v\u0161ak ost\u00e1va</td></tr><tr><td>\u017ee stavy zamesnancov</td><td>\u017ee stavy zamestnancov</td></tr></table>",
"type_str": "table",
"text": "",
"html": null,
"num": null
},
"TABREF2": {
"content": "<table><tr><td>Set</td><td/><td>Samples</td><td>Words</td><td>Characters</td></tr><tr><td colspan=\"2\">train</td><td>12 000 000</td><td>36 000 000</td><td>206 711 896</td></tr><tr><td>test</td><td/><td>50 000</td><td>150 000</td><td>873 358</td></tr><tr><td colspan=\"4\">The error model uses the following rules and probabilities:</td></tr><tr><td>\u2022</td><td colspan=\"3\">Insertion of arbitrary character 0.02</td></tr><tr><td>\u2022</td><td colspan=\"2\">Deletion of arbitrary character 0.02</td><td/></tr><tr><td>\u2022</td><td colspan=\"3\">Replacement of arbitrary character 0.08</td></tr><tr><td>\u2022</td><td colspan=\"2\">Keeping the character 0.9</td><td/></tr><tr><td colspan=\"3\">A forward-backward algorithm by</td><td/></tr></table>",
"type_str": "table",
"text": "Experimental dataset size Ristad and Yanilos[18] can estimate parameters of the error model for a set of training examples, which is left for the further research. The ASC system uses Fairseq toolkit[19]. The table 3 summarizes the parameters of the neural network (named fconv_iwslt_de_en in Fairseq toolkit).",
"html": null,
"num": null
},
"TABREF3": {
"content": "<table><tr><td>Dropout</td></tr></table>",
"type_str": "table",
"text": "Neural network architecture is a usual form of evaluation of spelling correction models. Its advantage is that the size of the target and source sequence does not have to be the same. The metric first aligns the sequences with the hypothesis and with the golden truth. The WER is defined as a ratio of the counts of the inserted, deleted, and replaced words: SER) is ratio of incorrect samples to all samples in the testing set. The first experiment measures CER, WER, SER of correcting randomly distorted testing text omitted from the training. TheTable 4displays performance of the system after selected iterations(1,5,10,15) of the training of the neural network. The first row (0 -no correction)",
"html": null,
"num": null
},
"TABREF4": {
"content": "<table><tr><td>Iteration</td><td>CER</td><td>WER</td><td>SER</td></tr><tr><td>0 (no correctoin)</td><td>0.1096</td><td>0.5173</td><td>0.8463</td></tr><tr><td>1</td><td>0.0386</td><td>0.1447</td><td>0.3396</td></tr><tr><td>5</td><td>0.0307</td><td>0.1108</td><td>0.2677</td></tr><tr><td>10</td><td>0.0279</td><td>0.0998</td><td>0.2443</td></tr><tr><td>15</td><td>0.0273</td><td>0.0971</td><td>0.2387</td></tr></table>",
"type_str": "table",
"text": "Performance of the proposed system",
"html": null,
"num": null
},
"TABREF5": {
"content": "<table><tr><td>Iteration</td><td>CER</td><td>WER</td><td>SER</td></tr><tr><td>1</td><td>0.00342</td><td>0.01594</td><td>0.044</td></tr></table>",
"type_str": "table",
"text": "Distortion on the clean data",
"html": null,
"num": null
}
}
}
}