{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:10:43.844410Z"
},
"title": "GECToR \u2013 Grammatical Error Correction: Tag, Not Rewrite",
"authors": [
{
"first": "Kostiantyn",
"middle": [],
"last": "Omelianchuk",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Vitaliy",
"middle": [],
"last": "Atrasevych",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Artem",
"middle": [],
"last": "Chernodub",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Oleksandr",
"middle": [],
"last": "Skurzhanskyi",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we present a simple and efficient GEC sequence tagger using a Transformer encoder. Our system is pre-trained on synthetic data and then fine-tuned in two stages: first on errorful corpora, and second on a combination of errorful and error-free parallel corpora. We design custom token-level transformations to map input tokens to target corrections. Our best single-model/ensemble GEC tagger achieves an F0.5 of 65.3/66.5 on CoNLL-2014 (test) and F0.5 of 72.4/73.6 on BEA-2019 (test). Its inference speed is up to 10 times as fast as a Transformer-based seq2seq GEC system. The code and trained models are publicly available 1 .",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we present a simple and efficient GEC sequence tagger using a Transformer encoder. Our system is pre-trained on synthetic data and then fine-tuned in two stages: first on errorful corpora, and second on a combination of errorful and error-free parallel corpora. We design custom token-level transformations to map input tokens to target corrections. Our best single-model/ensemble GEC tagger achieves an F0.5 of 65.3/66.5 on CoNLL-2014 (test) and F0.5 of 72.4/73.6 on BEA-2019 (test). Its inference speed is up to 10 times as fast as a Transformer-based seq2seq GEC system. The code and trained models are publicly available 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Neural Machine Translation (NMT)-based approaches (Sennrich et al., 2016a) have become the preferred method for the task of Grammatical Error Correction (GEC) 2 . In this formulation, errorful sentences correspond to the source language, and error-free sentences correspond to the target language. Recently, Transformer-based (Vaswani et al., 2017) sequence-to-sequence (seq2seq) models have achieved state-of-the-art performance on standard GEC benchmarks (Bryant et al., 2019) . Now the focus of research has shifted more towards generating synthetic data for pretraining the Transformer-NMT-based GEC systems (Grundkiewicz et al., 2019; Kiyono et al., 2019) . NMT-based GEC systems suffer from several issues which make them inconvenient for real-world deployment: (i) slow inference speed, (ii) demand for * Authors contributed equally to this work, names are given in alphabetical order.",
"cite_spans": [
{
"start": 50,
"end": 74,
"text": "(Sennrich et al., 2016a)",
"ref_id": "BIBREF19"
},
{
"start": 326,
"end": 348,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 457,
"end": 478,
"text": "(Bryant et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 612,
"end": 639,
"text": "(Grundkiewicz et al., 2019;",
"ref_id": "BIBREF6"
},
{
"start": 640,
"end": 660,
"text": "Kiyono et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1 https://github.com/grammarly/gector 2 http://nlpprogress.com/english/ grammatical_error_correction.html (Accessed 1 April 2020).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "large amounts of training data and (iii) interpretability and explainability; they require additional functionality to explain corrections, e.g., grammatical error type classification (Bryant et al., 2017) .",
"cite_spans": [
{
"start": 184,
"end": 205,
"text": "(Bryant et al., 2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we deal with the aforementioned issues by simplifying the task from sequence generation to sequence tagging. Our GEC sequence tagging system consists of three training stages: pretraining on synthetic data, fine-tuning on an errorful parallel corpus, and finally, fine-tuning on a combination of errorful and error-free parallel corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Related work. LaserTagger (Malmi et al., 2019) combines a BERT encoder with an autoregressive Transformer decoder to predict three main edit operations: keeping a token, deleting a token, and adding a phrase before a token. In contrast, in our system, the decoder is a softmax layer. PIE (Awasthi et al., 2019) is an iterative sequence tagging GEC system that predicts token-level edit operations. While their approach is the most similar to ours, our work differs from theirs as described in our contributions below: 1. We develop custom g-transformations: token-level edits to perform (g)rammatical error corrections. Predicting g-transformations instead of regular tokens improves the generalization of our GEC sequence tagging system.",
"cite_spans": [
{
"start": 26,
"end": 46,
"text": "(Malmi et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 288,
"end": 310,
"text": "(Awasthi et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. We decompose the fine-tuning stage into two stages: fine-tuning on errorful-only sentences and further fine-tuning on a small, high-quality dataset containing both errorful and error-free sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. We achieve superior performance by incorporating a pre-trained Transformer encoder in our GEC sequence tagging system. In our experiments, encoders from XLNet and RoBERTa outperform three other cutting-edge Transformer encoders (ALBERT, BERT, and GPT-2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Table 1 describes the finer details of datasets used for different training stages. Synthetic data. For pretraining stage I, we use 9M parallel sentences with synthetically generated grammatical errors (Awasthi et al., 2019) 3 .",
"cite_spans": [
{
"start": 202,
"end": 224,
"text": "(Awasthi et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2"
},
{
"text": "Training data. We use the following datasets for fine-tuning stages II and III: National University of Singapore Corpus of Learner English (NUCLE) 4 (Dahlmeier et al., 2013) , Lang-8 Corpus of Learner English (Lang-8) 5 (Tajiri et al., 2012) , FCE dataset 6 (Yannakoudakis et al., 2011) , the publicly available part of the Cambridge Learner Corpus (Nicholls, 2003) and Write & Improve + LOCNESS Corpus (Bryant et al., 2019) 7 .",
"cite_spans": [
{
"start": 149,
"end": 173,
"text": "(Dahlmeier et al., 2013)",
"ref_id": "BIBREF4"
},
{
"start": 220,
"end": 241,
"text": "(Tajiri et al., 2012)",
"ref_id": "BIBREF21"
},
{
"start": 258,
"end": 286,
"text": "(Yannakoudakis et al., 2011)",
"ref_id": "BIBREF24"
},
{
"start": 349,
"end": 365,
"text": "(Nicholls, 2003)",
"ref_id": "BIBREF16"
},
{
"start": 403,
"end": 424,
"text": "(Bryant et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2"
},
{
"text": "Evaluation data. We report results on CoNLL-2014 test set (Ng et al., 2014) evaluated by official M2 scorer (Dahlmeier and Ng, 2012) , and on BEA-2019 dev and test sets evaluated by ERRANT (Bryant et al., 2017) .",
"cite_spans": [
{
"start": 108,
"end": 132,
"text": "(Dahlmeier and Ng, 2012)",
"ref_id": "BIBREF3"
},
{
"start": 189,
"end": 210,
"text": "(Bryant et al., 2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2"
},
{
"text": "We developed custom token-level transformations T(x_i) to recover the target text by applying them to the source tokens (x_1 ... x_N). Transformations increase the coverage of grammatical error corrections for a limited output vocabulary size for the most common grammatical errors, such as Spelling, Noun Number, Subject-Verb Agreement and Verb Form (Yuan, 2017, p. 28).",
"cite_spans": [
{
"start": 351,
"end": 370,
"text": "(Yuan, 2017, p. 28)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Token-level transformations",
"sec_num": "3"
},
{
"text": "The edit space which corresponds to our default tag vocabulary size = 5000 consists of 4971 basic transformations (token-independent KEEP, DELETE and 1167 token-dependent APPEND, 3802 REPLACE) and 29 token-independent g-transformations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token-level transformations",
"sec_num": "3"
},
{
"text": "Basic transformations perform the most common token-level edit operations, such as: keep the current token unchanged (tag $KEEP), delete the current token (tag $DELETE), append a new token t_1 next to the current token x_i (tag $APPEND t_1), or replace the current token x_i with another token t_2 (tag $REPLACE t_2). g-transformations perform task-specific operations such as: change the case of the current token (CASE tags), merge the current token and the next token into a single one (MERGE tags), and split the current token into two new tokens (SPLIT tags). Moreover, tags from NOUN NUMBER and VERB FORM transformations encode grammatical properties for tokens. For instance, these transformations include conversion of singular nouns to plurals and vice versa, or even change the form of regular/irregular verbs to express a different number or tense.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token-level transformations",
"sec_num": "3"
},
{
"text": "To obtain the transformation suffix for the VERB FORM tag, we use the verb conjugation dictionary 8 . For convenience, it was converted into the following format: token_0 token_1 : tag_0 tag_1 (e.g., go goes : VB VBZ). This means that there is a transition from word_0 and word_1 to the respective tags. The transition is unidirectional, so if there exists a reverse transition, it is presented separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token-level transformations",
"sec_num": "3"
},
{
"text": "The experimental comparison of covering capabilities for our token-level transformations is in Table 2 . All transformation types with examples are listed in Appendix, Table 9 .",
"cite_spans": [],
"ref_spans": [
{
"start": 95,
"end": 102,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 168,
"end": 175,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Token-level transformations",
"sec_num": "3"
},
{
"text": "Preprocessing. To approach the task as a sequence tagging problem we need to convert each target sentence from training/evaluation sets into a sequence of tags where each tag is mapped to a single source token. Below is a brief description of our 3-step preprocessing algorithm for color-coded sentence pair from Table 3: Step 1). Map each token from the source sentence to a subsequence of tokens from the target sentence. For this purpose, we first detect the minimal spans of tokens which define differences between the source tokens (x_1 ... x_N) and the target tokens (y_1 ... y_M). Thus, such a span is a pair of selected source tokens and corresponding target tokens. We cannot use these span-based alignments directly, because we need to get tags on the token level. Therefore, for each source token x_i, 1 \u2264 i \u2264 N, we search for the best-fitting subsequence \u03a5_i = (y_j1 ... y_j2), 1 \u2264 j1 \u2264 j2 \u2264 M of target tokens by minimizing the modified Levenshtein distance (which treats a successful g-transformation as zero distance).",
"cite_spans": [],
"ref_spans": [
{
"start": 313,
"end": 321,
"text": "Table 3:",
"ref_id": null
}
],
"eq_spans": [],
"section": "Token-level transformations",
"sec_num": "3"
},
{
"text": "Step 2). For each mapping in the list, find token-level transformations which convert the source token to the target subsequence: Step 3). Leave only one transformation for each source token:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token-level transformations",
"sec_num": "3"
},
{
"text": "A \u21d4 $KEEP, ten \u21d4 $MERGE HYPHEN, years \u21d4 $NOUN NUMBER SINGULAR, old \u21d4 $KEEP, go \u21d4 $VERB FORM VB VBZ, school \u21d4 $APPEND {.}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token-level transformations",
"sec_num": "3"
},
{
"text": "The iterative sequence tagging approach adds a constraint because we can use only a single tag for each token. In the case of multiple transformations, we take the first transformation that is not a $KEEP tag. For more details, please see the preprocessing script in our repository 9 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token-level transformations",
"sec_num": "3"
},
{
"text": "Our GEC sequence tagging model is an encoder made up of a pretrained BERT-like transformer",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tagging model architecture",
"sec_num": "4"
},
{
"text": "Table 3: Example of iterative correction process where the GEC tagging system is sequentially applied at each iteration. The cumulative number of corrections is given for each iteration. Orig. sent: A ten years old boy go school (-). Iteration 1: A ten-years old boy goes school (2). Iteration 2: A ten-year-old boy goes to school (5). Iteration 3: A ten-year-old boy goes to school. (6).",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tagging model architecture",
"sec_num": "4"
},
{
"text": "stacked with two linear layers with softmax layers on the top. We always use cased pretrained transformers in their Base configurations. Tokenization depends on the particular transformer's design: BPE (Sennrich et al., 2016b) is used in RoBERTa, WordPiece (Schuster and Nakajima, 2012) in BERT and SentencePiece (Kudo and Richardson, 2018) in XLNet. To process the information at the token level, we take the first subword per token from the encoder's representation, which is then forwarded to subsequent linear layers, which are responsible for error detection and error tagging, respectively.",
"cite_spans": [
{
"start": 202,
"end": 226,
"text": "(Sennrich et al., 2016b)",
"ref_id": "BIBREF20"
},
{
"start": 257,
"end": 286,
"text": "(Schuster and Nakajima, 2012)",
"ref_id": "BIBREF18"
},
{
"start": 313,
"end": 340,
"text": "(Kudo and Richardson, 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tagging model architecture",
"sec_num": "4"
},
{
"text": "To correct the text, for each input token x_i, 1 \u2264 i \u2264 N, from the source sequence (x_1 ... x_N), we predict the tag-encoded token-level transformation T(x_i) described in Section 3. These predicted tag-encoded transformations are then applied to the sentence to get the modified sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Iterative sequence tagging approach",
"sec_num": "5"
},
{
"text": "Since some corrections in a sentence may depend on others, applying the GEC sequence tagger only once may not be enough to fully correct the sentence. Therefore, we use the iterative correction approach from (Awasthi et al., 2019) : we use the GEC sequence tagger to tag the now modified sequence, and apply the corresponding transformations on the new tags, which changes the sentence further (see an example in Table 3 ). Usually, the number of corrections decreases with each successive iteration, and most of the corrections are done during the first two iterations (Table 4) . Limiting the number of iterations speeds up the overall pipeline while trading off qualitative performance.",
"cite_spans": [
{
"start": 208,
"end": 230,
"text": "(Awasthi et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 413,
"end": 420,
"text": "Table 3",
"ref_id": null
},
{
"start": 570,
"end": 579,
"text": "(Table 4)",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Iterative sequence tagging approach",
"sec_num": "5"
},
{
"text": "Training stages. We have 3 training stages (details of data usage are in Table 1): I Pre-training on synthetic errorful sentences as in (Awasthi et al., 2019) .",
"cite_spans": [
{
"start": 136,
"end": 158,
"text": "(Awasthi et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 73,
"end": 80,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "II Fine-tuning on errorful-only sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "III Fine-tuning on a subset of errorful and error-free sentences as in (Kiyono et al., 2019) .",
"cite_spans": [
{
"start": 71,
"end": 92,
"text": "(Kiyono et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "We found that having two fine-tuning stages with and without error-free sentences is crucial for performance (Table 5 ). All our models were trained with the Adam optimizer (Kingma and Ba, 2015) with default hyperparameters. Early stopping was used; the stopping criterion was 3 epochs of 10K updates each without improvement. We set batch size=256 for pre-training stage I (20 epochs) and batch size=128 for fine-tuning stages II and III (2-3 epochs each). We also observed that freezing the encoder's weights for the first 2 epochs on training stages I-II and using a batch size greater than 64 improves the convergence and leads to better GEC performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 109,
"end": 117,
"text": "(Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "Encoders from pretrained transformers. We fine-tuned BERT (Devlin et al., 2019) , RoBERTa , GPT-2 (Radford et al., 2019) , XLNet (Yang et al., 2019) , and ALBERT (Lan et al., 2019) with the same hyperparameters setup. We also added LSTM with randomly initialized embeddings (dim = 300) as a baseline. As follows from Table 6 , encoders from fine-tuned Transformers significantly outperform LSTMs. BERT, RoBERTa and XLNet encoders perform better than GPT-2 and ALBERT, so we used them only in our next experiments. All models were trained out-of-the-box 10 which seems to not work well for GPT-2. We hypothesize that encoders from Transformers which were pretrained as a part of the entire encoder-decoder pipeline are less useful for GECToR. Tweaking the inference. We forced the model to perform more precise corrections by introducing two inference hyperparameters (see Appendix, Table 11) ; their values were found by random search on BEA-dev.",
"cite_spans": [
{
"start": 58,
"end": 79,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 92,
"end": 120,
"text": "GPT-2 (Radford et al., 2019)",
"ref_id": null
},
{
"start": 129,
"end": 148,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 162,
"end": 180,
"text": "(Lan et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 317,
"end": 324,
"text": "Table 6",
"ref_id": "TABREF8"
},
{
"start": 882,
"end": 891,
"text": "Table 11)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "First, we added a permanent positive confidence bias to the probability of the $KEEP tag, which is responsible for not changing the source token. Second, we added a sentence-level minimum error probability threshold for the output of the error detection layer. This increased precision by trading off recall and achieved better F0.5 scores (Table 5) .",
"cite_spans": [],
"ref_spans": [
{
"start": 340,
"end": 349,
"text": "(Table 5)",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "Finally, our best single model, GECToR (XLNet), achieves F0.5 = 65.3 on CoNLL-2014 (test) and F0.5 = 72.4 on BEA-2019 (test). Our best ensemble, GECToR (BERT + RoBERTa + XLNet), where we simply average the output probabilities of the 3 single models, achieves F0.5 = 66.5 on CoNLL-2014 (test) and F0.5 = 73.6 on BEA-2019 (test) (Table 7).",
"cite_spans": [],
"ref_spans": [
{
"start": 328,
"end": 337,
"text": "(Table 7)",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "Speed comparison. We measured the model's average inference time on NVIDIA Tesla V100 on batch size 128. For sequence tagging we don't need to predict corrections one-by-one as in autoregressive transformer decoders, so inference is naturally parallelizable and therefore runs many times faster. Our sequence tagger's inference speed is up to 10 times as fast as the state-of-the-art Transformer from Zhao et al. (2019) , beam size=12 (Table 8) .",
"cite_spans": [
{
"start": 401,
"end": 419,
"text": "Zhao et al. (2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 435,
"end": 444,
"text": "(Table 8)",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "Flattened excerpt of the GEC system comparison table (P, R, F0.5 on CoNLL-2014 test and on BEA-2019 test): Zhao et al. (2019): 67.7, 40.6, 59.8 / -, -, -; Awasthi et al. (2019): 66.1, 43.0, 59.7 / -, -, -; Kiyono et al. (2019): 67.9, 44.1, 61.3 / 65.5, 59.4, 64.2; Zhao et al. (2019): 74",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GEC system",
"sec_num": null
},
{
"text": "We show that a faster, simpler, and more efficient GEC system can be developed using a sequence tagging approach, an encoder from a pretrained Transformer, custom transformations, and 3-stage training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Our best single-model/ensemble GEC tagger achieves an F0.5 of 65.3/66.5 on CoNLL-2014 (test) and F0.5 of 72.4/73.6 on BEA-2019 (test). We achieve state-of-the-art results for the GEC task with an inference speed up to 10 times as fast as Transformer-based seq2seq systems. Table 9: List of token-level transformations (section 3). We denote a tag which defines a token-level transformation as a concatenation of two parts: a core transformation and a transformation suffix. Table 10: Performance of GECToR (RoBERTa) after each training stage and inference tweaks. Results are given in addition to results for our best single model, GECToR (XLNet), which are given in Table 5.",
"cite_spans": [],
"ref_spans": [
{
"start": 273,
"end": 280,
"text": "Table 9",
"ref_id": null
},
{
"start": 667,
"end": 674,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Table 11: Inference tweaking values which were found by random search on BEA-dev. Confidence bias / minimum error probability: GECToR (BERT): 0.10 / 0.41; GECToR (RoBERTa): 0.20 / 0.50; GECToR (XLNet): 0.35 / 0.66; GECToR (RoBERTa + XLNet): 0.24 / 0.45; GECToR (BERT + RoBERTa + XLNet): 0.16 / 0.40.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 8,
"text": "Table 11",
"ref_id": null
}
],
"eq_spans": [],
"section": "System name",
"sec_num": null
},
{
"text": "3 https://github.com/awasthiabhijeet/PIE/tree/master/errorify 4 https://www.comp.nus.edu.sg/~nlp/corpora.html 5 https://sites.google.com/site/naistlang8corpora 6 https://ilexir.co.uk/datasets/index.html 7 https://www.cl.cam.ac.uk/research/nl/bea2019st/data/wi+locness_v2.1.bea19.tar.gz",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "8 https://github.com/gutfeeling/word_forms/blob/master/word_forms/en-verbs.txt",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "10 https://huggingface.co/transformers/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was supported by Grammarly. We thank our colleagues Vipul Raheja, Oleksiy Syvokon, Andrey Gryshchuk and our ex-colleague Maria Nadejde who provided insight and expertise that greatly helped to make this paper better. We would also like to show our gratitude to Abhijeet Awasthi and Roman Grundkiewicz for their support in providing data and answering related questions. We also thank 3 anonymous reviewers for their contribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Parallel iterative edit models for local sequence transduction",
"authors": [
{
"first": "Abhijeet",
"middle": [],
"last": "Awasthi",
"suffix": ""
},
{
"first": "Sunita",
"middle": [],
"last": "Sarawagi",
"suffix": ""
},
{
"first": "Rasna",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Sabyasachi",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Vihari",
"middle": [],
"last": "Piratla",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4260--4270",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1435"
]
},
"num": null,
"urls": [],
"raw_text": "Abhijeet Awasthi, Sunita Sarawagi, Rasna Goyal, Sabyasachi Ghosh, and Vihari Piratla. 2019. Parallel iterative edit models for local sequence transduction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 4260- 4270, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The BEA-2019 shared task on grammatical error correction",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Bryant",
"suffix": ""
},
{
"first": "Mariano",
"middle": [],
"last": "Felice",
"suffix": ""
},
{
"first": "\u00d8istein",
"middle": [
"E."
],
"last": "Andersen",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "52--75",
"other_ids": {
"DOI": [
"10.18653/v1/W19-4406"
]
},
"num": null,
"urls": [],
"raw_text": "Christopher Bryant, Mariano Felice, \u00d8istein E. An- dersen, and Ted Briscoe. 2019. The BEA-2019 shared task on grammatical error correction. In Pro- ceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 52-75, Florence, Italy. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic annotation and evaluation of error types for grammatical error correction",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Bryant",
"suffix": ""
},
{
"first": "Mariano",
"middle": [],
"last": "Felice",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "793--805",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1074"
]
},
"num": null,
"urls": [],
"raw_text": "Christopher Bryant, Mariano Felice, and Ted Briscoe. 2017. Automatic annotation and evaluation of error types for grammatical error correction. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 793-805, Vancouver, Canada. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Better evaluation for grammatical error correction",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Dahlmeier",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "568--572",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Dahlmeier and Hwee Tou Ng. 2012. Better evaluation for grammatical error correction. In Pro- ceedings of the 2012 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 568-572. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Building a large annotated corpus of learner English: the NUS corpus of learner English",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Dahlmeier",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Siew Mei",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the eighth workshop on innovative use of NLP for building educational applications",
"volume": "",
"issue": "",
"pages": "22--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a large annotated corpus of learner english: The nus corpus of learner english. In Pro- ceedings of the eighth workshop on innovative use of NLP for building educational applications, pages 22-31.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Neural grammatical error correction systems with unsupervised pre-training on synthetic data",
"authors": [
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "252--263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roman Grundkiewicz, Marcin Junczys-Dowmunt, and Kenneth Heafield. 2019. Neural grammatical error correction systems with unsupervised pre-training on synthetic data. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Ed- ucational Applications, pages 252-263.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning to combine grammatical error corrections",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Kantor",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Katz",
"suffix": ""
},
{
"first": "Leshem",
"middle": [],
"last": "Choshen",
"suffix": ""
},
{
"first": "Edo",
"middle": [],
"last": "Cohen-Karlik",
"suffix": ""
},
{
"first": "Naftali",
"middle": [],
"last": "Liberman",
"suffix": ""
},
{
"first": "Assaf",
"middle": [],
"last": "Toledo",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Menczel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Slonim",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "139--148",
"other_ids": {
"DOI": [
"10.18653/v1/W19-4414"
]
},
"num": null,
"urls": [],
"raw_text": "Yoav Kantor, Yoav Katz, Leshem Choshen, Edo Cohen- Karlik, Naftali Liberman, Assaf Toledo, Amir Menczel, and Noam Slonim. 2019. Learning to com- bine grammatical error corrections. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 139-148, Florence, Italy. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Adam (2014), a method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 3rd International Conference on Learning Representations (ICLR), arXiv preprint arXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2015. Adam (2014), a method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), arXiv preprint arXiv, volume 1412.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "An empirical study of incorporating pseudo data into grammatical error correction",
"authors": [
{
"first": "Shun",
"middle": [],
"last": "Kiyono",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Masato",
"middle": [],
"last": "Mita",
"suffix": ""
},
{
"first": "Tomoya",
"middle": [],
"last": "Mizumoto",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "1236--1242",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1119"
]
},
"num": null,
"urls": [],
"raw_text": "Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizu- moto, and Kentaro Inui. 2019. An empirical study of incorporating pseudo data into grammatical er- ror correction. In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 1236-1242, Hong Kong, China. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {
"DOI": [
"10.18653/v1/D18-2012"
]
},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Albert: A lite bert for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.11942"
]
},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learn- ing of language representations. arXiv preprint arXiv:1909.11942.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Encode, tag, realize: High-precision text editing",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Malmi",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Krause",
"suffix": ""
},
{
"first": "Sascha",
"middle": [],
"last": "Rothe",
"suffix": ""
},
{
"first": "Daniil",
"middle": [],
"last": "Mirylenka",
"suffix": ""
},
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5054--5065",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1510"
]
},
"num": null,
"urls": [],
"raw_text": "Eric Malmi, Sebastian Krause, Sascha Rothe, Daniil Mirylenka, and Aliaksei Severyn. 2019. Encode, tag, realize: High-precision text editing. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 5054-5065, Hong",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Association for Computational Linguistics",
"authors": [
{
"first": "China",
"middle": [],
"last": "Kong",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kong, China. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The conll-2014 shared task on grammatical error correction",
"authors": [
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Siew Mei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Hadiwinoto",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"Hendy"
],
"last": "Susanto",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Bryant",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christo- pher Bryant. 2014. The conll-2014 shared task on grammatical error correction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pages 1-14.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The cambridge learner corpus: Error coding and analysis for lexicography and elt",
"authors": [
{
"first": "Diane",
"middle": [],
"last": "Nicholls",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Corpus Linguistics",
"volume": "16",
"issue": "",
"pages": "572--581",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diane Nicholls. 2003. The cambridge learner corpus: Error coding and analysis for lexicography and elt. In Proceedings of the Corpus Linguistics 2003 con- ference, volume 16, pages 572-581.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI Blog",
"volume": "1",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Japanese and korean voice search",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Kaisuke",
"middle": [],
"last": "Nakajima",
"suffix": ""
}
],
"year": 2012,
"venue": "2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "5149--5152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Schuster and Kaisuke Nakajima. 2012. Japanese and korean voice search. In 2012 IEEE Interna- tional Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5149-5152. IEEE.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Edinburgh neural machine translation systems for WMT 16",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "371--376",
"other_ids": {
"DOI": [
"10.18653/v1/W16-2323"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Edinburgh neural machine translation sys- tems for WMT 16. In Proceedings of the First Con- ference on Machine Translation: Volume 2, Shared Task Papers, pages 371-376, Berlin, Germany. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Tense and aspect error correction for esl learners using global context",
"authors": [
{
"first": "Toshikazu",
"middle": [],
"last": "Tajiri",
"suffix": ""
},
{
"first": "Mamoru",
"middle": [],
"last": "Komachi",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "198--202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toshikazu Tajiri, Mamoru Komachi, and Yuji Mat- sumoto. 2012. Tense and aspect error correction for esl learners using global context. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers-Volume 2, pages 198-202. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Russ",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5754--5764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural in- formation processing systems, pages 5754-5764.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A new dataset and method for automatically grading esol texts",
"authors": [
{
"first": "Helen",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Medlock",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "180--189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading esol texts. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 180-189. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Grammatical error correction in non-native english",
"authors": [
{
"first": "Zheng",
"middle": [],
"last": "Yuan",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zheng Yuan. 2017. Grammatical error correction in non-native english. Technical report, University of Cambridge, Computer Laboratory.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Improving grammatical error correction via pre-training a copy-augmented architecture with unlabeled data",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kewei",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Ruoyu",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Jingming",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "156--165",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1014"
]
},
"num": null,
"urls": [],
"raw_text": "Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, and Jingming Liu. 2019. Improving grammatical error correction via pre-training a copy-augmented archi- tecture with unlabeled data. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long and Short Papers), pages 156-165, Minneapolis, Min- nesota. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "[A \u2192 A]: $KEEP, [ten \u2192 ten, -]: $KEEP, $MERGE HYPHEN, [years \u2192 year, -]: $NOUN NUMBER SINGULAR, $MERGE HYPHEN], [old \u2192 old]: $KEEP, [go \u2192 goes, to]: $VERB FORM VB VBZ, $AP-PEND to, [school \u2192 school, .]: $KEEP, $AP-PEND {.}].",
"uris": null
},
"TABREF1": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>: Training datasets. Training stage I is pretrain-</td></tr><tr><td>ing on synthetic data. Training stages II and III are for</td></tr><tr><td>fine-tuning.</td></tr></table>",
"text": "",
"html": null
},
"TABREF2": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>Tag</td><td colspan=\"2\">Transformations</td></tr><tr><td>vocab. size</td><td colspan=\"2\">Basic transf. All transf.</td></tr><tr><td>100</td><td>60.4%</td><td>79.7%</td></tr><tr><td>1000</td><td>76.4%</td><td>92.9%</td></tr><tr><td>5000</td><td>89.5%</td><td>98.1%</td></tr><tr><td>10000</td><td>93.5%</td><td>100.0%</td></tr></table>",
"text": "A \u2192 A], [ten \u2192 ten, -], [years \u2192 year, -], [old \u2192 old], [go \u2192 goes, to], [school \u2192 school, .].",
"html": null
},
"TABREF3": {
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "Share of covered grammatical errors in CoNLL-2014 for basic transformations only (KEEP, DELETE, APPEND, REPLACE) and for all transformations w.r.t. tag vocabulary's size. In our work, we set the default tag vocabulary size = 5000 as a heuristical compromise between coverage and model size.",
"html": null
},
"TABREF5": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>Training</td><td colspan=\"3\">CoNLL-2014 (test)</td><td colspan=\"3\">BEA-2019 (dev)</td></tr><tr><td>stage #</td><td>P</td><td>R</td><td>F0.5</td><td>P</td><td>R</td><td>F0.5</td></tr><tr><td>Stage I.</td><td colspan=\"6\">55.4 35.9 49.9 37.0 23.6 33.2</td></tr><tr><td>Stage II.</td><td colspan=\"6\">64.4 46.3 59.7 46.4 37.9 44.4</td></tr><tr><td>Stage III.</td><td colspan=\"6\">66.7 49.9 62.5 52.6 43.0 50.3</td></tr><tr><td colspan=\"7\">Inf. tweaks 77.5 40.2 65.3 66.0 33.8 55.5</td></tr></table>",
"text": "Cumulative number of corrections and corresponding scores on CoNLL-2014 (test) w.r.t. number of iterations for our best single model.",
"html": null
},
"TABREF6": {
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "Performance of GECToR (XLNet) after each training stage and inference tweaks.",
"html": null
},
"TABREF8": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>: Varying encoders from pretrained Transform-</td></tr><tr><td>ers in our sequence labeling system. Training was done</td></tr><tr><td>on data from training stage II only.</td></tr></table>",
"text": "",
"html": null
},
"TABREF10": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>GEC system</td><td>Time (sec)</td></tr><tr><td>Transformer-NMT, beam size = 12</td><td>4.35</td></tr><tr><td>Transformer-NMT, beam size = 4</td><td>1.25</td></tr><tr><td>Transformer-NMT, beam size = 1</td><td>0.71</td></tr><tr><td>GECToR (XLNet), 5 iterations</td><td>0.40</td></tr><tr><td>GECToR (XLNet), 1 iteration</td><td>0.20</td></tr></table>",
"text": "Comparison of single models and ensembles. The M 2 score for CoNLL-2014 (test) and ERRANT for the BEA-2019 (test) are reported. In ensembles we simply average output probabilities from single models.",
"html": null
},
"TABREF11": {
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "Inference time for NVIDIA Tesla V100 on CoNLL-2014 (test), single model, batch size=128.",
"html": null
},
"TABREF12": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>Training</td><td colspan=\"3\">CoNLL-2014 (test)</td><td colspan=\"3\">BEA-2019 (dev)</td></tr><tr><td>stage #</td><td>P</td><td>R</td><td>F0.5</td><td>P</td><td>R</td><td>F0.5</td></tr><tr><td>Stage I.</td><td colspan=\"6\">57.8 33.0 50.2 40.8 22.1 34.9</td></tr><tr><td>Stage II.</td><td colspan=\"6\">68.1 42.6 60.8 51.6 33.8 46.7</td></tr><tr><td>Stage III.</td><td>68.8</td><td/><td/><td/><td/><td/></tr></table>",
"text": "47.1 63.0 54.2 41.0 50.9 Inf. tweaks 73.9 41.5 64.0 62.3 35.1 54.0",
"html": null
},
"TABREF13": {
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "",
"html": null
}
}
}
}