ACL-OCL / Base_JSON /prefixB /json /bea /2021.bea-1.15.json
{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:09:47.388343Z"
},
"title": "Assessing Grammatical Correctness in Language Learning",
"authors": [
{
"first": "Anisia",
"middle": [],
"last": "Katinskaia",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Roman",
"middle": [],
"last": "Yangarber",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present experiments on assessing the grammatical correctness of learner answers in the Revita language-learning platform. 1 In particular, we explore the problem of detecting alternative-correct answers: when more than one inflected form of a lemma fits syntactically and semantically in a given context. This problem was formulated as Multiple Admissibility (MA) in (Katinskaia et al., 2019). We approach the problem from the perspective of grammatical error detection (GED), since we hypothesize that models for detecting grammatical mistakes can assess the correctness of potential alternative answers in the language-learning setting. Due to the paucity of training data, we explore the ability of pre-trained BERT to detect grammatical errors and then fine-tune it using synthetic training data. In this work, we focus on errors in inflection. Our experiments A. show that pre-trained BERT performs worse at detecting grammatical irregularities for Russian than for English; B. show that fine-tuned BERT yields promising results on assessing correctness in grammatical exercises; and C. establish new GED benchmarks for Russian. To further investigate its performance, we compare fine-tuned BERT with one a state-of-theart model for GED (Bell et al., 2019) on our dataset, and on RULEC-GEC (Rozovskaya and Roth, 2019). We release our manually annotated learner dataset, used for testing, for general use.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We present experiments on assessing the grammatical correctness of learner answers in the Revita language-learning platform. 1 In particular, we explore the problem of detecting alternative-correct answers: when more than one inflected form of a lemma fits syntactically and semantically in a given context. This problem was formulated as Multiple Admissibility (MA) in (Katinskaia et al., 2019). We approach the problem from the perspective of grammatical error detection (GED), since we hypothesize that models for detecting grammatical mistakes can assess the correctness of potential alternative answers in the language-learning setting. Due to the paucity of training data, we explore the ability of pre-trained BERT to detect grammatical errors and then fine-tune it using synthetic training data. In this work, we focus on errors in inflection. Our experiments A. show that pre-trained BERT performs worse at detecting grammatical irregularities for Russian than for English; B. show that fine-tuned BERT yields promising results on assessing correctness in grammatical exercises; and C. establish new GED benchmarks for Russian. To further investigate its performance, we compare fine-tuned BERT with one a state-of-theart model for GED (Bell et al., 2019) on our dataset, and on RULEC-GEC (Rozovskaya and Roth, 2019). We release our manually annotated learner dataset, used for testing, for general use.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Many intelligent tutoring systems (ITS) and computer-aided language learning systems (CALL) generate exercises and try to assess the learner's answers automatically. Providing feedback to the learner is difficult, due to the critical requirement of very high precision-providing incorrect feedback is much more harmful than no feedback at 1 revita.cs.helsinki.fi all. For this reason, most existing systems have prefabricated sets of exercises, with possible expected answers and prepared feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Revita is an online L2 learning system for learners beyond the beginner level, which can be used in the classroom and for self-study (Katinskaia et al., 2017; . It covers several languages, most of which are highly inflectional, with rich morphology. In contrast to the pre-fabricated approach, Revita allows the learner to upload arbitrary texts to be used as learning content, and automatically creates exercises based on the chosen content. At practice time, Revita presents the text one paragraph at a time with some words hidden and used as fill-inthe-blank (cloze) exercises. For each hidden word, Revita provides a hint-the base form (lemma) of the word. The learner should insert the inflected form of the lemma, given the context.",
"cite_spans": [
{
"start": 133,
"end": 158,
"text": "(Katinskaia et al., 2017;",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Continuous assessment of the user's answers is also performed automatically (Hou et al., 2019) . Revita checks the learner's answer by comparing it with the expected answer-the one found in the original text. The problem arises when, for some exercise, besides the expected answer, another answer is also valid in the context. As a result, Revita may provide undesirable feedback by flagging answers that are not expected, but nonetheless correct, as \"errors\"-this can strongly mislead, confuse and discourage the learner. For example, both highlighted answers in the example below can be considered correct, but Revita expects the learner to use only the past tense form \"\u0441\u0434\u0430\u0432\u0430\u043b\" (\"took\"): \"\u041c\u043d\u0435 \u043f\u0440\u0438\u0441\u043d\u0438\u043b\u043e\u0441\u044c, \u043a\u0430\u043a \u044f \u0441\u0434\u0430\u0432\u0430\u043b \u044d\u043a\u0437\u0430\u043c\u0435\u043d\u044b.\" (\"I saw a dream how I took exams.\") \"\u041c\u043d\u0435 \u043f\u0440\u0438\u0441\u043d\u0438\u043b\u043e\u0441\u044c, \u043a\u0430\u043a \u044f \u0441\u0434\u0430\u044e \u044d\u043a\u0437\u0430\u043c\u0435\u043d\u044b.\" (\"I saw a dream how I take exams.\")",
"cite_spans": [
{
"start": 76,
"end": 94,
"text": "(Hou et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Hence, detecting alternative-correct answers is essential in the learning context. We manually checked a large set of answers, which were marked Learner level Advanced Others Gram. error 649 (62.2%) 4777 (72.8%) Alternative corr. 395 (37.8%) 1024 (15.6%) Table 1 : Percentage of answers with real grammatical errors and alternative-correct answers for advanced and other learners among all answers which were automatically marked by Revita as incorrect.",
"cite_spans": [],
"ref_spans": [
{
"start": 255,
"end": 262,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "by Revita as \"erroneous\" and discovered that the percentage of alternative-correct answers for advanced learners is more than double the percentage for other learners (see Table 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 172,
"end": 179,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address this problem, we build a model, which takes a paragraph with learner answers and decides whether they are grammatically correct. If the model is not certain about a user's answer, we can fall back on the \"default\" method-comparing to the expected answer. For evaluation, we created a dataset of paragraphs containing answers given by real learners, manually annotated for acceptability in their context. 2 Additionally, we do not focus on semantics, due to the current setup of the exercisesif a learner inserts an answer with a lemma that is different from the given hint, that is always considered erroneous. Learners may give such answers, but they are easily identified and are not considered and are not annotated in this set of experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is organized as follows. In section 2, we review prior work on GED, with a focus on limited training data. In section 3, we describe the learner corpora collected by Revita and used for evaluation and propose a novel method for generating data with simulated grammatical errors. In section 4, we first experiment with a pre-trained BERT as a masked language model (MLM). We fine-tune the pre-trained BERT on synthetic data and measure its ability to assess grammatical correctness on the learner data. In section 5, we discuss the results of the experiments. In section 6, we summarize our contribution and discuss future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Early experiments with GED utilized rules (Foster and Vogel, 2004) and supervised learning from error-annotated corpora (Chodorow et al., 2007) . Much work focused on detection of particular types of errors, e.g., verb forms (Lee and Seneff, 2008) .",
"cite_spans": [
{
"start": 42,
"end": 66,
"text": "(Foster and Vogel, 2004)",
"ref_id": "BIBREF12"
},
{
"start": 120,
"end": 143,
"text": "(Chodorow et al., 2007)",
"ref_id": "BIBREF7"
},
{
"start": 225,
"end": 247,
"text": "(Lee and Seneff, 2008)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Later work is mostly independent of the type of errors, and explores various neural architectures. Rei and Yannakoudakis (2016) first approach error detection by bi-LSTM models, which achieved strong results on in-domain data. added character-level embeddings to capture morphological similarities between words. Rei (2017) experiment with using a secondary language modeling (LM) objective. Rei and Yannakoudakis (2017) perform experiments with adding multiple auxiliary tasks for error detection. The best result was achieved by combining the main error detection task with predicting error types, POS tags, and types of grammatical relations. In subsequent experiments, the architecture was modified for jointly learning to label tokens and sentences (Rei and S\u00f8gaard, 2019) . Bell et al. (2019) extended the above model by incorporating contextualized word embeddings produced by BERT, ELMO, and Flair (Peters et al., 2017; Devlin et al., 2018; Akbik et al., 2018) . BERT embeddings produced the best performance across all test sets.",
"cite_spans": [
{
"start": 99,
"end": 127,
"text": "Rei and Yannakoudakis (2016)",
"ref_id": "BIBREF50"
},
{
"start": 313,
"end": 323,
"text": "Rei (2017)",
"ref_id": "BIBREF46"
},
{
"start": 392,
"end": 420,
"text": "Rei and Yannakoudakis (2017)",
"ref_id": "BIBREF51"
},
{
"start": 754,
"end": 777,
"text": "(Rei and S\u00f8gaard, 2019)",
"ref_id": "BIBREF49"
},
{
"start": 780,
"end": 798,
"text": "Bell et al. (2019)",
"ref_id": "BIBREF4"
},
{
"start": 884,
"end": 927,
"text": "BERT, ELMO, and Flair (Peters et al., 2017;",
"ref_id": null
},
{
"start": 928,
"end": 948,
"text": "Devlin et al., 2018;",
"ref_id": "BIBREF10"
},
{
"start": 949,
"end": 968,
"text": "Akbik et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Previous work on GED mostly uses bi-LSTM as a classification model, combined with various approaches for augmenting the training data (Liu and Liu, 2017; Kasewa et al., 2018) , or creating new, grammatically-specific word embeddings (Kaneko et al., 2017) . More recent work utilizes transformer models (Kaneko and Komachi, 2019; Kaneko et al., 2020; Li et al., 2020; Chen et al., 2020) .",
"cite_spans": [
{
"start": 134,
"end": 153,
"text": "(Liu and Liu, 2017;",
"ref_id": "BIBREF39"
},
{
"start": 154,
"end": 174,
"text": "Kasewa et al., 2018)",
"ref_id": "BIBREF25"
},
{
"start": 233,
"end": 254,
"text": "(Kaneko et al., 2017)",
"ref_id": "BIBREF23"
},
{
"start": 302,
"end": 328,
"text": "(Kaneko and Komachi, 2019;",
"ref_id": "BIBREF21"
},
{
"start": 329,
"end": 349,
"text": "Kaneko et al., 2020;",
"ref_id": "BIBREF22"
},
{
"start": 350,
"end": 366,
"text": "Li et al., 2020;",
"ref_id": "BIBREF36"
},
{
"start": 367,
"end": 385,
"text": "Chen et al., 2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Several works on GEC focus on lower-resource languages, including Russian, using the RULEC-GEC dataset for training or fine-tuning (Rozovskaya and Roth, 2019; N\u00e1plava and Straka, 2019; Katsumata and Komachi, 2020) . N\u00e1plava and Straka (2019) outperformed results of Rozovskaya and Roth (2019) by over 100% on F 0.5 , but still showed poor performance compared with other languages in the experiment. GEC for Russian is demonstrated to be the most challenging task, which is explained in part by the small size of RULEC-GEC.",
"cite_spans": [
{
"start": 131,
"end": 158,
"text": "(Rozovskaya and Roth, 2019;",
"ref_id": "BIBREF53"
},
{
"start": 159,
"end": 184,
"text": "N\u00e1plava and Straka, 2019;",
"ref_id": "BIBREF41"
},
{
"start": 185,
"end": 213,
"text": "Katsumata and Komachi, 2020)",
"ref_id": "BIBREF31"
},
{
"start": 216,
"end": 241,
"text": "N\u00e1plava and Straka (2019)",
"ref_id": "BIBREF41"
},
{
"start": 266,
"end": 292,
"text": "Rozovskaya and Roth (2019)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The problem of scarce training data for GED can be approached by using pre-trained language models. Linzen et al. (2016) explored the ability of a LSTM model trained without grammatical supervision to detect grammatical errors by performing an unsupervised cloze test. The authors use a dataset of sentence pairs: an error-free original and an erroneous one. The erroneous sentence can be built manually or automatically, and differs from the original by only one word-the target position. They feed complete sentences into the model, collect all predictions for the target position, and compare the scores assigned to the original correct word and the incorrect one, e.g., write vs. writes. Errors should have a lower probability than correct forms. The LM performs much worse than supervised models, especially in case of long syntactic dependencies (Jozefowicz et al., 2016; Marvin and Linzen, 2018; Gulordava et al., 2018) . This work was done on Italian, Hebrew, and Russian. Goldberg (2019) adapted the described evaluation methods and applied them to pre-trained BERT models by masking out the target words. BERT showed high scores on all test cases with subjectverb agreement and reflexive anaphora, except for sentences with relative clauses. The experiments were extended by Wolf (2019) by evaluating the OpenAI Generative Pre-trained Transformer (GPT) of Radford et al. (2018) . BERT outperformed the OpenAI GPT on the datasets from Linzen et al. (2016) and Goudalova et al. 2018, but not on the dataset from Marvin and Linzen (2018) .",
"cite_spans": [
{
"start": 100,
"end": 120,
"text": "Linzen et al. (2016)",
"ref_id": "BIBREF38"
},
{
"start": 852,
"end": 877,
"text": "(Jozefowicz et al., 2016;",
"ref_id": "BIBREF19"
},
{
"start": 878,
"end": 902,
"text": "Marvin and Linzen, 2018;",
"ref_id": "BIBREF40"
},
{
"start": 903,
"end": 926,
"text": "Gulordava et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 1366,
"end": 1387,
"text": "Radford et al. (2018)",
"ref_id": "BIBREF45"
},
{
"start": 1444,
"end": 1464,
"text": "Linzen et al. (2016)",
"ref_id": "BIBREF38"
},
{
"start": 1520,
"end": 1544,
"text": "Marvin and Linzen (2018)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The problem of data scarcity can be addressed by generating artificial training data. Among the existing approaches are oversampling a small learner corpus (Junczys-Dowmunt et al., 2018; Aprosio et al., 2019) , utilizing additional resources, such as Wikipedia edits (Grundkiewicz and Junczys-Dowmunt, 2014; Boyd, 2018) , or introducing natural and synthetic noise into error-free data (Belinkov and Bisk, 2017; Felice and Yuan, 2014) . Natural noise means harvesting naturally occurring errors from the available corpora and creating a look-up table of possible replacements. Using natural noise also tries to imitate the distribution of errors in the available learner corpora. Synthetic noise can be generated by probabilistically injecting characterlevel or word-level noise into the source sentence, as shown in (Lichtarge et al., 2019; Kiyono et al., 2019; Zhao et al., 2019) .",
"cite_spans": [
{
"start": 156,
"end": 186,
"text": "(Junczys-Dowmunt et al., 2018;",
"ref_id": "BIBREF20"
},
{
"start": 187,
"end": 208,
"text": "Aprosio et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 267,
"end": 307,
"text": "(Grundkiewicz and Junczys-Dowmunt, 2014;",
"ref_id": "BIBREF15"
},
{
"start": 308,
"end": 319,
"text": "Boyd, 2018)",
"ref_id": "BIBREF5"
},
{
"start": 386,
"end": 411,
"text": "(Belinkov and Bisk, 2017;",
"ref_id": "BIBREF3"
},
{
"start": 412,
"end": 434,
"text": "Felice and Yuan, 2014)",
"ref_id": "BIBREF11"
},
{
"start": 817,
"end": 841,
"text": "(Lichtarge et al., 2019;",
"ref_id": "BIBREF37"
},
{
"start": 842,
"end": 862,
"text": "Kiyono et al., 2019;",
"ref_id": "BIBREF32"
},
{
"start": 863,
"end": 881,
"text": "Zhao et al., 2019)",
"ref_id": "BIBREF61"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Synthetic error generation based on confusion sets extracted from a spellchecker was used by one of the top-scoring systems at the Restricted and the Low Resource tracks at the BEA-2019 Shared task (Grundkiewicz et al., 2019) . Both tracks suppose limited use of available learner corpora. This method was compared in (White and Rozovskaya, 2020) with another top scoring approach (Choe et al., 2019) which relies on tokenbased and POS-based confusion sets extracted from a small annotated sample of the W&I +LOCNESS dataset (Yannakoudakis et al., 2018) . Extensive evaluation showed that the methods are better suited for correcting different types of errors. In general, the token-and POS-based pattern method demonstrated stronger results.",
"cite_spans": [
{
"start": 198,
"end": 225,
"text": "(Grundkiewicz et al., 2019)",
"ref_id": "BIBREF16"
},
{
"start": 318,
"end": 346,
"text": "(White and Rozovskaya, 2020)",
"ref_id": "BIBREF56"
},
{
"start": 381,
"end": 400,
"text": "(Choe et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 525,
"end": 553,
"text": "(Yannakoudakis et al., 2018)",
"ref_id": "BIBREF59"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "If enough training data is available, errors can be generated by back-translation from correct data to data with errors (reverse error correction), which can be modified by additional random noise Kasewa et al., 2018; Xie et al., 2018; Kiyono et al., 2019) .",
"cite_spans": [
{
"start": 197,
"end": 217,
"text": "Kasewa et al., 2018;",
"ref_id": "BIBREF25"
},
{
"start": 218,
"end": 235,
"text": "Xie et al., 2018;",
"ref_id": "BIBREF58"
},
{
"start": 236,
"end": 256,
"text": "Kiyono et al., 2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "First, we describe our real learner data. This data was used as the test set for all experiments presented below. Then, we present the method for generating ungrammatical data for training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "While students perform exercises using the Revita language-learning platform, it continuously collects 3 and automatically annotates ReLCo-the longitudinal Revita Learner Corpus (Katinskaia et al., 2020) , where each record includes:",
"cite_spans": [
{
"start": 178,
"end": 203,
"text": "(Katinskaia et al., 2020)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learner Data",
"sec_num": "3.1"
},
{
"text": "\u2022 an authentic learner error in the context; \u2022 unique anonymized internal identifiers (ID) of the learner; \u2022 the type of exercise which was practiced. Revita generates exercises-\"cloze\", multiplechoice, listening, etc.-with hidden words in each paragraph. Learner answers that differ from the expected answers but have the same lemma are automatically flagged as grammatical errors, e.g., \u0435\u043b",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learner Data",
"sec_num": "3.1"
},
{
"text": "Sentences Tokens Errors per sentence Grammatical errors Correct Real 7 869 120 420 1.9 4 704 693 Simulated 6 891 517 106 767 033 1.7 11 510 977 - Table 3 : The real dataset collected from learners, and the simulated dataset. The column \"Correct\" shows learner answers, which were manually labeled as correct by the annotators. \"(he) ate\" in place of \u0435\u0441\u0442 \"(she/he/it) eats\". Our goal is to improve this step-we aim to provide to the learners better feedback on the grammatical correctness of their answers, and in addition to improve the quality of automatic annotation. Exercises include words of various parts of speech (POS); in this work, we focus only on the inflected POSs. A total of 10K of such flagged answers were manually checked, and annotated as correct or incorrect. Annotation was performed by two native speakers, with 91% agreement. Cases where annotators did not agree were resolved by consensus. Answers with spelling errors or with incorrect lemmas are ignored, since we focus only on grammatical errors (see the most frequent types in Table 2 ). We label as \"unsure\" cases when we could not decide whether the answer is correct. There were 194 such answers (2% of the annotated data), and they were not used for evaluation.",
"cite_spans": [],
"ref_spans": [
{
"start": 146,
"end": 153,
"text": "Table 3",
"ref_id": null
},
{
"start": 1055,
"end": 1062,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": null
},
{
"text": "We assume that the context for annotation is one paragraph-earlier text is not used; all following sentences are also ignored (because they are not seen by the learner at the time of practice). 4 It is important to note that we annotate jointly all answerswhich may affect each other-given by the learner in the paragraph at the same time during practice. In total, we have collected 3004 paragraphs, with an average of 2.6 sentences per paragraph. We include the same paragraph in the data multiple times if it had different exercises when it was shown to the learners, or if the same exercises were given, but they were answered differently. Statistics about this dataset of real errors are given in Table 3 .",
"cite_spans": [
{
"start": 194,
"end": 195,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 702,
"end": 709,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": null
},
{
"text": "The manually annotated data is released to the community. 5 It includes the answers to exercises practiced in 2017-2020 by 150 learners, user IDs (anonymized), timestamps, and the corresponding correct sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": null
},
{
"text": "As the source of error-free data, we used the open-source \"Taiga\" Russian corpus (Shavrina and Shapovalova, 2017) , which is arranged into several segments based on genre. We used all news segments and a part of a literary text segment. Details about the data are presented in Table 3 .",
"cite_spans": [
{
"start": 81,
"end": 113,
"text": "(Shavrina and Shapovalova, 2017)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [
{
"start": 277,
"end": 284,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generating Training Data",
"sec_num": "3.2"
},
{
"text": "Keeping in line with the current design of Revita's practice mode-where the learner may not change the word order, nor the number of words in the sentence, nor the lemma of the hidden wordwe generate errors by replacing some of the words by other random forms from their paradigms. During the pre-processing all sentences are parsed by a rule-based shallow parser, which is implemented as a component of Revita. It identifies which words belong to chunks-constructions based on syntactic agreement and government. We use about 30 types of chunks, e.g., Prep+Adj+Noun or Noun+Conj+Noun. 6 A synthetic sentence X is produced from a source sentence X = (x 1 , x i , ..., x n ) with n words by replacing the i-th word x i by a form from the paradigm of x i . The word is replaced, if: it has a valid morphological analysis; it is present in a frequency dictionary, which was computed from the entire \"Taiga\" corpus; and it has an inflected POS. Paradigms are generated by pymorphy2 (Korobov, 2015) . Using the paradigm as a confusion set is similar to the approach in (Yin et al., 2020) .",
"cite_spans": [
{
"start": 586,
"end": 587,
"text": "6",
"ref_id": null
},
{
"start": 978,
"end": 993,
"text": "(Korobov, 2015)",
"ref_id": "BIBREF33"
},
{
"start": 1064,
"end": 1082,
"text": "(Yin et al., 2020)",
"ref_id": "BIBREF60"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Training Data",
"sec_num": "3.2"
},
{
"text": "For every x i , we pick a random sample from the uniform distribution. The word x i is replaced, if it does not belong to a chunk and the picked value is above the threshold \u03b8 p = p(error) = 0.1. The word x i is also replaced, if it belongs to a chunk and the picked value is above the threshold \u03b8 p,c = p(error, chunk) = 0.04. The thresholds denote a probability of inserting an error, and their values were chosen to reflect the distributions of errors in chunks and single tokens in the learner data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Training Data",
"sec_num": "3.2"
},
{
"text": "We explore two ways to tackle the problem of scarce data: 1. use a LM in an unsupervised fashion to detect grammatical irregularities; 2. train GED models with supervision on synthetic data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
{
"text": "We evaluate BERT as a masked language model (MLM)-to check how well it can distinguish correct answers from grammatical errors in the annotated learner data by performing an unsupervised cloze test, similar to that described by Goldberg (2019). The pre-trained BERT Base 7 (Kuratov and Arkhipov, 2019) is used for all experiments.",
"cite_spans": [
{
"start": 273,
"end": 301,
"text": "(Kuratov and Arkhipov, 2019)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT as a Masked Language Model",
"sec_num": "4.1"
},
{
"text": "Joint assessment of answers: We need to assess more than one target word jointly, because correctness depends on the joint fills in all exercises in a paragraph. Experiments described by Linzen et al. (2016) and Goldberg (2019) mask only one target word at a time in the original sentence and the sentence with the error. However, as the following example shows, two different sets of answers can suit the same context, as long as they are considered jointly. The words in the brackets are the hints (lemmas), which the user should replace: \"\u042f [\u0438\u0434\u0442\u0438] \u043f\u043e \u0443\u043b\u0438\u0446\u0435 \u0438 [\u0443\u0432\u0438\u0434\u0435\u0442\u044c] \u043f\u0443\u0434\u0435\u043b\u044f.\" (\"I [walk] down the street and [see] a poodle.\") The expected answers may be: \"\u042f \u0438\u0434\u0443 \u043f\u043e \u0443\u043b\u0438\u0446\u0435 \u0438 \u0432\u0438\u0436\u0443 \u043f\u0443\u0434\u0435\u043b\u044f.\" (\"I walk down the street and see a poodle.\") But the learner may provide different answers, which are alternatively correct, if inserted jointly: \"\u042f \u0448\u0451\u043b \u043f\u043e \u0443\u043b\u0438\u0446\u0435 \u0438 \u0443\u0432\u0438\u0434\u0435\u043b \u043f\u0443\u0434\u0435\u043b\u044f.\" (\"I walked down the street and saw a poodle.\")",
"cite_spans": [
{
"start": 187,
"end": 207,
"text": "Linzen et al. (2016)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT as a Masked Language Model",
"sec_num": "4.1"
},
{
"text": "We adapted the approach of (Linzen et al., 2016; Goldberg, 2019) to our setup and applied two masking strategies: 1. mask one target token in a sentence (Table 4, left side) before feeding it to BERT, and 2. mask multiple targets to be predicted jointly (right side). We use WordPiece (Schuster and Nakajima, 2012) to segment tokens for BERT, so some target words missing in the pre-trained model's vocabulary are split into sub-tokens. Because of this, we compared the mean log-probabilities of all of the target's sub-tokens. Acc err denotes accuracy calculated on MLM predictions for only erroneous answers. We also evaluated predictions using different BERT layers.",
"cite_spans": [
{
"start": 27,
"end": 48,
"text": "(Linzen et al., 2016;",
"ref_id": "BIBREF38"
},
{
"start": 49,
"end": 64,
"text": "Goldberg, 2019)",
"ref_id": "BIBREF14"
},
{
"start": 285,
"end": 314,
"text": "(Schuster and Nakajima, 2012)",
"ref_id": "BIBREF54"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT as a Masked Language Model",
"sec_num": "4.1"
},
{
"text": "Alternative-correct answers: The method of Linzen et al. (2016) and Goldberg (2019) is based on comparing the model's probabilities predicted for the original word and for the replacement, with the assumption that the replacement is incorrect. This gives us only the absolute difference in probabilities returned by the LM, which cannot be used to determine whether the learner's answer is also correct in the context. When comparing BERT's predictions for the masked original word and the masked alternative-correct word, we conjecture that the model recognizes an alternative answer as grammatical if its predicted probability is at least as high as the probability of the expected answer. We also applied two masking strategies (one target vs. multiple targets), see accuracy Acc corr in Table 4 .",
"cite_spans": [
{
"start": 43,
"end": 63,
"text": "Linzen et al. (2016)",
"ref_id": "BIBREF38"
},
{
"start": 68,
"end": 83,
"text": "Goldberg (2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 791,
"end": 798,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "BERT as a Masked Language Model",
"sec_num": "4.1"
},
{
"text": "Following prior experiments-which show that fine-tuning BERT for NER (Peters et al., 2019) and error detection (Kaneko et al., 2020) gives better performance than using the contextual representation of words from pre-trained BERT-we also fine-tune the pre-trained model. We modified the Huggingface Pytorch implementation of BERT for token classification and the code for the NER task 8 (Debut et al., 2019) . Hyper-parameters for fine-tuning BERT are the same as for the NER task: maximum number of epochs is 3, maximum input sequence length is 256, dropout rate is 0.1, batch size is 32, Adam optimizer, and the initial learning rate is set to 5E-5. We split the generated dataset into a training set, a development set, and a test set. Real learner data was not used for optimizing hyper-parameters or regularization-only for the final testing. Tokens: To process words, we did not use the only first sub-token per token, as is usually done when fine-tuning BERT for NER, but assigned the error/correct label of the entire token to all of its sub-tokens. We also tried labeling as errors only those sub-tokens that are actually erroneous, but that did not improve performance. This may be due to the segmentation and BERT's deficiency in capturing morphological features. Training sequence: We experimented with using one sentence as the training instance (padded or cut to the maximum input length). However, using a paragraph as input decreases training time and gives better performance (see Table 5 , where s denotes sentence instances and p means paragraph instances). The results were the same with paragraph length from 128 up to 256. Layers: As multiple studies show that syntactic information is most prominent in the middle layers (6-9 for BERT-base), while the final layer is the most task-specific (Rogers et al., 2020; Yin et al., 2020) , we also experimented with middle layers from several models with 12, 8, and 6 layers. 
For the classification task, we use a softmax output layer.",
"cite_spans": [
{
"start": 69,
"end": 90,
"text": "(Peters et al., 2019)",
"ref_id": "BIBREF43"
},
{
"start": 111,
"end": 132,
"text": "(Kaneko et al., 2020)",
"ref_id": "BIBREF22"
},
{
"start": 387,
"end": 407,
"text": "(Debut et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 1813,
"end": 1834,
"text": "(Rogers et al., 2020;",
"ref_id": "BIBREF52"
},
{
"start": 1835,
"end": 1852,
"text": "Yin et al., 2020)",
"ref_id": "BIBREF60"
}
],
"ref_spans": [
{
"start": 1498,
"end": 1505,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Supervised Model Architecture",
"sec_num": "4.2"
},
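The sub-token labeling scheme described above (assigning each word's error/correct label to all of its sub-tokens) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the toy `wordpiece_split` helper merely stands in for BERT's WordPiece tokenizer, and the function names are ours.

```python
def wordpiece_split(word):
    # Stand-in for BERT's WordPiece tokenizer: naive split into 3-char pieces,
    # with the "##" continuation prefix WordPiece uses for non-initial pieces.
    pieces = [word[i:i + 3] for i in range(0, len(word), 3)]
    return [pieces[0]] + ["##" + p for p in pieces[1:]]

def propagate_labels(words, word_labels):
    """Expand word-level error/correct labels to all sub-tokens of each word."""
    sub_tokens, sub_labels = [], []
    for word, label in zip(words, word_labels):
        for piece in wordpiece_split(word):
            sub_tokens.append(piece)
            sub_labels.append(label)  # whole-word label for every piece
    return sub_tokens, sub_labels

toks, labs = propagate_labels(["kirjassa", "on"], ["ERR", "OK"])
print(toks)  # ['kir', '##jas', '##sa', 'on']
print(labs)  # ['ERR', 'ERR', 'ERR', 'OK']
```

At evaluation time the inverse convention applies: a word counts as tagged erroneous if any of its sub-tokens is tagged erroneous.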
{
"text": "Loss: The training data is very skewed-over 90% of tokens are correct words, so negative examples far outnumber the positive ones. This particularly complicates the process of training and evaluation. To handle this, we use the weighted cross-entropy loss, wCE. It is a variant of crossentropy where all classes are given weight coefficients:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Model Architecture",
"sec_num": "4.2"
},
{
"text": "wCE = \u2212 C c=1 w c p c logp c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Model Architecture",
"sec_num": "4.2"
},
{
"text": "where the weight of a class c is calculated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Model Architecture",
"sec_num": "4.2"
},
{
"text": "w c = N CNc ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Model Architecture",
"sec_num": "4.2"
},
{
"text": "where N is the total number of samples in the dataset, C is the number of classes, and N c is the number of samples within the class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Model Architecture",
"sec_num": "4.2"
},
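The weighting scheme can be illustrated with a small sketch (the class counts below are toy numbers mirroring the roughly 90%/10% skew mentioned above, not the paper's actual data):

```python
import math

def class_weights(counts):
    # w_c = N / (C * N_c): rare classes get proportionally larger weights.
    N, C = sum(counts.values()), len(counts)
    return {c: N / (C * n) for c, n in counts.items()}

def weighted_ce(p_true, p_hat, weights):
    # Weighted cross-entropy for one token: p_true is a one-hot distribution
    # over classes, p_hat the model's predicted distribution.
    return -sum(weights[c] * p_true[c] * math.log(p_hat[c]) for c in p_true)

counts = {"correct": 90, "error": 10}   # ~90% of tokens are correct words
w = class_weights(counts)
# w["error"] == 5.0 and w["correct"] ~= 0.56: a mistake on the rare "error"
# class is penalized about 9x more than the same mistake on the majority class.
loss = weighted_ce({"correct": 0, "error": 1},
                   {"correct": 0.8, "error": 0.2}, w)
```

With uniform weights (all 1.0) this reduces to ordinary cross-entropy.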
{
"text": "BERT as MLM: We calculated balanced accuracy scores bAcc for both classes-grammatical errors and alternative-correct answers (see Table 4 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 130,
"end": 137,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "The results for the one-target approach are not strictly comparable with the results in Goldberg (2019) because our data includes grammatical errors in many syntactic relations, not only in subjectverb agreement or reflexive anaphora. However, we can conclude that the pre-trained BERT models capture syntactic-sensitive dependencies markedly worse for Russian than for English, especially if multiple target words are masked. Exploring different layers showed that the lower layers of the pre-trained model are weaker at detecting the errors. Figure 1 presents histograms and kernel density estimation over log-probabilities that BERT as MLM assigns to a. errors, b. correct words sampled randomly from the learner corpus (which were not exercised) and c. alternative-correct answers. These three groups of words clearly have different distributions, but they are not easily separable to assess the learner answers in a reliable fashion.",
"cite_spans": [],
"ref_spans": [
{
"start": 544,
"end": 552,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "Fine-tuned BERT: Table 5 presents the metrics calculated on all words in the test set and only on the target words (with superscript T )-i.e., words which were inserted by the learner: errors or alternative-correct answers. M CC ranges between [\u22121, 1], and is high if the model correctly predicts a high percentage of negative and positive instances (Powers, 2020). We use F 0.5 because it favors precision over recall; this is important for our task, since providing incorrect feedback on learner answers is far more harmful than no feedback at all. M CC and F 0.5 do not completely agree, be- Table 5 : Results of evaluation of fine-tuned BERT models on assessing grammatical correctness: CE-crossentropy loss, wCE-weighted cross-entropy; the numbers denote the number of layers; s-sentence training instance, p-paragraph training instance, t-scores after moving decision thresholds. M CC-Matthews correlation coefficient, P and R-macro-averaged precision and recall, F 0.5 and F 1 -macro F-measures, bAcc-balanced accuracy. Metrics are calculated for all tokens, except where superscript T -calculated only for the target words.",
"cite_spans": [],
"ref_spans": [
{
"start": 17,
"end": 24,
"text": "Table 5",
"ref_id": null
},
{
"start": 595,
"end": 602,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Model M CC P R F 0.5 F 1 P T R T F T 0.5 F T 1 bAcc T",
"eq_num": "CE+12+p"
}
],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "cause F 0.5 does not take into account the true negatives. We report macro-averaged scores, as they reflect how well the model performs for all classes, which is important for our task-assessing the erroneous vs. the alternative-correct answers. Macroaveraging treats all classes as equal, including the minority class-grammatical errors. We calculate the balanced accuracy bAcc T on the target tokens (errors and alternative-correct) for the fine-tuned models for comparison with BERT as MLM. We consider a word to be tagged as an error if at least one of its sub-segments was tagged as an error. The fine-tuned models show better results on evaluating the correctness of learner answers. The best performing models are highlighted. We also report the metrics for the best 4 models after moving the decision thresholds (denoted by t in the model name), chosen based on the highest values of F 0.5 and M CC. The thresholds are shown in Table 6 . All evaluation methods show that training with paragraphs outperforms training with sentence instances. This may be due to the wider context available during training and evaluation.",
"cite_spans": [],
"ref_spans": [
{
"start": 936,
"end": 943,
"text": "Table 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
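The threshold-moving procedure described above (choosing a decision threshold on the error-class probability by the highest F0.5 or MCC on held-out scores) can be sketched as follows. The scores and labels are invented toy data; the metric definitions are the standard ones, not taken from the paper's code.

```python
import math

def confusion(scores, labels, t):
    # Count tp/fp/fn/tn when tagging "error" whenever P(error) >= t.
    tp = sum(s >= t and y == 1 for s, y in zip(scores, labels))
    fp = sum(s >= t and y == 0 for s, y in zip(scores, labels))
    fn = sum(s < t and y == 1 for s, y in zip(scores, labels))
    tn = sum(s < t and y == 0 for s, y in zip(scores, labels))
    return tp, fp, fn, tn

def f_beta(tp, fp, fn, beta=0.5):
    # F0.5 weights precision over recall, matching the paper's preference.
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    b2 = beta * beta
    return (1 + b2) * p * r / (b2 * p + r) if p + r else 0.0

def mcc(tp, fp, fn, tn):
    # Matthews correlation coefficient over all four confusion cells.
    d = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / d if d else 0.0

scores = [0.9, 0.7, 0.4, 0.35, 0.2, 0.1]   # P(error) per token (toy dev set)
labels = [1, 1, 1, 0, 0, 0]                # 1 = real error
best_f, best_t = max((f_beta(*confusion(scores, labels, t)[:3]), t)
                     for t in (0.1, 0.3, 0.5, 0.7))
print(best_f, best_t)
```

In practice one would sweep a fine grid of thresholds on the development scores and report test metrics at the selected threshold.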
{
"text": "On the target positions, the fine-tuned models perform better with 8 layers, regardless of the loss function, which is consistent with experiments for English (Yin et al., 2020) . 9 Performance on the target positions is worse than for all tokens because all 9 Results with 6 layers are the worst for all models and are not reported in Table 5 .",
"cite_spans": [
{
"start": 159,
"end": 177,
"text": "(Yin et al., 2020)",
"ref_id": "BIBREF60"
},
{
"start": 259,
"end": 260,
"text": "9",
"ref_id": null
}
],
"ref_spans": [
{
"start": 336,
"end": 343,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "All models, even trained considering unbalanced data, tend to predict more often that a word is correct, which is true for most of the tokens in a paragraph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "We use the model proposed for GED by (Bell et al., 2019) as the baseline-a bi-LSTM trained with a second LM objective and combined with a character-level bi-LSTM model. We took the best performing configuration, which utilizes BERT contextual embeddings. The baseline was trained only on the real learner datasets with cross-validation (CV). We used RULEC-GEC (see Table 9 ) as a second dataset to evaluate how well our fine-tuned model can generalize on other learner corpora, despite the fact that the synthetic training dataset was generated to imitate our learner data. RULEC-GEC is a corrected and error-tagged corpus of learner writing. It is almost double in size, has different error types and higher error rate than in our learner dataset. We performed evaluation on all types of Table 7 : Macro precision, recall, F 0.5 , and F 1 evaluated on our learner dataset and RULEC-GEC. \"Baseline\" refers to a retrained model by (Bell et al., 2019) , with using BERT contextualized embeddings. BERT refers to the finetuned models, with CE-loss and 12 layers.",
"cite_spans": [
{
"start": 37,
"end": 56,
"text": "(Bell et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 930,
"end": 949,
"text": "(Bell et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 365,
"end": 372,
"text": "Table 9",
"ref_id": null
},
{
"start": 789,
"end": 796,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with other models:",
"sec_num": null
},
{
"text": "Noun Adj. Verb Pron. Num. CE+12+p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "82.0 79.7 67.9 77.9 73.2 CE+8+p 82.6 79.9 67.9 77.2 70.7 wCE+12+p 86.7 84.3 68.9 87.8 73.2 wCE+8+p 87.9 85.1 70.0 85.1 80.5 Table 8 : Accuracy of predicting correctness on the target positions for different parts of speech by the models fine-tuned on synthetic data.",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 131,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "12 480 13 047 6.3% Table 9 : Statistics for the data in RULEC-GEC.",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 26,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tokens Sentences Errors Total error rate 206 258",
"sec_num": null
},
{
"text": "replacement and deletion errors in RULEC-GEC, not only inflection errors. BERT, fine-tuned on synthetic data, performs comparably with the baseline (see Table 7 ). It has worse results on RULEC-GEC; however, it is mostly unable to detect spelling errors, as well as other error types, which were not present in the synthetic dataset (preposition, conjunction, and insertion/deletion errors). In combination with the Deep Pavlov spelling correction pipeline, 10 the finetuned model can achieve much higher performance without any additional training.",
"cite_spans": [],
"ref_spans": [
{
"start": 153,
"end": 160,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tokens Sentences Errors Total error rate 206 258",
"sec_num": null
},
{
"text": "We also experimented with fine-tuning BERT on original learner data with CV. For our dataset, results of the model fine-tuned solely on synthetic data are comparable with the model fine-tuned and tested on the original data with CV. Moreover, recall is better for the model trained on synthetic data. The reason for this might be that our learner data is too scarce. The BERT model fine-tuned and tested solely on RULEC-GEC with CV achieves much better results than any of the other tested systems. We 10 docs.deeppavlov.ai performed this evaluation primarily to compare the performance of fine-tuned BERT and the baseline on the same dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokens Sentences Errors Total error rate 206 258",
"sec_num": null
},
{
"text": "Model confidence: To evaluate confidence of the fine-tuned models, we apply the method of Monte Carlo dropout (Gal and Ghahramani, 2016) . By keeping dropout activated at test time, we can repeatedly sample T predictions for every input and estimate the predictive uncertainty by measuring the variance and entropy of the scores. We sampled T = 20 scores of the BERT model (CE-loss, 12 layers) fine-tuned on synthetic data for each test input and calculated their variance and entropy.",
"cite_spans": [
{
"start": 110,
"end": 136,
"text": "(Gal and Ghahramani, 2016)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tokens Sentences Errors Total error rate 206 258",
"sec_num": null
},
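The Monte Carlo dropout estimate described above can be sketched as follows. The stochastic toy model below merely stands in for fine-tuned BERT with dropout kept active at test time; the function names and numbers are ours, not the paper's.

```python
import math
import random

def mc_dropout_uncertainty(model, x, T=20):
    # Sample T stochastic P(error) scores for the same input and summarize
    # their spread: variance of the samples and entropy of the mean Bernoulli
    # prediction (the task is binary: error vs. correct).
    samples = [model(x) for _ in range(T)]
    mean = sum(samples) / T
    var = sum((s - mean) ** 2 for s in samples) / T
    ent = -sum(p * math.log(p) for p in (mean, 1 - mean) if p > 0)
    return mean, var, ent

random.seed(0)
# Toy stand-in for a dropout-active network: deterministic score + noise.
noisy_model = lambda x: min(max(x + random.gauss(0, 0.1), 1e-6), 1 - 1e-6)
mean, var, ent = mc_dropout_uncertainty(noisy_model, 0.5)
# When ent is above a chosen cutoff, fall back on Revita's standard
# answer-evaluation procedure instead of trusting the prediction.
```

The T-fold sampling is what multiplies the inference time by a factor of T.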
{
"text": "A deeper analysis is beyond the scope of the paper, but we observe that the scores have higher uncertainty when the models make mistakes in the predictions, see Figure 2 . To use the model in Revita, we can compute the entropy of predicted scores and disregard the predictions when the entropy is high. In that case, we can fall back on our standard procedure of evaluation of learner answers. One disadvantage of this method is that it increases the inference time by a factor of T .",
"cite_spans": [],
"ref_spans": [
{
"start": 161,
"end": 169,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Tokens Sentences Errors Total error rate 206 258",
"sec_num": null
},
{
"text": "Error Analysis: Analysis of errors shows multiple problems experienced by all fine-tuned BERT models. The most prominent are due to inverted word order and long-range dependencies; many verb forms are classified incorrectly (see Table 8 ), rare names and non-Cyrillic words are mostly classified as errors as well. This result is consistent with the previous research, which showed that finetuned BERT struggles with word order errors and verb forms for English (Yin et al., 2020) .",
"cite_spans": [
{
"start": 462,
"end": 480,
"text": "(Yin et al., 2020)",
"ref_id": "BIBREF60"
}
],
"ref_spans": [
{
"start": 229,
"end": 236,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tokens Sentences Errors Total error rate 206 258",
"sec_num": null
},
{
"text": "Errors are frequently related to conjoined elements in the sentence. For instance, in case of two subjects with one common predicate (e.g., \"Peter and John talk on the phone every day.\"), BERT cannot detect errors in the number of the predicate (\"talk\" vs. \"talks\"). Moreover, BERT often marks an erroneous word as an error along with other words syntactically related to it. This applies to both shallow and long-range relations. So, the presence of an error affects other words which are syntactically related to the erroneous one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokens Sentences Errors Total error rate 206 258",
"sec_num": null
},
{
"text": "Another interesting problem related to multiple valid possibilities to correct an erroneous sentence. The models for GED, which we have been experimenting with, have no information about where the learners' answers are located in the sentence. In some erroneous sentences, it is possible to correct a hypothetical error (not a wrong learner's answer) and to obtain a corrected sentence with a meaning which is different from the original one but also grammatically valid. When labeling such sentences, fine-tuned BERT can consider an erroneous answer as correct and predict other words in the sentence as errors, which do not agree with the inserted form. For example: \"\u042f \u0431\u044b\u043b \u0432 \u0410\u0444\u0440\u0438\u043a\u0435 \u0438 \u043c\u0435\u043d\u044f \u0442\u0430\u043c \u043a\u0440\u043e\u043a\u043e\u0434\u0438\u043b\u0430 \u0441\u044a\u0435\u043b.\" (\"I was in Africa and I was eaten by a crocodile.\"). The highlighted word \"crocodile\" should be in the nominative rather than accusative case. However, BERT predicts the word \"\u043c\u0435\u043d\u044f\" (\"me\") as error, likely expecting the nominative \"\u044f\" (\"I\"), i.e., changing the meaning of the sentence to \"I was in Africa and I ate a crocodile.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokens Sentences Errors Total error rate 206 258",
"sec_num": null
},
{
"text": "We present a study on assessing grammatical correctness in the context of language learning. Our focus is on assessing alternative-correct answers to cloze exercises-such answers are given more frequently by the more advanced learners. This work was done with the Russian version of Revita, using a learner corpus collected automatically and annotated manually. We release the corpus to the research community with this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "The motivation behind approaching the problem of alternative-correct answers as GED is based on the hypothesis that models for error detection can assess the correctness of potentially valid answers. Because learner data is limited, we experimented with pre-trained BERT as a MLM, and with several BERT models fine-tuned on synthetic data, which we generated for the task. The evaluation shows that the pre-trained BERT is not able to assess grammatical correctness of learner answers; the performance for Russian is considerably lower than for similar experiments with English. Comparison with a baseline model and evaluation on another leaner corpus demonstrates that fine-tuning on synthetic data is a promising approach and generalizes well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "We plan to improve the generation of synthetic data based on error analysis, to cover a wider range of error types, and continue work on estimation of the confidence of the model predictions, since it is critical to provide reliable feedback to the learners. We also plan to specify the positions of answers as part of the model's input, which is natural for the exercise-oriented set-up in Revita.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "The annotated data is released with this paper. The dataset contains only replacement errors due to the current design of the practice mode in Revita.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Students are aware that data is collected when they register on the platform.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For example, if the gender of a pronoun in the current paragraph is answered as feminine, but from the previous paragraph we know that it should be masculine, we do not mark such an answer as an error, if it suits the context of the current paragraph.5 github.com/Askinkaty/Russian_learner_corpora",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For example, in Russian, as in many languages, prepositions govern nouns in a specific case; adjective and noun must agree in gender, number and case; etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "docs.deeppavlov.ai/en/master/features/models/bert.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "github.com/huggingface/transformers",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported in part by the Academy of Finland, HIIT-Helsinki Institute for Information Technology, and Tulevaisuus Rahasto (Future Development Fund), University of Helsinki.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Future/Past 1 -Verb Number: Plural/Singular 2 -Noun Gender: Masculine/Feminine 3 -Noun Case: Nominative/Accusative 4 -Verb Tense: Past/Present 5 -Noun Case: Nominative/Instrumental 6 -Verb form: Finite/Transgressive 7 -Noun Number: Singular/Plural 8 -Verb Number: Singular/Plural 9 -Verb Tense: Past/Future 10 -Noun Case: Nominative/Genitive 11 -Verb Tense: Present/Past 12 -Adjective Number: Singular/Plural 13 -Noun Case: Genitive/Nominative 14 -Noun Number: Plural/Singular 15 -Noun Case: Nominative/Dative 16 -Noun Case: Accusative/Nominative",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "-Verb Tense: Future/Past 1 -Verb Number: Plural/Singular 2 -Noun Gender: Masculine/Feminine 3 -Noun Case: Nominative/Accusative 4 -Verb Tense: Past/Present 5 -Noun Case: Nominative/Instrumental 6 -Verb form: Finite/Transgressive 7 -Noun Number: Singular/Plural 8 -Verb Number: Singular/Plural 9 -Verb Tense: Past/Future 10 -Noun Case: Nominative/Genitive 11 -Verb Tense: Present/Past 12 -Adjective Number: Singular/Plural 13 -Noun Case: Genitive/Nominative 14 -Noun Number: Plural/Singular 15 -Noun Case: Nominative/Dative 16 -Noun Case: Accusative/Nominative",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Contextual string embeddings for sequence labeling",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Akbik",
"suffix": ""
},
{
"first": "Duncan",
"middle": [],
"last": "Blythe",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Vollgraf",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1638--1649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638-1649.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Neural text simplification in low-resource conditions using weak supervision",
"authors": [
{
"first": "Alessio",
"middle": [],
"last": "Palmero Aprosio",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Tonelli",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Mattia A Di",
"middle": [],
"last": "Gangi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation",
"volume": "",
"issue": "",
"pages": "37--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessio Palmero Aprosio, Sara Tonelli, Marco Turchi, Matteo Negri, and Mattia A Di Gangi. 2019. Neural text simplification in low-resource conditions using weak supervision. In Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 37-44.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Synthetic and natural noise both break neural machine translation",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.02173"
]
},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov and Yonatan Bisk. 2017. Synthetic and natural noise both break neural machine transla- tion. arXiv preprint arXiv:1711.02173.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Context is key: Grammatical error detection with contextual word representations",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "Bell",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
},
{
"first": "Marek",
"middle": [],
"last": "Rei",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.06593"
]
},
"num": null,
"urls": [],
"raw_text": "Samuel Bell, Helen Yannakoudakis, and Marek Rei. 2019. Context is key: Grammatical error detection with contextual word representations. arXiv preprint arXiv:1906.06593.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Using wikipedia edits in low resource grammatical error correction",
"authors": [
{
"first": "Adriane",
"middle": [],
"last": "Boyd",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text",
"volume": "",
"issue": "",
"pages": "79--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adriane Boyd. 2018. Using wikipedia edits in low re- source grammatical error correction. In Proceed- ings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text, pages 79- 84.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Improving the efficiency of grammatical error correction with erroneous span detection and correction",
"authors": [
{
"first": "Mengyun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "Xingxing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.03260"
]
},
"num": null,
"urls": [],
"raw_text": "Mengyun Chen, Tao Ge, Xingxing Zhang, Furu Wei, and Ming Zhou. 2020. Improving the efficiency of grammatical error correction with erroneous span detection and correction. arXiv preprint arXiv:2010.03260.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Detection of grammatical errors involving prepositions",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Chodorow",
"suffix": ""
},
{
"first": "Joel",
"middle": [
"R"
],
"last": "Tetreault",
"suffix": ""
},
{
"first": "Na-Rae",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the fourth ACL-SIGSEM workshop on prepositions",
"volume": "",
"issue": "",
"pages": "25--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Chodorow, Joel R Tetreault, and Na-Rae Han. 2007. Detection of grammatical errors involving prepositions. In Proceedings of the fourth ACL- SIGSEM workshop on prepositions, pages 25-30. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A neural grammatical error correction system built on better pre-training and sequential transfer learning",
"authors": [
{
"first": "Yo Joong",
"middle": [],
"last": "Choe",
"suffix": ""
},
{
"first": "Jiyeon",
"middle": [],
"last": "Ham",
"suffix": ""
},
{
"first": "Kyubyong",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Yeoil",
"middle": [],
"last": "Yoon",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.01256"
]
},
"num": null,
"urls": [],
"raw_text": "Yo Joong Choe, Jiyeon Ham, Kyubyong Park, and Yeoil Yoon. 2019. A neural grammatical error correc- tion system built on better pre-training and sequential transfer learning. arXiv preprint arXiv:1907.01256.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.03771"
]
},
"num": null,
"urls": [],
"raw_text": "Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, , Jamie Brew, and Thomas Wolf. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. arXiv preprint arXiv:1910.03771.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language under- standing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Generating artificial errors for grammatical error correction",
"authors": [
{
"first": "Mariano",
"middle": [],
"last": "Felice",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Yuan",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Student Research Workshop at the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "116--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mariano Felice and Zheng Yuan. 2014. Generating ar- tificial errors for grammatical error correction. In Proceedings of the Student Research Workshop at the 14th Conference of the European Chapter of the As- sociation for Computational Linguistics, pages 116- 126.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Parsing illformed text using an error grammar",
"authors": [
{
"first": "Jennifer",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2004,
"venue": "Artificial Intelligence Review",
"volume": "21",
"issue": "3-4",
"pages": "269--291",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jennifer Foster and Carl Vogel. 2004. Parsing ill- formed text using an error grammar. Artificial In- telligence Review, 21(3-4):269-291.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning",
"authors": [
{
"first": "Yarin",
"middle": [],
"last": "Gal",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
}
],
"year": 2016,
"venue": "international conference on machine learning",
"volume": "",
"issue": "",
"pages": "1050--1059",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a bayesian approximation: Representing model uncer- tainty in deep learning. In international conference on machine learning, pages 1050-1059.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Assessing BERT's syntactic abilities",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.05287"
]
},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg. 2019. Assessing BERT's syntactic abil- ities. arXiv preprint arXiv:1901.05287.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The WikEd error corpus: A corpus of corrective wikipedia edits and its application to grammatical error correction",
"authors": [
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
}
],
"year": 2014,
"venue": "International Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "478--490",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roman Grundkiewicz and Marcin Junczys-Dowmunt. 2014. The WikEd error corpus: A corpus of cor- rective wikipedia edits and its application to gram- matical error correction. In International Confer- ence on Natural Language Processing, pages 478- 490. Springer.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Neural grammatical error correction systems with unsupervised pre-training on synthetic data",
"authors": [
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "252--263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roman Grundkiewicz, Marcin Junczys-Dowmunt, and Kenneth Heafield. 2019. Neural grammatical error correction systems with unsupervised pre-training on synthetic data. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Ed- ucational Applications, pages 252-263.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Colorless green recurrent networks dream hierarchically",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Gulordava",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.11138"
]
},
"num": null,
"urls": [],
"raw_text": "Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Color- less green recurrent networks dream hierarchically. arXiv preprint arXiv:1803.11138.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Modeling language learning using specialized Elo ratings",
"authors": [
{
"first": "Jue",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "Maximilian",
"middle": [
"W"
],
"last": "Koppatz",
"suffix": ""
},
{
"first": "Jos\u00e9 Mar\u00eda",
"middle": [],
"last": "Hoya Quecedo",
"suffix": ""
},
{
"first": "Nataliya",
"middle": [],
"last": "Stoyanova",
"suffix": ""
},
{
"first": "Mikhail",
"middle": [],
"last": "Kopotev",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Yangarber",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, ACL: 56th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "494--506",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jue Hou, Maximilian W Koppatz, Jos\u00e9 Mar\u0131a Hoya Que- cedo, Nataliya Stoyanova, Mikhail Kopotev, and Ro- man Yangarber. 2019. Modeling language learning using specialized Elo ratings. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, ACL: 56th an- nual meeting of the Association for Computational Linguistics, pages 494-506.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Exploring the limits of language modeling",
"authors": [
{
"first": "Rafal",
"middle": [],
"last": "Jozefowicz",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1602.02410"
]
},
"num": null,
"urls": [],
"raw_text": "Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Approaching neural grammatical error correction as a low-resource machine translation task",
"authors": [
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Shubha",
"middle": [],
"last": "Guha",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.05940"
]
},
"num": null,
"urls": [],
"raw_text": "Marcin Junczys-Dowmunt, Roman Grundkiewicz, Shubha Guha, and Kenneth Heafield. 2018. Ap- proaching neural grammatical error correction as a low-resource machine translation task. arXiv preprint arXiv:1804.05940.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Multihead multi-layer attention to deep language representations for grammatical error detection",
"authors": [
{
"first": "Masahiro",
"middle": [],
"last": "Kaneko",
"suffix": ""
},
{
"first": "Mamoru",
"middle": [],
"last": "Komachi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.07334"
]
},
"num": null,
"urls": [],
"raw_text": "Masahiro Kaneko and Mamoru Komachi. 2019. Multi- head multi-layer attention to deep language repre- sentations for grammatical error detection. arXiv preprint arXiv:1904.07334.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Encoder-decoder models can benefit from pre-trained masked language models in grammatical error correction",
"authors": [
{
"first": "Masahiro",
"middle": [],
"last": "Kaneko",
"suffix": ""
},
{
"first": "Masato",
"middle": [],
"last": "Mita",
"suffix": ""
},
{
"first": "Shun",
"middle": [],
"last": "Kiyono",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.00987"
]
},
"num": null,
"urls": [],
"raw_text": "Masahiro Kaneko, Masato Mita, Shun Kiyono, Jun Suzuki, and Kentaro Inui. 2020. Encoder-decoder models can benefit from pre-trained masked lan- guage models in grammatical error correction. arXiv preprint arXiv:2005.00987.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Grammatical error detection using error-and grammaticality-specific word embeddings",
"authors": [
{
"first": "Masahiro",
"middle": [],
"last": "Kaneko",
"suffix": ""
},
{
"first": "Yuya",
"middle": [],
"last": "Sakaizawa",
"suffix": ""
},
{
"first": "Mamoru",
"middle": [],
"last": "Komachi",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masahiro Kaneko, Yuya Sakaizawa, and Mamoru Ko- machi. 2017. Grammatical error detection using error-and grammaticality-specific word embeddings.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"authors": [],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "40--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In Proceedings of the Eighth International Joint Con- ference on Natural Language Processing (Volume 1: Long Papers), pages 40-48.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Wronging a right: Generating better errors to improve grammatical error detection",
"authors": [
{
"first": "Sudhanshu",
"middle": [],
"last": "Kasewa",
"suffix": ""
},
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.00668"
]
},
"num": null,
"urls": [],
"raw_text": "Sudhanshu Kasewa, Pontus Stenetorp, and Sebastian Riedel. 2018. Wronging a right: Generating better errors to improve grammatical error detection. arXiv preprint arXiv:1810.00668.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Toward a paradigm shift in collection of learner corpora",
"authors": [
{
"first": "Anisia",
"middle": [],
"last": "Katinskaia",
"suffix": ""
},
{
"first": "Sardana",
"middle": [],
"last": "Ivanova",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Yangarber",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "386--391",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anisia Katinskaia, Sardana Ivanova, and Roman Yan- garber. 2020. Toward a paradigm shift in collection of learner corpora. In Proceedings of The 12th Lan- guage Resources and Evaluation Conference, pages 386-391.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Multiple admissibility in language learning: Judging grammaticality using unlabeled data",
"authors": [
{
"first": "Anisia",
"middle": [],
"last": "Katinskaia",
"suffix": ""
},
{
"first": "Sardana",
"middle": [],
"last": "Ivanova",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Yangarber",
"suffix": ""
}
],
"year": 2019,
"venue": "The 7th Workshop on Balto-Slavic Natural Language Processing Proceedings of the Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anisia Katinskaia, Sardana Ivanova, Roman Yangarber, et al. 2019. Multiple admissibility in language learn- ing: Judging grammaticality using unlabeled data. In The 7th Workshop on Balto-Slavic Natural Lan- guage Processing Proceedings of the Workshop. The Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Revita: a system for language learning and supporting endangered languages",
"authors": [
{
"first": "Anisia",
"middle": [],
"last": "Katinskaia",
"suffix": ""
},
{
"first": "Javad",
"middle": [],
"last": "Nouri",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Yangarber",
"suffix": ""
}
],
"year": 2017,
"venue": "6th Workshop on NLP for CALL and 2nd Workshop on NLP for Research on Language Acquisition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anisia Katinskaia, Javad Nouri, and Roman Yangarber. 2017. Revita: a system for language learning and supporting endangered languages. In 6th Workshop on NLP for CALL and 2nd Workshop on NLP for Research on Language Acquisition, at NoDaLiDa, Gothenburg, Sweden.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Revita: a language-learning platform at the intersection of ITS and CALL",
"authors": [
{
"first": "Anisia",
"middle": [],
"last": "Katinskaia",
"suffix": ""
},
{
"first": "Javad",
"middle": [],
"last": "Nouri",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Yangarber",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). European Language Resources Association (ELRA)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anisia Katinskaia, Javad Nouri, Roman Yangarber, et al. 2018. Revita: a language-learning platform at the intersection of ITS and CALL. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). European Language Resources Association (ELRA).",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Digital cultural heritage and revitalization of endangered Finno-Ugric languages",
"authors": [
{
"first": "Anisia",
"middle": [],
"last": "Katinskaia",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Yangarber",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 3rd Conference on Digital Humanities in the Nordic Countries",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anisia Katinskaia and Roman Yangarber. 2018. Dig- ital cultural heritage and revitalization of endan- gered Finno-Ugric languages. In Proceedings of the 3rd Conference on Digital Humanities in the Nordic Countries, Helsinki, Finland.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Stronger baselines for grammatical error correction using pretrained encoder-decoder model",
"authors": [
{
"first": "Satoru",
"middle": [],
"last": "Katsumata",
"suffix": ""
},
{
"first": "Mamoru",
"middle": [],
"last": "Komachi",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.11849"
]
},
"num": null,
"urls": [],
"raw_text": "Satoru Katsumata and Mamoru Komachi. 2020. Stronger baselines for grammatical error correction using pretrained encoder-decoder model. arXiv preprint arXiv:2005.11849.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "An empirical study of incorporating pseudo data into grammatical error correction",
"authors": [
{
"first": "Shun",
"middle": [],
"last": "Kiyono",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Masato",
"middle": [],
"last": "Mita",
"suffix": ""
},
{
"first": "Tomoya",
"middle": [],
"last": "Mizumoto",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.00502"
]
},
"num": null,
"urls": [],
"raw_text": "Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizu- moto, and Kentaro Inui. 2019. An empirical study of incorporating pseudo data into grammatical error correction. arXiv preprint arXiv:1909.00502.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Morphological analyzer and generator for Russian and Ukrainian languages",
"authors": [
{
"first": "Mikhail",
"middle": [],
"last": "Korobov",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Analysis of Images, Social Networks and Texts",
"volume": "",
"issue": "",
"pages": "320--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikhail Korobov. 2015. Morphological analyzer and generator for Russian and Ukrainian languages. In International Conference on Analysis of Images, So- cial Networks and Texts, pages 320-332. Springer.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Adaptation of deep bidirectional multilingual transformers for Russian language",
"authors": [
{
"first": "Yuri",
"middle": [],
"last": "Kuratov",
"suffix": ""
},
{
"first": "Mikhail",
"middle": [],
"last": "Arkhipov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1905.07213"
]
},
"num": null,
"urls": [],
"raw_text": "Yuri Kuratov and Mikhail Arkhipov. 2019. Adap- tation of deep bidirectional multilingual trans- formers for Russian language. arXiv preprint arXiv:1905.07213.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Correcting misuse of verb forms",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Seneff",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "174--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lee and Stephanie Seneff. 2008. Correcting mis- use of verb forms. In Proceedings of ACL-08: HLT, pages 174-182.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Towards minimal supervision BERTbased grammar error correction",
"authors": [
{
"first": "Yiyuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2001.03521"
]
},
"num": null,
"urls": [],
"raw_text": "Yiyuan Li, Antonios Anastasopoulos, and Alan W Black. 2020. Towards minimal supervision BERT- based grammar error correction. arXiv preprint arXiv:2001.03521.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Corpora generation for grammatical error correction",
"authors": [
{
"first": "Jared",
"middle": [],
"last": "Lichtarge",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Shankar",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Tong",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.05780"
]
},
"num": null,
"urls": [],
"raw_text": "Jared Lichtarge, Chris Alberti, Shankar Kumar, Noam Shazeer, Niki Parmar, and Simon Tong. 2019. Cor- pora generation for grammatical error correction. arXiv preprint arXiv:1904.05780.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Assessing the ability of LSTMs to learn syntax-sensitive dependencies",
"authors": [
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "521--535",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521- 535.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Exploiting unlabeled data for neural grammatical error detection",
"authors": [
{
"first": "Zhuo-Ran",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of Computer Science and Technology",
"volume": "32",
"issue": "4",
"pages": "758--767",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhuo-Ran Liu and Yang Liu. 2017. Exploiting un- labeled data for neural grammatical error detec- tion. Journal of Computer Science and Technology, 32(4):758-767.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Targeted syntactic evaluation of language models",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "Marvin",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.09031"
]
},
"num": null,
"urls": [],
"raw_text": "Rebecca Marvin and Tal Linzen. 2018. Targeted syn- tactic evaluation of language models. arXiv preprint arXiv:1808.09031.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Grammatical error correction in low-resource scenarios",
"authors": [
{
"first": "Jakub",
"middle": [],
"last": "N\u00e1plava",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.00353"
]
},
"num": null,
"urls": [],
"raw_text": "Jakub N\u00e1plava and Milan Straka. 2019. Grammatical error correction in low-resource scenarios. arXiv preprint arXiv:1910.00353.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Semi-supervised sequence tagging with bidirectional language models",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Russell",
"middle": [],
"last": "Power",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1756--1765",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Waleed Ammar, Chandra Bhaga- vatula, and Russell Power. 2017. Semi-supervised sequence tagging with bidirectional language mod- els. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, page 1756-1765.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "To tune or not to tune?",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Adapting pretrained representations to diverse tasks",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.05987"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Sebastian Ruder, and Noah A Smith. 2019. To tune or not to tune? Adapting pretrained representations to diverse tasks. arXiv preprint arXiv:1903.05987.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation",
"authors": [
{
"first": "David",
"middle": [
"M",
"W"
],
"last": "Powers",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.16061"
]
},
"num": null,
"urls": [],
"raw_text": "David MW Powers. 2020. Evaluation: from pre- cision, recall and F-measure to ROC, informed- ness, markedness and correlation. arXiv preprint arXiv:2010.16061.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Improving language understanding by generative pre-training",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Salimans",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2. amazonaws. com/openai- assets/researchcovers/languageunsupervised/language understanding paper. pdf.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Semi-supervised multitask learning for sequence labeling",
"authors": [
{
"first": "Marek",
"middle": [],
"last": "Rei",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.07156"
]
},
"num": null,
"urls": [],
"raw_text": "Marek Rei. 2017. Semi-supervised multitask learning for sequence labeling. arXiv preprint arXiv:1704.07156.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Attending to characters in neural sequence labeling models",
"authors": [
{
"first": "Marek",
"middle": [],
"last": "Rei",
"suffix": ""
},
{
"first": "Gamal",
"middle": [
"K",
"O"
],
"last": "Crichton",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.04361"
]
},
"num": null,
"urls": [],
"raw_text": "Marek Rei, Gamal KO Crichton, and Sampo Pyysalo. 2016. Attending to characters in neural sequence la- beling models. arXiv preprint arXiv:1611.04361.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Artificial error generation with machine translation and syntactic patterns",
"authors": [
{
"first": "Marek",
"middle": [],
"last": "Rei",
"suffix": ""
},
{
"first": "Mariano",
"middle": [],
"last": "Felice",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1707.05236"
]
},
"num": null,
"urls": [],
"raw_text": "Marek Rei, Mariano Felice, Zheng Yuan, and Ted Briscoe. 2017. Artificial error generation with machine translation and syntactic patterns. arXiv preprint arXiv:1707.05236.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Jointly learning to label sentences and tokens",
"authors": [
{
"first": "Marek",
"middle": [],
"last": "Rei",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "6916--6923",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marek Rei and Anders S\u00f8gaard. 2019. Jointly learn- ing to label sentences and tokens. In Proceedings of the AAAI Conference on Artificial Intelligence, vol- ume 33, pages 6916-6923.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Compositional sequence labeling models for error detection in learner writing",
"authors": [
{
"first": "Marek",
"middle": [],
"last": "Rei",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1607.06153"
]
},
"num": null,
"urls": [],
"raw_text": "Marek Rei and Helen Yannakoudakis. 2016. Composi- tional sequence labeling models for error detection in learner writing. arXiv preprint arXiv:1607.06153.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Auxiliary objectives for neural error detection models",
"authors": [
{
"first": "Marek",
"middle": [],
"last": "Rei",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1707.05227"
]
},
"num": null,
"urls": [],
"raw_text": "Marek Rei and Helen Yannakoudakis. 2017. Auxiliary objectives for neural error detection models. arXiv preprint arXiv:1707.05227.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "A primer in bertology: What we know about how BERT works",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Kovaleva",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.12327"
]
},
"num": null,
"urls": [],
"raw_text": "Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in bertology: What we know about how BERT works. arXiv preprint arXiv:2002.12327.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Grammar error correction in morphologically rich languages: The case of Russian",
"authors": [
{
"first": "Alla",
"middle": [],
"last": "Rozovskaya",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "1--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alla Rozovskaya and Dan Roth. 2019. Grammar error correction in morphologically rich languages: The case of Russian. Transactions of the Association for Computational Linguistics, 7:1-17.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Japanese and Korean voice search",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Kaisuke",
"middle": [],
"last": "Nakajima",
"suffix": ""
}
],
"year": 2012,
"venue": "2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "5149--5152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Schuster and Kaisuke Nakajima. 2012. Japanese and Korean voice search. In 2012 IEEE Interna- tional Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5149-5152. IEEE.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "To the methodology of corpus construction for machine learning",
"authors": [
{
"first": "Tatiana",
"middle": [],
"last": "Shavrina",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Shapovalova",
"suffix": ""
}
],
"year": 2017,
"venue": "Taiga\" syntax tree corpus and parser. Corpus Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tatiana Shavrina and Olga Shapovalova. 2017. To the methodology of corpus construction for machine learning: \"Taiga\" syntax tree corpus and parser. Cor- pus Linguistics, page 78.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "A comparative study of synthetic data generation methods for grammatical error correction",
"authors": [
{
"first": "Max",
"middle": [],
"last": "White",
"suffix": ""
},
{
"first": "Alla",
"middle": [],
"last": "Rozovskaya",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "198--208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Max White and Alla Rozovskaya. 2020. A comparative study of synthetic data generation methods for gram- matical error correction. In Proceedings of the Fif- teenth Workshop on Innovative Use of NLP for Build- ing Educational Applications, pages 198-208.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Some additional experiments extending the tech report \"Assessing BERT's syntactic abilities\" by Yoav Goldberg",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf. 2019. Some additional experiments ex- tending the tech report \"Assessing BERT's syntactic abilities\" by Yoav Goldberg. Technical report.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Noising and denoising natural language: Diverse backtranslation for grammar correction",
"authors": [
{
"first": "Ziang",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Genthial",
"suffix": ""
},
{
"first": "Stanley",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "619--628",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziang Xie, Guillaume Genthial, Stanley Xie, Andrew Y Ng, and Dan Jurafsky. 2018. Noising and denoising natural language: Diverse backtranslation for gram- mar correction. In Proceedings of the 2018 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 619-628.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Developing an automated writing placement system for ESL learners",
"authors": [
{
"first": "Helen",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
},
{
"first": "\u00d8istein",
"middle": [
"E"
],
"last": "Andersen",
"suffix": ""
},
{
"first": "Ardeshir",
"middle": [],
"last": "Geranpayeh",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "Diane",
"middle": [],
"last": "Nicholls",
"suffix": ""
}
],
"year": 2018,
"venue": "Applied Measurement in Education",
"volume": "31",
"issue": "3",
"pages": "251--267",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helen Yannakoudakis, \u00d8istein E Andersen, Ardeshir Geranpayeh, Ted Briscoe, and Diane Nicholls. 2018. Developing an automated writing placement system for ESL learners. Applied Measurement in Educa- tion, 31(3):251-267.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "On the robustness of language encoders against grammatical errors",
"authors": [
{
"first": "Fan",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Quanyu",
"middle": [],
"last": "Long",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.05683"
]
},
"num": null,
"urls": [],
"raw_text": "Fan Yin, Quanyu Long, Tao Meng, and Kai-Wei Chang. 2020. On the robustness of language en- coders against grammatical errors. arXiv preprint arXiv:2005.05683.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Improving grammatical error correction via pre-training a copy-augmented architecture with unlabeled data",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kewei",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Ruoyu",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Jingming",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.00138"
]
},
"num": null,
"urls": [],
"raw_text": "Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, and Jingming Liu. 2019. Improving grammatical er- ror correction via pre-training a copy-augmented architecture with unlabeled data. arXiv preprint arXiv:1903.00138.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Histograms and kernel density estimations of log-probabilities assigned by BERT as an MLM to: grammatical errors, randomly chosen correct words (not used in exercises), and alternative-correct answers."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Percentage of the 17 most frequent correctly assessed categories in alternative-correct answers (X-axis) vs. entropy of predicted scores (Y-axis), sampled from 20 BERT models fine-tuned on synthetic data."
},
"TABREF1": {
"html": null,
"num": null,
"text": "Most frequent grammatical errors and alternative-correct (AC) answers in the annotated dataset.",
"type_str": "table",
"content": "<table/>"
},
"TABREF3": {
"html": null,
"num": null,
"text": "Accuracy of BERT as an MLM on detecting errors. The 3 left columns present results on masking only one target word in the sentence; the 3 right columns present masking multiple learner answers jointly; err and corr denote accuracy for masked grammatical errors and alternative-correct answers, respectively. Balanced accuracy (bAcc) is calculated for both classes: errors and alternative-correct answers.",
"type_str": "table",
"content": "<table/>"
},
"TABREF6": {
"html": null,
"num": null,
"text": "Decision thresholds for best fine-tuned BERT models.",
"type_str": "table",
"content": "<table/>"
}
}
}
}