{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:10:31.446453Z"
},
"title": "Should You Fine-Tune BERT for Automated Essay Scoring?",
"authors": [
{
"first": "Elijah",
"middle": [],
"last": "Mayfield",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Language Technologies Institute Carnegie Mellon University",
"location": {}
},
"email": "elijah@cmu.edu"
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Language Technologies Institute Carnegie Mellon University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Most natural language processing research now recommends large Transformer-based models with fine-tuning for supervised classification tasks; older strategies like bag-of-words features and linear models have fallen out of favor. Here we investigate whether, in automated essay scoring (AES) research, deep neural models are an appropriate technological choice. We find that fine-tuning BERT produces similar performance to classical models at significant additional cost. We argue that while state-of-the-art strategies do match existing best results, they come with opportunity costs in computational resources. We conclude with a review of promising areas for research on student essays where the unique characteristics of Transformers may provide benefits over classical methods to justify the costs.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Most natural language processing research now recommends large Transformer-based models with fine-tuning for supervised classification tasks; older strategies like bag-of-words features and linear models have fallen out of favor. Here we investigate whether, in automated essay scoring (AES) research, deep neural models are an appropriate technological choice. We find that fine-tuning BERT produces similar performance to classical models at significant additional cost. We argue that while state-of-the-art strategies do match existing best results, they come with opportunity costs in computational resources. We conclude with a review of promising areas for research on student essays where the unique characteristics of Transformers may provide benefits over classical methods to justify the costs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Automated essay scoring (AES) mimics the judgment of educators evaluating the quality of student writing. Originally used for summative purposes in standardized testing and the GRE (Chen et al., 2016) , these systems are now frequently found in classrooms (Wilson and Roscoe, 2019) , typically enabled by training data scored on reliable rubrics to give consistent and clear goals for writers (Reddy and Andrade, 2010) .",
"cite_spans": [
{
"start": 181,
"end": 200,
"text": "(Chen et al., 2016)",
"ref_id": "BIBREF7"
},
{
"start": 256,
"end": 281,
"text": "(Wilson and Roscoe, 2019)",
"ref_id": "BIBREF74"
},
{
"start": 393,
"end": 418,
"text": "(Reddy and Andrade, 2010)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "More broadly, the natural language processing (NLP) research community in recent years has been dominated by deep neural network research, in particular, the Transformer architecture popularized by BERT (Devlin et al., 2019) . These models use large volumes of existing text data to pre-train multilayer neural networks with context-sensitive meaning of, and relations between, words. The models, which often consist of over 100 million parameters, are then fine-tuned to a specific new labeled dataset and used for classification, generation, or structured prediction.",
"cite_spans": [
{
"start": 203,
"end": 224,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Research in AES, though, has tended to prioritize simpler models, usually multivariate regression using a small set of justifiable variables chosen by psychometricians (Attali and Burstein, 2004) . This produces models that retain direct mappings between variables and recognizable characteristics of writing, like coherence or lexical sophistication (Yannakoudakis and Briscoe, 2012; Vajjala, 2018) . In psychometrics more generally, this focus on features as valid \"constructs\" leans on a rigorous and well-defined set of principles (Attali, 2013) . This approach is at odds with Transformer-based research, and so our core question for this work is: for AES specifically, is a move to deep neural models worth the cost?",
"cite_spans": [
{
"start": 168,
"end": 195,
"text": "(Attali and Burstein, 2004)",
"ref_id": "BIBREF3"
},
{
"start": 351,
"end": 384,
"text": "(Yannakoudakis and Briscoe, 2012;",
"ref_id": "BIBREF77"
},
{
"start": 385,
"end": 399,
"text": "Vajjala, 2018)",
"ref_id": "BIBREF68"
},
{
"start": 535,
"end": 549,
"text": "(Attali, 2013)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The chief technical contribution of this work is to measure results for BERT when fine-tuned for AES. In section 3 we describe an experimental setup with multiple levels of technical difficulty from bag-of-words models to fine-tuned Transformers, and in section 5 we show that the approaches perform similarly. In AES, human inter-rater reliability creates a ceiling for scoring model accuracy. While Transformers match state-of-the-art accuracy, they do so with significant tradeoffs; we show that this includes a slowdown in training time of up to 100x. Our data shows that these Transformer models improve on N-gram baselines by no more than 5%. Given this result, in section 6 we describe areas of contemporary research on Transformers that show both promising early results and a potential alignment to educational pedagogy beyond reliable scoring.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In AES, student essays are scored either on a single holistic scale, or analytically following a rubric that breaks out subscores based on \"traits.\" These scores are almost always integer-valued, and almost universally have fewer than 10 possible score points, though some research has used scales with as many as 60 points (Shermis, 2014) . In most contexts, students respond to \"prompts,\" a specific writing activity with predefined content. Work in natural language processing and speech evaluation has used advanced features like discourse coherence (Wang et al., 2013) and argument extraction (Nguyen and Litman, 2018); for proficient writers in professional settings, automated scaffolds like grammatical error detection and correction also exist (Ng et al., 2014) .",
"cite_spans": [
{
"start": 324,
"end": 339,
"text": "(Shermis, 2014)",
"ref_id": "BIBREF61"
},
{
"start": 554,
"end": 573,
"text": "(Wang et al., 2013)",
"ref_id": "BIBREF70"
},
{
"start": 753,
"end": 770,
"text": "(Ng et al., 2014)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Natural language processing has historically used n-gram bag-of-words features to predict labels for documents. These were the standard representation of text data for decades and are still in widespread use (Jurafsky and Martin, 2014) . In the last decade, the field moved to word embeddings, where words are represented not as a single feature but as dense vectors learned from large unsupervised corpora. While early approaches to dense representations using latent semantic analysis have been a major part of the literature on AES (Foltz et al., 2000; Miller, 2003) , these were corpus-specific representations. In contrast, recent work is general-purpose, resulting in off-the-shelf representations like GloVe (Pennington et al., 2014) . This allows similar words to have approximately similar representations, effectively managing lexical sparsity.",
"cite_spans": [
{
"start": 208,
"end": 235,
"text": "(Jurafsky and Martin, 2014)",
"ref_id": "BIBREF28"
},
{
"start": 535,
"end": 555,
"text": "(Foltz et al., 2000;",
"ref_id": "BIBREF17"
},
{
"start": 556,
"end": 569,
"text": "Miller, 2003)",
"ref_id": "BIBREF39"
},
{
"start": 714,
"end": 739,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "But the greatest recent innovation has been contextual word embeddings, based on deep neural networks and in particular, Transformers. Rather than encoding a word's semantics as a static vector, these models adjust the representation of words based on their context in new documents. With multiple layers and sophisticated attention mechanisms (Bahdanau et al., 2015) , these newer models have outperformed the state-of-the-art on numerous tasks, and are currently the most accurate models on a very wide range of tasks (Vaswani et al., 2017; . The most popular architecture, BERT, produces a 768-dimensional final embedding based on a network with over 100 million total parameters in 12 layers; pre-trained models are available for open source use (Devlin et al., 2019) .",
"cite_spans": [
{
"start": 344,
"end": 367,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF4"
},
{
"start": 520,
"end": 542,
"text": "(Vaswani et al., 2017;",
"ref_id": "BIBREF69"
},
{
"start": 749,
"end": 770,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "For document classification, BERT is \"fine-tuned\" by adding a final layer at the end of the Transformer architecture, with one output neuron per class label. When learning from a new set of labeled training data, BERT evaluates the training set multiple times (each pass is called an epoch).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "A loss function, propagated backward through the network, allows the model to learn relationships between the class labels in the new data and the contextual meaning of the words in the text. A learning rate determines the amount of change to a model's parameters. Extensive results have shown that careful control of the learning rate in a curriculum can produce an effective fine-tuning process (Smith, 2018) . While remarkably effective, our community is only just beginning to identify exactly what is learned in this process; research in \"BERT-ology\" is ongoing (Kovaleva et al., 2019; Jawahar et al., 2019; Tenney et al., 2019) .",
"cite_spans": [
{
"start": 393,
"end": 406,
"text": "(Smith, 2018)",
"ref_id": "BIBREF63"
},
{
"start": 563,
"end": 586,
"text": "(Kovaleva et al., 2019;",
"ref_id": "BIBREF31"
},
{
"start": 587,
"end": 608,
"text": "Jawahar et al., 2019;",
"ref_id": "BIBREF26"
},
{
"start": 609,
"end": 629,
"text": "Tenney et al., 2019)",
"ref_id": "BIBREF67"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "These neural models are just starting to be used in machine learning for AES, especially as an intermediate representation for automated essay feedback (Fiacco et al., 2019; Nadeem et al., 2019) . End-to-end neural AES models are in their infancy and have only seen exploratory studies like Rodriguez et al. (2019) ; to our knowledge, no commercial vendor yet uses Transformers as the representation for high-stakes automated scoring.",
"cite_spans": [
{
"start": 152,
"end": 173,
"text": "(Fiacco et al., 2019;",
"ref_id": "BIBREF16"
},
{
"start": 174,
"end": 194,
"text": "Nadeem et al., 2019)",
"ref_id": "BIBREF41"
},
{
"start": 291,
"end": 314,
"text": "Rodriguez et al. (2019)",
"ref_id": "BIBREF57"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "To date, there are no best practices on fine-tuning Transformers for AES; in this section we present options. We begin with a classical baseline of traditional bag-of-words approaches and non-contextual word embeddings, used with Na\u00efve Bayes and logistic regression classifiers, respectively. We then describe three curriculum learning options for fine-tuning BERT using AES data based on broader best practices. We end with two approaches based on BERT but without fine-tuning, with reduced hardware requirements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLP for Automated Essay Scoring",
"sec_num": "3"
},
{
"text": "The simplest feature set for document classification tasks, \"bag-of-words,\" extracts surface N-grams of length 1-2 with \"one-hot\" binary values indicating presence or absence in a document. In prior AES results, this representation is surprisingly effective, and can be improved with simple extensions: N-grams based on part-of-speech tags (of length 2-3) to capture syntax independent of content, and character-level N-grams of length 3-4, to provide robustness to misspellings (Woods et al., 2017; Riordan et al., 2019) . This high-dimensional representation typically has a cutoff threshold where rare tokens are excluded: in our implementation, we exclude N-grams without at least 5 occurrences in training data. Even after this reduction, this is a sparse feature space with thousands of dimensions. For learning with bag-of-words, we use a Na\u00efve Bayes classifier with Laplace smoothing from Scikit-learn (Pedregosa et al., 2011) , with part-of-speech tagging from SpaCy (Honnibal and Montani, 2017) .",
"cite_spans": [
{
"start": 479,
"end": 499,
"text": "(Woods et al., 2017;",
"ref_id": "BIBREF75"
},
{
"start": 500,
"end": 521,
"text": "Riordan et al., 2019)",
"ref_id": "BIBREF56"
},
{
"start": 909,
"end": 933,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF46"
},
{
"start": 975,
"end": 1003,
"text": "(Honnibal and Montani, 2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bag-of-Words Representations",
"sec_num": "3.1"
},
{
"text": "A more modern representation of text uses word-level embeddings. This produces a vector, typically of up to 300 dimensions, representing each word in a document. In our implementation, we represent each document as the term-frequency-weighted mean of word-level embedding vectors from GloVe (Pennington et al., 2014) . Unlike one-hot bag-of-words features, embeddings have dense real-valued features and Na\u00efve Bayes models are inappropriate; we instead train a logistic regression classifier with the LibLinear solver (Fan et al., 2008) and L2 regularization from Scikit-learn.",
"cite_spans": [
{
"start": 289,
"end": 314,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF47"
},
{
"start": 515,
"end": 533,
"text": "(Fan et al., 2008)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Embeddings",
"sec_num": "3.2"
},
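A minimal sketch of this representation, with small random vectors standing in for pretrained GloVe embeddings (real use would load e.g. the glove.6B.300d vectors); the corpus and labels are invented.

```python
import numpy as np
from collections import Counter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for GloVe: 50-dim random vectors over a toy vocabulary.
vocab = "the author gives strong evidence claim essay short lacks support".split()
glove = {w: rng.normal(size=50) for w in vocab}

def doc_vector(text, dim=50):
    """Term-frequency-weighted mean of word embeddings (OOV words dropped)."""
    counts = Counter(w for w in text.lower().split() if w in glove)
    if not counts:
        return np.zeros(dim)
    total = sum(counts.values())
    return sum((n / total) * glove[w] for w, n in counts.items())

docs = ["the author gives strong evidence",
        "short essay lacks support",
        "strong evidence supports the claim",
        "the essay is short"]
y = [3, 1, 3, 1]

X = np.stack([doc_vector(d) for d in docs])
clf = LogisticRegression(solver="liblinear", penalty="l2").fit(X, y)
p = clf.predict(doc_vector("strong evidence from the author").reshape(1, -1))[0]
```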
{
"text": "Moving to neural models, we fine-tune an uncased BERT model using the Fast.ai library. This library's visibility to first-time users of deep learning and its accessible online learning materials make its default choices the most accessible route for practitioners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-Tuning BERT",
"sec_num": "3.3"
},
{
"text": "Fast.ai recommends the use of cyclical learning rate curricula for fine-tuning. In this policy, upper and lower bounds on the learning rate are established. lr_max is a hyperparameter defining the maximum learning rate in one epoch of learning. In cyclical learning, the learning rate for fine-tuning begins at the lower bound, rises to the upper bound, then descends back to the lower bound. A high learning rate midway through training acts as regularization, allowing the model to avoid overfitting and to escape local optima. Lower learning rates at the beginning and end of cycles allow for optimization within a local optimum, giving the model an opportunity to discover fine-grained new information again. In our work, we set lr_max = 0.00001. A lower bound is then derived from the upper bound, lr_min = 0.04 * lr_max; this again is default behavior in the Fast.ai library.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-Tuning BERT",
"sec_num": "3.3"
},
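One triangular cycle under the bounds above (lr_max = 1e-5, lr_min = 0.04 * lr_max) can be sketched as follows; this illustrates the shape of the schedule, not Fast.ai's internal implementation.

```python
def cyclical_lr(step, steps_per_cycle, lr_max=1e-5, lr_min_frac=0.04):
    """Triangular cyclical learning rate: rise from lr_min to lr_max over the
    first half of the cycle, then descend back to lr_min over the second half."""
    lr_min = lr_min_frac * lr_max
    pos = (step % steps_per_cycle) / steps_per_cycle  # position in [0, 1)
    tri = 1.0 - abs(2.0 * pos - 1.0)                  # ramps 0 -> 1 -> 0
    return lr_min + (lr_max - lr_min) * tri
```

At step 0 the rate is lr_min = 4e-7, at the midpoint of the cycle it reaches lr_max = 1e-5, and it returns toward lr_min by the end of the cycle.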
{
"text": "We assess three different curricula for cyclical learning rates, visualized in Figure 1 . In the default approach, a maximum learning rate is set and cycles are repeated until reaching a threshold; for a halting criterion, we measure validation set accuracy. Because of noise in deep learning training, halting at any decrease can lead to premature stops; it is preferable to allow some occasional, small drop in performance. In our implementation we halt when accuracy on a validation set, measured in quadratic weighted kappa, decreases by over 0.01. In the second, \"two-rate\" approach (Smith, 2018), we follow this algorithm, but when we would halt, we instead backtrack by one epoch to a saved version of the network and restart training with a learning rate of 1 \u00d7 10^\u22126 (one order of magnitude smaller). Finally, in the \"1-cycle\" policy, training is condensed into a single rise-and-fall pattern, spread over N epochs. The exact training length N is a hyperparameter tuned on validation data. Note that while BERT is optimized for sentence encoding, it is able to process documents up to 512 words long. In our data, we truncate a small number of essays longer than this maximum, mostly in ASAP dataset #2.",
"cite_spans": [],
"ref_spans": [
{
"start": 79,
"end": 87,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Fine-Tuning BERT",
"sec_num": "3.3"
},
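The default halting criterion can be sketched as follows. Here train_epoch is a hypothetical callback that runs one epoch of fine-tuning and returns validation QWK, and "backtracking" to a saved checkpoint is simplified to returning the best epoch seen; a real implementation would reload saved weights.

```python
def run_until_qwk_drop(train_epoch, tolerance=0.01):
    """Train epoch by epoch until validation QWK falls more than `tolerance`
    below the best value seen so far; return (best_epoch, best_qwk)."""
    best_qwk, best_epoch, epoch = float("-inf"), 0, 0
    while True:
        qwk = train_epoch(epoch)
        if qwk > best_qwk:
            best_qwk, best_epoch = qwk, epoch
        elif best_qwk - qwk > tolerance:   # drop of more than 0.01: halt
            return best_epoch, best_qwk    # revert to the saved best epoch
        epoch += 1

# Hypothetical validation-QWK trace: small dip at epoch 2 is tolerated,
# but the larger drop at epoch 4 triggers halting.
demo = [0.60, 0.65, 0.648, 0.70, 0.68]
best_epoch, best_qwk = run_until_qwk_drop(lambda e: demo[e])
```

The "two-rate" variant would, instead of returning, reload the saved weights and continue training from there with the learning rate lowered by an order of magnitude.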
{
"text": "Fine-tuning is computationally expensive and can only run on GPU-enabled devices. Many practitioners in low-resource settings may not have access to appropriate cloud computing environments for these techniques. Previous work has described a compromise approach for using Transformer models without fine-tuning. In Peters et al. (2019), the authors describe a new pipeline. Document texts are processed with an untuned BERT model; the final activations from the network on the [CLS] token are then used directly as contextual word embeddings. This 768-dimensional feature vector represents the full document, and is used as input to a linear classifier. In the education context, a similar approach was described in Nadeem et al. (2019) as a baseline for evaluation of language-learner essays. This process allows us to use the world knowledge embedded in BERT without requiring fine-tuning of the model itself, and without need for GPUs at training or prediction time. For our work, we train a logistic regression classifier as described in Section 3.2.",
"cite_spans": [
{
"start": 715,
"end": 735,
"text": "Nadeem et al. (2019)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction from BERT",
"sec_num": "3.4"
},
{
"text": "Recent work has highlighted the extreme carbon costs of full Transformer fine-tuning (Strubell et al., 2019) and the desire for Transformer-based prediction on-device without access to cloud compute. In response to these concerns, Sanh et al. (2019) introduce DistilBERT, which they argue is equivalent to BERT in most practical aspects while reducing parameter size by 40% to 66 million, and decreasing model inference time by 60%. This is accomplished using a distillation method (Hinton et al., 2015) in which a new, smaller \"student\" network is trained to reproduce the behavior of a pretrained \"teacher\" network. Once the smaller model is pretrained, interacting with it for the purposes of fine-tuning is identical to interacting with BERT directly. DistilBERT is intended for use cases where compute resources are a constraint, sacrificing a small amount of accuracy for a drastic shrinking of network size. Because of this intended use case, we only present results for DistilBERT with the \"1-cycle\" learning rate policy, which is drastically faster to fine-tune.",
"cite_spans": [
{
"start": 85,
"end": 108,
"text": "(Strubell et al., 2019)",
"ref_id": "BIBREF64"
},
{
"start": 231,
"end": 249,
"text": "Sanh et al. (2019)",
"ref_id": "BIBREF59"
},
{
"start": 482,
"end": 503,
"text": "(Hinton et al., 2015)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DistilBERT",
"sec_num": "3.5"
},
{
"text": "To test the overall impact of fine-tuning in the AES domain, we use five datasets from the ASAP competition, jointly hosted by the Hewlett Foundation and Kaggle.com (Shermis, 2014) . This set of essay prompts was the subject of intense public attention and scrutiny in 2012 and its public release has shaped the discourse on AES ever since. For our experiments, we use the original, deanonymized data from Shermis and Hamner (2012); an anonymized version of these datasets is available online. In all cases, human inter-rater reliability (IRR) is an approximate upper bound on performance, but reliability above human IRR is possible, as all models are trained on resolved scores that represent two scores plus a resolution process for disagreements between annotators.",
"cite_spans": [
{
"start": 165,
"end": 180,
"text": "(Shermis, 2014)",
"ref_id": "BIBREF61"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We focus our analysis on the five datasets that most closely resemble standard AES rubrics, discarding the three datasets (prompts #1, 7, and 8) with scales of 10 or more possible points. Results on these datasets are not representative of overall performance and can skew reported results due to rubric idiosyncrasies, making comparison to other published work impossible (see for example (Alikaniotis et al., 2016) , which groups 60-point and 4-point rubrics into a single dataset and therefore produced correlations that cannot be aligned to results from any other published work). Prompts 2-6 are scored on smaller rubric scales with 4-6 points, and are thus generalizable to more AES contexts. Nevertheless, each dataset has its own idiosyncrasies; for instance, essays in dataset #5 were written by younger students in 7th and 8th grade, while dataset #4 contains writing from high school seniors; datasets #3 and 4 were responses to specific texts while others were open-ended; and dataset #2 was actually scored on two separate traits, the second of which is often discarded in follow-up work (as it is here). Our work here does not specifically isolate effects of these differences that would lead to discrepancies in performance or in modeling behavior.",
"cite_spans": [
{
"start": 385,
"end": 411,
"text": "(Alikaniotis et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "For measuring reliability of automated assessments, we use a variant of Cohen's \u03ba, with quadratic weights for \"near-miss\" predictions on an ordinal scale (QWK). This metric is standard in the AES community (Williamson et al., 2012) . High-stakes testing organizations differ on exact cutoffs for acceptable performance, but threshold values between 0.6 and 0.8 QWK are typically used as a floor for testing purposes; human reliability below this threshold is generally not fit for summative student assessment.",
"cite_spans": [
{
"start": 206,
"end": 231,
"text": "(Williamson et al., 2012)",
"ref_id": "BIBREF72"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics and Baselines",
"sec_num": "4.1"
},
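QWK can be computed with Scikit-learn's cohen_kappa_score using quadratic weights. The score pairs below are invented, chosen to show that near-misses on the ordinal scale are penalized less than they would be under unweighted Cohen's kappa:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical human and model scores on a 4-point ordinal rubric.
human = [1, 2, 2, 3, 4, 4, 3, 2]
model = [1, 2, 3, 3, 4, 3, 3, 2]  # six exact matches, two off-by-one misses

qwk = cohen_kappa_score(human, model, weights="quadratic")
plain = cohen_kappa_score(human, model)  # unweighted Cohen's kappa
```

Because both disagreements here are off by only one score point, the quadratically weighted kappa is noticeably higher than the unweighted value.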
{
"text": "In addition to measuring reliability, we also measure training and prediction time, in seconds. As this work seeks to evaluate the practical tradeoffs of the move to deep neural methods, this is an important secondary metric. For all experiments, training was performed on Google Colab Pro cloud servers with 32 GB of RAM and an NVIDIA Tesla P100 GPU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics and Baselines",
"sec_num": "4.1"
},
{
"text": "We compare the results of BERT against several previously published benchmarks and results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics and Baselines",
"sec_num": "4.1"
},
{
"text": "\u2022 Human IRR as initially reported in the Hewlett Foundation study (Shermis, 2014).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics and Baselines",
"sec_num": "4.1"
},
{
"text": "\u2022 Industry best performance, as reported by eight commercial vendors and one open-source research team in the initial release of the ASAP study (Shermis, 2014).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics and Baselines",
"sec_num": "4.1"
},
{
"text": "\u2022 An early deep learning approach using a combination CNN+LSTM architecture that outperformed most reported results at that time (Taghipour and Ng, 2016) .",
"cite_spans": [
{
"start": 129,
"end": 153,
"text": "(Taghipour and Ng, 2016)",
"ref_id": "BIBREF66"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics and Baselines",
"sec_num": "4.1"
},
{
"text": "\u2022 Two recent results using traditional non-neural models: Woods et al. (2017), which uses n-gram features in an ordinal logistic regression, and Cozma et al. (2018) , which uses a mix of string kernels and word2vec embeddings in a support vector regression.",
"cite_spans": [
{
"start": 142,
"end": 161,
"text": "Cozma et al. (2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics and Baselines",
"sec_num": "4.1"
},
{
"text": "\u2022 Rodriguez et al. (2019) , the one previouslypublished work that attempts AES with a variety of pretrained neural models, including BERT and the similar XLNet , with numerous alternate configurations and training methods. We report their result with a baseline BERT fine-tuning process, as well as their best-tuned model after extensive optimization.",
"cite_spans": [
{
"start": 2,
"end": 25,
"text": "Rodriguez et al. (2019)",
"ref_id": "BIBREF57"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics and Baselines",
"sec_num": "4.1"
},
{
"text": "Following past publications, we train a separate model on each dataset, and evaluate all dataset-specific models using 5-fold cross-validation. Each of the five datasets contains approximately 1,800 essays, resulting in folds of 360 essays each. Additionally, for measuring loss when fine-tuning BERT, we hold out an additional 20% of each training fold as a validation set, meaning that each fold has approximately 1,150 essays used for training and 300 essays used for validation. We report mean QWK across the five folds. For measurement of training and prediction time, we report the sum of training time across all five folds and all datasets. For slow-running feature extraction, like N-gram part-of-speech features and word embedding-based features, we tag each sentence in the dataset only once and cache the results, rather than re-tagging each sentence on each fold. Finally, for models where distinguishing extraction from training time is meaningful, we present those times separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.2"
},
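The splitting scheme can be sketched with Scikit-learn; the sizes match the description above for a 1,800-essay prompt (360-essay test folds, then 20% of each training fold held out for validation). The random seeds are arbitrary.

```python
import numpy as np
from sklearn.model_selection import KFold, train_test_split

essays = np.arange(1800)  # stand-in indices for ~1,800 essays in one prompt

kf = KFold(n_splits=5, shuffle=True, random_state=0)
sizes = []
for train_idx, test_idx in kf.split(essays):
    # Hold out 20% of the training fold as a validation set for fine-tuning.
    tr, val = train_test_split(train_idx, test_size=0.2, random_state=0)
    sizes.append((len(tr), len(val), len(test_idx)))
```

Each fold yields 1,152 training essays, 288 validation essays, and 360 test essays, consistent with the approximate counts reported above.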
{
"text": "Our primary results are presented in Table 1 . We find, broadly, that all approaches to machine learning replicate human-level IRR as measured by QWK. Nearly eight years after the publication of the original study, no published results have exceeded vendor performance on three of the five prompt datasets; in all cases, a naive N-gram approach underperforms the state-of-the-art in industry and academia by 0.03-0.06 QWK.",
"cite_spans": [],
"ref_spans": [
{
"start": 37,
"end": 44,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Accuracy Evaluation",
"sec_num": "5.1"
},
{
"text": "Of particular note is the low performance of GloVe embeddings relative to either neural or N-gram representations. This is surprising: while word embeddings are less popular now than deep neural methods, they still perform well on a wide range of tasks (Baroni et al., 2014) . Few publications have noted this negative result for GloVe in the AES domain; only Dong et al. (2017) uses GloVe as the primary representation of ASAP texts in an LSTM model, reporting lower QWK results than any baseline we presented here. One simple explanation for this may be that individual keywords matter a great deal for model performance. It is well established that vocabulary-based approaches are effective in AES tasks (Higgins et al., 2014) and the lack of access to specific word-based features may hinder semantic vector representation. Indeed, only one competitive recent paper on AES uses non-contextual word vectors: Cozma et al. (2018) . In this implementation, they do use word2vec, but rather than use word embeddings directly they first cluster words into a set of 500 \"embedding clusters.\" Words that appear in texts are then counted in the feature vector as the centroid of that cluster -in effect, creating a 500-dimensional bag-of-words model.",
"cite_spans": [
{
"start": 252,
"end": 273,
"text": "(Baroni et al., 2014)",
"ref_id": "BIBREF5"
},
{
"start": 359,
"end": 377,
"text": "Dong et al. (2017)",
"ref_id": "BIBREF12"
},
{
"start": 705,
"end": 727,
"text": "(Higgins et al., 2014)",
"ref_id": "BIBREF20"
},
{
"start": 909,
"end": 928,
"text": "Cozma et al. (2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy Evaluation",
"sec_num": "5.1"
},
{
"text": "Our results would suggest that fine-tuning with BERT also reaches approximately the same level of performance. Rodriguez et al. (2019) demonstrate that it is, in fact, possible to improve the performance of neural models to more closely approach (but not exceed) the state-of-the-art. Sophisticated approaches like gradual unfreezing, discriminative fine-tuning, or greater parameterization through newer deep learning models in their work consistently produce improvements of 0.01-0.02 QWK compared to the default BERT implementation. But this result emphasizes our concern: we do not claim our results are the best that could be achieved with BERT fine-tuning. We are, in fact, confident that they can be improved through optimization. What the results demonstrate instead is that the ceiling of results for AES tasks lessens the value of that intensive optimization effort.",
"cite_spans": [
{
"start": 95,
"end": 118,
"text": "Rodriguez et al. (2019)",
"ref_id": "BIBREF57"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy Evaluation",
"sec_num": "5.1"
},
{
"text": "Our secondary evaluation of models is based on training time and resource usage; those results are reported in Table 2 . Here, we see that deep learning approaches on GPU-enabled cloud compute produce an approximately 30-100-fold increase in end-to-end training time compared to a naive approach. In fact, this understates the gap, as approximately 75% of feature extraction and model training time in the naive approach is due to part-of-speech tagging rather than learning. Using BERT features as inputs to a linear classifier is an interesting compromise option, producing slightly lower performance on these datasets but with only a 2x slowdown at training time, all in feature extraction, and potentially retaining some of the semantic knowledge of the full BERT model. Further investigation should test whether adding features from intermediate layers, as explored in Peters et al. (2019), is merited for AES. We can look at this gap in training runtime more closely in Figure 2 . Essays in the prompt 2 dataset are longer persuasive essays and are on average 378 words long, while datasets 3-6 correspond to shorter, source-based content knowledge prompts and are on average 98-152 words long. The need for truncation in dataset #2 for BERT, but not for other approaches, may explain the underperformance of the model in that dataset. Additionally, differences across datasets highlight two key considerations for fine-tuning a BERT model:",
"cite_spans": [],
"ref_spans": [
{
"start": 111,
"end": 118,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 978,
"end": 986,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Runtime Evaluation",
"sec_num": "5.2"
},
{
"text": "\u2022 Training time increases linearly with number of epochs and with average document length. As seen in Figure 2 , this leads to a longer training for the longer essays of dataset 2, nearly as long as the other datasets combined.",
"cite_spans": [],
"ref_spans": [
{
"start": 102,
"end": 110,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Runtime Evaluation",
"sec_num": "5.2"
},
{
"text": "\u2022 Performance converges on human inter-rater reliability more quickly for short contentbased prompts, and performance begins to decrease due to overfitting in as few as 4 epochs. By comparison, in the longer, persuasive arguments of dataset 2, very small performance gains on held-out data continued even at the end of our experiments. improvements both in fine-tuning and at prediction time, relative to the baseline BERT model: training time was reduced by 33% and prediction time was reduced by 44%. This still represents at least a 20x increase in runtime relative to N -gram baselines both for training and prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Runtime Evaluation",
"sec_num": "5.2"
},
{
"text": "For scoring essays with reliably scored, promptspecific training sets, both classical and neural approaches produce similar reliability, at approximately identical levels to human inter-rater reliability. There is a substantial increase in technical overhead required to implement Transformers and fine-tune them to reach this performance, with minimal gain compared to baselines. The policy lesson for NLP researchers is that using deep learning for scoring alone is unlikely to be justifiable, given the slowdowns at both training and inference time, and the additional hardware requirements. For scoring, at least, Transformer architectures are a hammer in search of a nail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "But it's hardly the case that automated writing evaluation is limited to scoring. In this section we cover major open topics for technical researchers in AES to explore, focusing on areas where neural models have proven strengths above baselines in other domains. We prioritize three major areas: domain transfer, style, and fairness. In each we cite specific past work that indicates a plausible path forward for research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "A major challenge in AES is the inability of prompt-specific models to generalize to new essay topics (Attali et al., 2010; Lee, 2016) . Collection of new prompt-specific training sets, with reliable scores, continues to be one of the major stumbling blocks to expansion of AES systems in curricula (Woods et al., 2017) . Relatively few researchers have made progress on generic essay scoring: Phandi et al. (2015) introduces a Bayesian regression approach that extracts N -gram features then capitalizes on correlated features across prompts. Jin et al. (2018) shows promising prompt-independent results using an LSTM architecture with surface and part-of-speech N -gram inputs, underperforming prompt-specific models by relatively small margins across all ASAP datasets. But in implementations, much of the work of practitioners is based on workarounds for prompt-specific models; Wilson et al. 2019, for instance, describes psychometric techniques for measuring generic writing ability across a small sample of known prompts.",
"cite_spans": [
{
"start": 102,
"end": 123,
"text": "(Attali et al., 2010;",
"ref_id": "BIBREF2"
},
{
"start": 124,
"end": 134,
"text": "Lee, 2016)",
"ref_id": "BIBREF32"
},
{
"start": 299,
"end": 319,
"text": "(Woods et al., 2017)",
"ref_id": "BIBREF75"
},
{
"start": 394,
"end": 414,
"text": "Phandi et al. (2015)",
"ref_id": "BIBREF51"
},
{
"start": 544,
"end": 561,
"text": "Jin et al. (2018)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Transfer",
"sec_num": "6.1"
},
{
"text": "While Transformers are sensitive to the data they were pretrained on, they are well-suited to transfer tasks in mostly unseen domains, as evidenced by part-of-speech tagging for historical texts (Han and Eisenstein, 2019) , sentiment classification on out-of-domain reviews (Myagmar et al., 2019) , and question answering in new contexts (Houlsby et al., 2019) . This last result is promising for content-based short essay prompts, in particular. Our field's open challenge in scoring is to train AES models that can meaningfully evaluate short response texts for correctness based on world knowledge and domain transfer, rather than memorizing the vocabulary of correct, indomain answers. Promising early results show that relevant world knowledge is already embedded in BERT's pretrained model (Petroni et al., 2019) . This means that BERT opens up a potentially tractable path to success that was simply not possible with N -gram models.",
"cite_spans": [
{
"start": 195,
"end": 221,
"text": "(Han and Eisenstein, 2019)",
"ref_id": "BIBREF19"
},
{
"start": 274,
"end": 296,
"text": "(Myagmar et al., 2019)",
"ref_id": "BIBREF40"
},
{
"start": 338,
"end": 360,
"text": "(Houlsby et al., 2019)",
"ref_id": "BIBREF24"
},
{
"start": 796,
"end": 818,
"text": "(Petroni et al., 2019)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Transfer",
"sec_num": "6.1"
},
{
"text": "Skepticism toward AES in the classroom comes from rhetoric and composition scholars, who ex-press concerns about its role in writing pedagogy (NCTE, 2013; Warner, 2018) . Indeed, the relatively \"solved\" nature of summative scoring that we highlight here is of particular concern to these experts, noting the high correlation between scores and features like word count (Perelman, 2014) .",
"cite_spans": [
{
"start": 142,
"end": 154,
"text": "(NCTE, 2013;",
"ref_id": "BIBREF42"
},
{
"start": 155,
"end": 168,
"text": "Warner, 2018)",
"ref_id": "BIBREF71"
},
{
"start": 369,
"end": 385,
"text": "(Perelman, 2014)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Style and Voice",
"sec_num": "6.2"
},
{
"text": "Modern classroom use of AES beyond highstakes scoring, like Project Essay Grade (Wilson and Roscoe, 2019) or Turnitin Revision Assistant (Mayfield and Butler, 2018) , makes claims of supporting student agency and growth; here, adapting to writer individuality is a major current gap. Dixon-Rom\u00e1n et al. (2019) raises a host of questions about these topics specifically in the context of AES, asking how algorithmic intervention can produce strong writers rather than merely good essays: \"revision, as adjudicated by the platform, is [...] a re-direction toward the predetermined shape of the ideal written form [...] a puzzle-doer recursively consulting the image on the puzzlebox, not that of author returning to their words to make them more lucid, descriptive, or forceful.\"",
"cite_spans": [
{
"start": 80,
"end": 105,
"text": "(Wilson and Roscoe, 2019)",
"ref_id": "BIBREF74"
},
{
"start": 137,
"end": 164,
"text": "(Mayfield and Butler, 2018)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Style and Voice",
"sec_num": "6.2"
},
{
"text": "This critique is valid: research on machine translation, for instance, has shown that writer style is not preserved across languages (Rabinovich et al., 2017) . There is uncharted territory for AES to adapt to individual writer styles and give feedback based on individual writing rather than prompt-specific exemplars. Natural language understanding researchers now argue that \"...style is formed by a complex combination of different stylistic factors\" (Kang and Hovy, 2019) ; Stylespecific natural language generation has shown promise in other domains (Hu et al., 2017; Prabhumoye et al., 2018) and has been extended not just to individual preferences but also to overlapping identities based on attitudes like sentiment and personal attributes like gender (Subramanian et al.) . Early work suggests that style-specific models do see major improvements when shifting to high-dimensionality Transformer architectures (Keskar et al., 2019) . This topic bridges an important gap: for assessment, research has shown that \"authorial voice\" has measurable outcomes on writing impact (Matsuda and Tardy, 2007) , while individual expression is central to decades of pedagogy (Elbow, 1987) . Moving the field toward individual expression and away from prompt-specific datasets may be a path to lending legitimacy to AES, and Transformers may be the technical leap necessary to make these tasks work.",
"cite_spans": [
{
"start": 133,
"end": 158,
"text": "(Rabinovich et al., 2017)",
"ref_id": "BIBREF54"
},
{
"start": 455,
"end": 476,
"text": "(Kang and Hovy, 2019)",
"ref_id": "BIBREF29"
},
{
"start": 556,
"end": 573,
"text": "(Hu et al., 2017;",
"ref_id": "BIBREF25"
},
{
"start": 574,
"end": 598,
"text": "Prabhumoye et al., 2018)",
"ref_id": "BIBREF53"
},
{
"start": 761,
"end": 781,
"text": "(Subramanian et al.)",
"ref_id": null
},
{
"start": 920,
"end": 941,
"text": "(Keskar et al., 2019)",
"ref_id": "BIBREF30"
},
{
"start": 1081,
"end": 1106,
"text": "(Matsuda and Tardy, 2007)",
"ref_id": "BIBREF36"
},
{
"start": 1171,
"end": 1184,
"text": "(Elbow, 1987)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Style and Voice",
"sec_num": "6.2"
},
{
"text": "Years ago, researchers suggested that demographic bias is worth checking in AES systems (Williamson et al., 2012) . But years later, the field has primarily reported fairness experiments on simulated data, and shared toolkits for measuring bias, rather than results on real-world AES implementations or high-stakes data (Madnani et al., 2017; Loukina et al., 2019) .",
"cite_spans": [
{
"start": 88,
"end": 113,
"text": "(Williamson et al., 2012)",
"ref_id": "BIBREF72"
},
{
"start": 320,
"end": 342,
"text": "(Madnani et al., 2017;",
"ref_id": "BIBREF35"
},
{
"start": 343,
"end": 364,
"text": "Loukina et al., 2019)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fairness",
"sec_num": "6.3"
},
{
"text": "Prompted by social scientists (Noble, 2018), NLP researchers have seen a renaissance of fairness research based on the flaws in default implementations of Transformers (Bolukbasi et al., 2016; Zhao et al., 2017 Zhao et al., , 2018 . These works typicallly seek to reduce the amplification of bias in pretrained models, starting with easy-to-measure proof that demographic bias can be \"removed\" from word embedding spaces. But iterating on inputs to algorithmic classifiers -precisely the intended use case of formative eeedback for writers! -can reduce the efficacy of \" de-biasing\" (Liu et al., 2018; Dwork and Ilvento, 2019) . More recent research has shown that bias may simply be masked by these approaches, rather than resolved (Gonen and Goldberg, 2019) .",
"cite_spans": [
{
"start": 168,
"end": 192,
"text": "(Bolukbasi et al., 2016;",
"ref_id": "BIBREF6"
},
{
"start": 193,
"end": 210,
"text": "Zhao et al., 2017",
"ref_id": "BIBREF78"
},
{
"start": 211,
"end": 230,
"text": "Zhao et al., , 2018",
"ref_id": "BIBREF79"
},
{
"start": 571,
"end": 601,
"text": "de-biasing\" (Liu et al., 2018;",
"ref_id": null
},
{
"start": 602,
"end": 626,
"text": "Dwork and Ilvento, 2019)",
"ref_id": "BIBREF13"
},
{
"start": 733,
"end": 759,
"text": "(Gonen and Goldberg, 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fairness",
"sec_num": "6.3"
},
{
"text": "What these questions offer, though, is a wellspring of new and innovative technical research. Developers of learning analytics software, including AES, are currently encouraged to focus on scalable experimental evidence of efficacy for learning outcomes (Saxberg, 2017) , rather than focus on specific racial or gender bias, or other equity outcomes that are more difficult to achieve through engineering. But Transformer architectures are nuanced enough to capture immense world knowledge, producing a rapid increase in explainability in NLP (Rogers et al., 2020) .",
"cite_spans": [
{
"start": 254,
"end": 269,
"text": "(Saxberg, 2017)",
"ref_id": "BIBREF60"
},
{
"start": 543,
"end": 564,
"text": "(Rogers et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fairness",
"sec_num": "6.3"
},
{
"text": "Meanwhile, in the field of learning analytics, a burgeoning new field of fairness studies are learning how to investigate these issues in algorithmic educational systems (Mayfield et al., 2019; Holstein and Doroudi, 2019) . Outside of technology applications but in writing assessment more broadly, fairness is also a rich topic with a history of literature to learn from (Poe and Elliot, 2019) . Researchers at the intersection of both these fields have an enormous open opportunity to better understand AES in the context fairness, using the latest tools not just to build reliable scoring but to advance social change.",
"cite_spans": [
{
"start": 170,
"end": 193,
"text": "(Mayfield et al., 2019;",
"ref_id": "BIBREF38"
},
{
"start": 194,
"end": 221,
"text": "Holstein and Doroudi, 2019)",
"ref_id": "BIBREF22"
},
{
"start": 372,
"end": 394,
"text": "(Poe and Elliot, 2019)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fairness",
"sec_num": "6.3"
},
{
"text": "https://www.kaggle.com/c/asap-aes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic text scoring using neural networks",
"authors": [
{
"first": "Dimitrios",
"middle": [],
"last": "Alikaniotis",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
},
{
"first": "Marek",
"middle": [],
"last": "Rei",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "715--725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dimitrios Alikaniotis, Helen Yannakoudakis, and Marek Rei. 2016. Automatic text scoring using neu- ral networks. In Proceedings of the Association for Computational Linguistics, pages 715-725.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Validity and reliability of automated essay scoring. Handbook of automated essay evaluation: Current applications and new directions",
"authors": [
{
"first": "Yigal",
"middle": [],
"last": "Attali",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yigal Attali. 2013. Validity and reliability of auto- mated essay scoring. Handbook of automated es- say evaluation: Current applications and new direc- tions, page 181.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Performance of a generic approach in automated essay scoring",
"authors": [
{
"first": "Yigal",
"middle": [],
"last": "Attali",
"suffix": ""
},
{
"first": "Brent",
"middle": [],
"last": "Bridgeman",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Trapani",
"suffix": ""
}
],
"year": 2010,
"venue": "The Journal of Technology, Learning and Assessment",
"volume": "10",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yigal Attali, Brent Bridgeman, and Catherine Trapani. 2010. Performance of a generic approach in auto- mated essay scoring. The Journal of Technology, Learning and Assessment, 10(3).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automated essay scoring with e-rater R v. 2.0",
"authors": [
{
"first": "Yigal",
"middle": [],
"last": "Attali",
"suffix": ""
},
{
"first": "Jill",
"middle": [],
"last": "Burstein",
"suffix": ""
}
],
"year": 2004,
"venue": "ETS Research Report Series",
"volume": "",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yigal Attali and Jill Burstein. 2004. Automated essay scoring with e-rater R v. 2.0. ETS Research Report Series, (2).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Represen- tations.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Germ\u00e1n",
"middle": [],
"last": "Kruszewski",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "238--247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni, Georgiana Dinu, and Germ\u00e1n Kruszewski. 2014. Don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the Association for Computational Linguistics, pages 238-247.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings",
"authors": [
{
"first": "Tolga",
"middle": [],
"last": "Bolukbasi",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "James",
"suffix": ""
},
{
"first": "Venkatesh",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Adam",
"middle": [
"T"
],
"last": "Saligrama",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kalai",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems",
"volume": "29",
"issue": "",
"pages": "4349--4357",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 4349-4357. Curran Associates, Inc.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Building e-rater R scoring models using machine learning methods",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "James",
"suffix": ""
},
{
"first": "Isaac",
"middle": [
"I"
],
"last": "Fife",
"suffix": ""
},
{
"first": "Andr\u00e9 A",
"middle": [],
"last": "Bejar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rupp",
"suffix": ""
}
],
"year": 2016,
"venue": "ETS Research Report Series",
"volume": "2016",
"issue": "1",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing Chen, James H Fife, Isaac I Bejar, and Andr\u00e9 A Rupp. 2016. Building e-rater R scoring models us- ing machine learning methods. ETS Research Re- port Series, 2016(1):1-12.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automated essay scoring with string kernels and word embeddings",
"authors": [
{
"first": "M\u0203d\u0203lina",
"middle": [],
"last": "Cozma",
"suffix": ""
},
{
"first": "Andrei",
"middle": [],
"last": "Butnaru",
"suffix": ""
},
{
"first": "Radu Tudor",
"middle": [],
"last": "Ionescu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M\u0203d\u0203lina Cozma, Andrei Butnaru, and Radu Tudor Ionescu. 2018. Automated essay scoring with string kernels and word embeddings. In Proceedings of the Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Transformer-xl: Attentive language models beyond a fixed-length context",
"authors": [
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive lan- guage models beyond a fixed-length context. In Pro- ceedings of the Association for Computational Lin- guistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the NAACL HLT Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In Proceedings of the NAACL HLT Conference.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The racializing forces of/in ai educational technologies. Learning, Media & Technology Special Issue on AI and Education: Critical Perspectives and Alternative Futures",
"authors": [
{
"first": "Ezekiel",
"middle": [],
"last": "Dixon-Rom\u00e1n",
"suffix": ""
},
{
"first": "T",
"middle": [
"Philip"
],
"last": "Nichols",
"suffix": ""
},
{
"first": "Ama",
"middle": [],
"last": "Nyame-Mensah",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ezekiel Dixon-Rom\u00e1n, T. Philip Nichols, and Ama Nyame-Mensah. 2019. The racializing forces of/in ai educational technologies. Learning, Media & Technology Special Issue on AI and Education: Crit- ical Perspectives and Alternative Futures.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Attentionbased recurrent convolutional neural network for automatic essay scoring",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "153--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Dong, Yue Zhang, and Jie Yang. 2017. Attention- based recurrent convolutional neural network for au- tomatic essay scoring. In Proceedings of the Confer- ence on Computational Natural Language Learning, pages 153-162.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Fairness under composition",
"authors": [
{
"first": "Cynthia",
"middle": [],
"last": "Dwork",
"suffix": ""
},
{
"first": "Christina",
"middle": [],
"last": "Ilvento",
"suffix": ""
}
],
"year": 2019,
"venue": "Innovations in Theoretical Computer Science Conference. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cynthia Dwork and Christina Ilvento. 2019. Fairness under composition. In Innovations in Theoretical Computer Science Conference. Schloss Dagstuhl- Leibniz-Zentrum fuer Informatik.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Closing my eyes as i speak: An argument for ignoring audience",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Elbow",
"suffix": ""
}
],
"year": 1987,
"venue": "College English",
"volume": "49",
"issue": "1",
"pages": "50--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Elbow. 1987. Closing my eyes as i speak: An argument for ignoring audience. College English, 49(1):50-69.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Liblinear: A library for large linear classification",
"authors": [
{
"first": "Kai-Wei",
"middle": [],
"last": "Rong-En Fan",
"suffix": ""
},
{
"first": "Cho-Jui",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Xiang-Rui",
"middle": [],
"last": "Hsieh",
"suffix": ""
},
{
"first": "Chih-Jen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Machine Learning Research",
"volume": "9",
"issue": "",
"pages": "1871--1874",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang- Rui Wang, and Chih-Jen Lin. 2008. Liblinear: A library for large linear classification. Journal of Ma- chine Learning Research, 9(Aug):1871-1874.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Towards enabling feedback on rhetorical structure with neural sequence models",
"authors": [
{
"first": "James",
"middle": [],
"last": "Fiacco",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Cotos",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [],
"last": "Ros\u00e9",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the International Conference on Learning Analytics & Knowledge",
"volume": "",
"issue": "",
"pages": "310--319",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Fiacco, Elena Cotos, and Carolyn Ros\u00e9. 2019. Towards enabling feedback on rhetorical structure with neural sequence models. In Proceedings of the International Conference on Learning Analytics & Knowledge, pages 310-319. ACM.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Supporting content-based feedback in on-line writing evaluation with lsa",
"authors": [
{
"first": "W",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Foltz",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Gilliam",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kendall",
"suffix": ""
}
],
"year": 2000,
"venue": "Interactive Learning Environments",
"volume": "8",
"issue": "2",
"pages": "111--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter W Foltz, Sara Gilliam, and Scott Kendall. 2000. Supporting content-based feedback in on-line writ- ing evaluation with lsa. Interactive Learning Envi- ronments, 8(2):111-127.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them",
"authors": [
{
"first": "Hila",
"middle": [],
"last": "Gonen",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gen- der biases in word embeddings but do not remove them. Proceedings of the Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Unsupervised domain adaptation of contextualized embeddings: A case study in early modern english",
"authors": [
{
"first": "Xiaochuang",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.02817"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaochuang Han and Jacob Eisenstein. 2019. Unsuper- vised domain adaptation of contextualized embed- dings: A case study in early modern english. arXiv preprint arXiv:1904.02817.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Is getting the right answer just about choosing the right words? the role of syntacticallyinformed features in short answer scoring",
"authors": [
{
"first": "Derrick",
"middle": [],
"last": "Higgins",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brew",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Ramon",
"middle": [],
"last": "Ziai",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Aoife",
"middle": [],
"last": "Cahill",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Flor",
"suffix": ""
},
{
"first": "Nitin",
"middle": [],
"last": "Madnani",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Blanchard",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1403.0801"
]
},
"num": null,
"urls": [],
"raw_text": "Derrick Higgins, Chris Brew, Michael Heilman, Ra- mon Ziai, Lei Chen, Aoife Cahill, Michael Flor, Nitin Madnani, Joel Tetreault, Daniel Blanchard, et al. 2014. Is getting the right answer just about choosing the right words? the role of syntactically- informed features in short answer scoring. arXiv preprint arXiv:1403.0801.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Distilling the knowledge in a neural network",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2015,
"venue": "stat",
"volume": "1050",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. stat, 1050:9.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Fairness and equity in learning analytics systems (fairlak). in",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Holstein",
"suffix": ""
},
{
"first": "Shayan",
"middle": [],
"last": "Doroudi",
"suffix": ""
}
],
"year": 2019,
"venue": "Companion Proceedings of the International Learning Analytics & Knowledge Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Holstein and Shayan Doroudi. 2019. Fairness and equity in learning analytics systems (fairlak). in. In Companion Proceedings of the International Learning Analytics & Knowledge Conference (LAK 2019).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Natural language understanding with bloom embeddings, convolutional neural networks and incremental parsing",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Honnibal",
"suffix": ""
},
{
"first": "Ines",
"middle": [],
"last": "Montani",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Honnibal and Ines Montani. 2017. spacy 2: Natural language understanding with bloom embed- dings, convolutional neural networks and incremen- tal parsing.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Parameter-efficient transfer learning for nlp",
"authors": [
{
"first": "Neil",
"middle": [],
"last": "Houlsby",
"suffix": ""
},
{
"first": "Andrei",
"middle": [],
"last": "Giurgiu",
"suffix": ""
},
{
"first": "Stanislaw",
"middle": [],
"last": "Jastrzebski",
"suffix": ""
},
{
"first": "Bruna",
"middle": [],
"last": "Morrone",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "De Laroussilhe",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Gesmundo",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Attariyan",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gelly",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "2790--2799",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In Proceedings of the International Conference on Machine Learning, pages 2790-2799.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Toward controlled generation of text",
"authors": [
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1587--1596",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward con- trolled generation of text. In Proceedings of the In- ternational Conference on Machine Learning, pages 1587-1596.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "What does bert learn about the structure of language",
"authors": [
{
"first": "Ganesh",
"middle": [],
"last": "Jawahar",
"suffix": ""
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Sagot",
"suffix": ""
},
{
"first": "Djam\u00e9",
"middle": [],
"last": "Seddah",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3651--3657",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ganesh Jawahar, Beno\u00eet Sagot, and Djam\u00e9 Seddah. 2019. What does bert learn about the structure of language? In Proceedings of the Association for Computational Linguistics, pages 3651-3657.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Tdnn: a two-stage deep neural network for promptindependent automated essay scoring",
"authors": [
{
"first": "Cancan",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Hui",
"suffix": ""
},
{
"first": "Le",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1088--1097",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cancan Jin, Ben He, Kai Hui, and Le Sun. 2018. Tdnn: a two-stage deep neural network for prompt- independent automated essay scoring. In Proceed- ings of the Association for Computational Linguis- tics, pages 1088-1097.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Speech and language processing",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "James",
"middle": [
"H"
],
"last": "Martin",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Jurafsky and James H Martin. 2014. Speech and language processing, volume 3. Pearson London.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "xslue: A benchmark and analysis platform for cross-style language understanding and evaluation",
"authors": [
{
"first": "Dongyeop",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.03663"
]
},
"num": null,
"urls": [],
"raw_text": "Dongyeop Kang and Eduard Hovy. 2019. xslue: A benchmark and analysis platform for cross-style lan- guage understanding and evaluation. arXiv preprint arXiv:1911.03663.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Ctrl: A conditional transformer language model for controllable generation",
"authors": [
{
"first": "Nitish Shirish",
"middle": [],
"last": "Keskar",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "McCann",
"suffix": ""
},
{
"first": "Lav",
"middle": [
"R"
],
"last": "Varshney",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.05858"
]
},
"num": null,
"urls": [],
"raw_text": "Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A conditional transformer language model for control- lable generation. arXiv preprint arXiv:1909.05858.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Revealing the dark secrets of bert",
"authors": [
{
"first": "Olga",
"middle": [],
"last": "Kovaleva",
"suffix": ""
},
{
"first": "Alexey",
"middle": [],
"last": "Romanov",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of Empirical Methods in Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "2465--2475",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark se- crets of bert. In Proceedings of Empirical Methods in Natural Language Processing, volume 1, pages 2465-2475.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Investigating the feasibility of generic scoring models of e-rater for toefl ibt independent writing tasks",
"authors": [
{
"first": "Yong-Won",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yong-Won Lee. 2016. Investigating the feasibility of generic scoring models of e-rater for toefl ibt inde- pendent writing tasks. English Language Teaching, 71.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Delayed impact of fair machine learning",
"authors": [
{
"first": "Lydia",
"middle": [
"T"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Dean",
"suffix": ""
},
{
"first": "Esther",
"middle": [],
"last": "Rolf",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Simchowitz",
"suffix": ""
},
{
"first": "Moritz",
"middle": [],
"last": "Hardt",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lydia T Liu, Sarah Dean, Esther Rolf, Max Sim- chowitz, and Moritz Hardt. 2018. Delayed impact of fair machine learning. In Proceedings of the In- ternational Conference on Machine Learning.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "The many dimensions of algorithmic fairness in educational applications",
"authors": [
{
"first": "Anastassia",
"middle": [],
"last": "Loukina",
"suffix": ""
},
{
"first": "Nitin",
"middle": [],
"last": "Madnani",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Zechner",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the ACL Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anastassia Loukina, Nitin Madnani, and Klaus Zech- ner. 2019. The many dimensions of algorithmic fair- ness in educational applications. In Proceedings of the ACL Workshop on Innovative Use of NLP for Building Educational Applications.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Building better open-source tools to support fairness in automated scoring",
"authors": [
{
"first": "Nitin",
"middle": [],
"last": "Madnani",
"suffix": ""
},
{
"first": "Anastassia",
"middle": [],
"last": "Loukina",
"suffix": ""
},
{
"first": "Alina",
"middle": [],
"last": "Von Davier",
"suffix": ""
},
{
"first": "Jill",
"middle": [],
"last": "Burstein",
"suffix": ""
},
{
"first": "Aoife",
"middle": [],
"last": "Cahill",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the ACL Workshop on Ethics in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "41--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitin Madnani, Anastassia Loukina, Alina von Davier, Jill Burstein, and Aoife Cahill. 2017. Building bet- ter open-source tools to support fairness in auto- mated scoring. In Proceedings of the ACL Workshop on Ethics in Natural Language Processing, pages 41-52.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Voice in academic writing: The rhetorical construction of author identity in blind manuscript review",
"authors": [
{
"first": "Paul Kei",
"middle": [],
"last": "Matsuda",
"suffix": ""
},
{
"first": "Christine",
"middle": [
"M"
],
"last": "Tardy",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "26",
"issue": "",
"pages": "235--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Kei Matsuda and Christine M Tardy. 2007. Voice in academic writing: The rhetorical construction of author identity in blind manuscript review. English for Specific Purposes, 26(2):235-249.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Districtwide implementations outperform isolated use of automated feedback in high school writing",
"authors": [
{
"first": "Elijah",
"middle": [],
"last": "Mayfield",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Butler",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the International Conference of the Learning Sciences (Industry and Commercial Track)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elijah Mayfield and Stephanie Butler. 2018. Dis- trictwide implementations outperform isolated use of automated feedback in high school writing. In Proceedings of the International Conference of the Learning Sciences (Industry and Commercial Track).",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Equity beyond bias in language technologies for education",
"authors": [
{
"first": "Elijah",
"middle": [],
"last": "Mayfield",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Madaio",
"suffix": ""
},
{
"first": "Shrimai",
"middle": [],
"last": "Prabhumoye",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Gerritsen",
"suffix": ""
},
{
"first": "Brittany",
"middle": [],
"last": "Mclaughlin",
"suffix": ""
},
{
"first": "Ezekiel",
"middle": [],
"last": "Dixon-Rom\u00e1n",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of ACL Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elijah Mayfield, Michael Madaio, Shrimai Prab- humoye, David Gerritsen, Brittany McLaughlin, Ezekiel Dixon-Rom\u00e1n, and Alan W Black. 2019. Equity beyond bias in language technologies for ed- ucation. In Proceedings of ACL Workshop on Inno- vative Use of NLP for Building Educational Appli- cations.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Essay assessment with latent semantic analysis",
"authors": [
{
"first": "Tristan",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Educational Computing Research",
"volume": "29",
"issue": "4",
"pages": "495--512",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tristan Miller. 2003. Essay assessment with latent se- mantic analysis. Journal of Educational Computing Research, 29(4):495-512.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Transferable high-level representations of bert for cross-domain sentiment classification",
"authors": [
{
"first": "Batsergelen",
"middle": [],
"last": "Myagmar",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shigetomo",
"middle": [],
"last": "Kimura",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings on the International Conference on Artificial Intelligence (ICAI)",
"volume": "",
"issue": "",
"pages": "135--141",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Batsergelen Myagmar, Jie Li, and Shigetomo Kimura. 2019. Transferable high-level representations of bert for cross-domain sentiment classification. In Proceedings on the International Conference on Ar- tificial Intelligence (ICAI), pages 135-141.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Automated essay scoring with discourse-aware neural models",
"authors": [
{
"first": "Farah",
"middle": [],
"last": "Nadeem",
"suffix": ""
},
{
"first": "Huy",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mari",
"middle": [],
"last": "Ostendorf",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the ACL Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "484--493",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Farah Nadeem, Huy Nguyen, Yang Liu, and Mari Ostendorf. 2019. Automated essay scoring with discourse-aware neural models. In Proceedings of the ACL Workshop on Innovative Use of NLP for Building Educational Applications, pages 484-493.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Position statement on machine scoring",
"authors": [
{
"first": "",
"middle": [],
"last": "Ncte",
"suffix": ""
}
],
"year": 2013,
"venue": "National Council of Teachers of English",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "NCTE. 2013. Position statement on machine scoring. National Council of Teachers of English. Accessed 2019-09-24.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "The conll-2014 shared task on grammatical error correction",
"authors": [
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Siew Mei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Hadiwinoto",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"Hendy"
],
"last": "Susanto",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Bryant",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Conference on Computational Natural Language Learning: Shared Task",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christo- pher Bryant. 2014. The conll-2014 shared task on grammatical error correction. In Proceedings of the Conference on Computational Natural Lan- guage Learning: Shared Task, pages 1-14.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Argument mining for improving the automated scoring of persuasive essays",
"authors": [
{
"first": "Huy",
"middle": [
"V"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Diane",
"middle": [
"J"
],
"last": "Litman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huy V Nguyen and Diane J Litman. 2018. Argument mining for improving the automated scoring of per- suasive essays. In Proceedings of the AAAI Confer- ence on Artificial Intelligence.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Algorithms of oppression: How search engines reinforce racism",
"authors": [
{
"first": "Safiya Umoja",
"middle": [],
"last": "Noble",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Safiya Umoja Noble. 2018. Algorithms of oppression: How search engines reinforce racism. NYU Press.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Scikit-learn: Machine learning in python",
"authors": [
{
"first": "Fabian",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Bertrand",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Dubourg",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gram- fort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in python. Journal of Machine Learning Research, 12(Oct):2825-2830.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of Empirical Meth- ods in Natural Language Processing, pages 1532- 1543.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "When \"the state of the art\" is counting words",
"authors": [
{
"first": "Les",
"middle": [],
"last": "Perelman",
"suffix": ""
}
],
"year": 2014,
"venue": "Assessing Writing",
"volume": "21",
"issue": "",
"pages": "104--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Les Perelman. 2014. When \"the state of the art\" is counting words. Assessing Writing, 21:104-111.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "To tune or not to tune? adapting pretrained representations to diverse tasks",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "7--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Sebastian Ruder, and Noah A Smith. 2019. To tune or not to tune? adapting pretrained representations to diverse tasks. In Proceedings of the Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 7-14.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Language models as knowledge bases?",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Anton",
"middle": [],
"last": "Bakhtin",
"suffix": ""
},
{
"first": "Yuxiang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2463--2473",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabio Petroni, Tim Rockt\u00e4schel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of Empirical Methods in Natural Language Processing, pages 2463-2473.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Flexible domain adaptation for automated essay scoring using correlated linear regression",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Phandi",
"suffix": ""
},
{
"first": "Kian Ming",
"middle": [
"A"
],
"last": "Chai",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "431--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Phandi, Kian Ming A Chai, and Hwee Tou Ng. 2015. Flexible domain adaptation for automated es- say scoring using correlated linear regression. In Proceedings of Empirical Methods in Natural Lan- guage Processing, pages 431-439.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Evidence of fairness: Twenty-five years of research in assessing writing",
"authors": [
{
"first": "Mya",
"middle": [],
"last": "Poe",
"suffix": ""
},
{
"first": "Norbert",
"middle": [],
"last": "Elliot",
"suffix": ""
}
],
"year": 2019,
"venue": "Assessing Writing",
"volume": "42",
"issue": "",
"pages": "100418",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mya Poe and Norbert Elliot. 2019. Evidence of fair- ness: Twenty-five years of research in assessing writing. Assessing Writing, 42:100418.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Style transfer through back-translation",
"authors": [
{
"first": "Shrimai",
"middle": [],
"last": "Prabhumoye",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "866--876",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, and Alan W Black. 2018. Style transfer through back-translation. In Proceedings of the Association for Computational Linguistics, pages 866-876.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Personalized machine translation: Preserving original author traits",
"authors": [
{
"first": "Ella",
"middle": [],
"last": "Rabinovich",
"suffix": ""
},
{
"first": "Raj",
"middle": [
"Nath"
],
"last": "Patel",
"suffix": ""
},
{
"first": "Shachar",
"middle": [],
"last": "Mirkin",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Shuly",
"middle": [],
"last": "Wintner",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1074--1084",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ella Rabinovich, Raj Nath Patel, Shachar Mirkin, Lu- cia Specia, and Shuly Wintner. 2017. Personal- ized machine translation: Preserving original au- thor traits. In Proceedings of the European Chap- ter of the Association for Computational Linguistics, pages 1074-1084.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "A review of rubric use in higher education",
"authors": [
{
"first": "Y Malini",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Heidi",
"middle": [],
"last": "Andrade",
"suffix": ""
}
],
"year": 2010,
"venue": "Assessment & Evaluation in Higher Education",
"volume": "35",
"issue": "",
"pages": "435--448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y Malini Reddy and Heidi Andrade. 2010. A review of rubric use in higher education. Assessment & evalu- ation in higher education, 35(4):435-448.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "How to account for mispellings: Quantifying the benefit of character representations in neural content scoring models",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Riordan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Flor",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Pugh",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the ACL Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "116--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian Riordan, Michael Flor, and Robert Pugh. 2019. How to account for mispellings: Quantifying the benefit of character representations in neural content scoring models. In Proceedings of the ACL Work- shop on Innovative Use of NLP for Building Educa- tional Applications, pages 116-126.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Language models and automated essay scoring",
"authors": [
{
"first": "Pedro",
"middle": [
"Uria"
],
"last": "Rodriguez",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Jafari",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"M"
],
"last": "Ormerod",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.09482"
]
},
"num": null,
"urls": [],
"raw_text": "Pedro Uria Rodriguez, Amir Jafari, and Christopher M Ormerod. 2019. Language models and automated essay scoring. arXiv preprint arXiv:1909.09482.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "A primer in bertology: What we know about how bert works",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Kovaleva",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.12327"
]
},
"num": null,
"urls": [],
"raw_text": "Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in bertology: What we know about how bert works. arXiv preprint arXiv:2002.12327.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.01108"
]
},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Learning engineering: the art of applying learning science at scale",
"authors": [
{
"first": "Bror",
"middle": [],
"last": "Saxberg",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the ACM Conference on Learning@ Scale",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bror Saxberg. 2017. Learning engineering: the art of applying learning science at scale. In Proceedings of the ACM Conference on Learning@ Scale. ACM.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "State-of-the-art automated essay scoring: Competition, results, and future directions from a united states demonstration",
"authors": [
{
"first": "Mark",
"middle": [
"D"
],
"last": "Shermis",
"suffix": ""
}
],
"year": 2014,
"venue": "Assessing Writing",
"volume": "20",
"issue": "",
"pages": "53--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark D Shermis. 2014. State-of-the-art automated es- say scoring: Competition, results, and future direc- tions from a united states demonstration. Assessing Writing, 20:53-76.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Contrasting state-of-the-art automated scoring of essays: Analysis",
"authors": [
{
"first": "Mark",
"middle": [
"D"
],
"last": "Shermis",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Hamner",
"suffix": ""
}
],
"year": 2012,
"venue": "Annual national council on measurement in education meeting",
"volume": "",
"issue": "",
"pages": "14--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark D Shermis and Ben Hamner. 2012. Contrasting state-of-the-art automated scoring of essays: Analy- sis. In Annual national council on measurement in education meeting, pages 14-16.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "A disciplined approach to neural network hyper-parameters: Part 1-learning rate, batch size, momentum, and weight decay",
"authors": [
{
"first": "Leslie",
"middle": [
"N"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.09820"
]
},
"num": null,
"urls": [],
"raw_text": "Leslie N Smith. 2018. A disciplined approach to neu- ral network hyper-parameters: Part 1-learning rate, batch size, momentum, and weight decay. arXiv preprint arXiv:1803.09820.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "Energy and policy considerations for deep learning in nlp",
"authors": [
{
"first": "Emma",
"middle": [],
"last": "Strubell",
"suffix": ""
},
{
"first": "Ananya",
"middle": [],
"last": "Ganesh",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emma Strubell, Ananya Ganesh, and Andrew McCal- lum. 2019. Energy and policy considerations for deep learning in nlp. Proceedings of the Association for Computational Linguistics.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "Multiple-attribute text style transfer",
"authors": [
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"Michael"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Y-Lan",
"middle": [],
"last": "Boureau",
"suffix": ""
}
],
"year": null,
"venue": "Age",
"volume": "18",
"issue": "24",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandeep Subramanian, Guillaume Lample, Eric Michael Smith, Ludovic Denoyer, Marc'Aurelio Ranzato, and Y-Lan Boureau. Multiple-attribute text style transfer. Age, 18(24):65.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "A neural approach to automated essay scoring",
"authors": [
{
"first": "Kaveh",
"middle": [],
"last": "Taghipour",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1882--1891",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaveh Taghipour and Hwee Tou Ng. 2016. A neu- ral approach to automated essay scoring. In Pro- ceedings of Empirical Methods in Natural Language Processing, pages 1882-1891.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "BERT rediscovers the classical NLP pipeline",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. Bert rediscovers the classical nlp pipeline. In Pro- ceedings of the Association for Computational Lin- guistics.",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "Automated assessment of nonnative learner essays: Investigating the role of linguistic features",
"authors": [
{
"first": "Sowmya",
"middle": [],
"last": "Vajjala",
"suffix": ""
}
],
"year": 2018,
"venue": "International Journal of Artificial Intelligence in Education",
"volume": "28",
"issue": "1",
"pages": "79--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sowmya Vajjala. 2018. Automated assessment of non- native learner essays: Investigating the role of lin- guistic features. International Journal of Artificial Intelligence in Education, 28(1):79-105.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF70": {
"ref_id": "b70",
"title": "Coherence modeling for the automated assessment of spontaneous spoken responses",
"authors": [
{
"first": "Xinhao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Keelan",
"middle": [],
"last": "Evanini",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Zechner",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the North American chapter of the Association for Computational Linguistics: Human language technologies",
"volume": "",
"issue": "",
"pages": "814--819",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinhao Wang, Keelan Evanini, and Klaus Zechner. 2013. Coherence modeling for the automated as- sessment of spontaneous spoken responses. In Pro- ceedings of the North American chapter of the Asso- ciation for Computational Linguistics: Human lan- guage technologies, pages 814-819.",
"links": null
},
"BIBREF71": {
"ref_id": "b71",
"title": "Why They Can't Write: Killing the Five-Paragraph Essay and Other Necessities",
"authors": [
{
"first": "John",
"middle": [],
"last": "Warner",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Warner. 2018. Why They Can't Write: Killing the Five-Paragraph Essay and Other Necessities. JHU Press.",
"links": null
},
"BIBREF72": {
"ref_id": "b72",
"title": "A framework for evaluation and use of automated scoring",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Williamson",
"suffix": ""
},
{
"first": "Xiaoming",
"middle": [],
"last": "Xi",
"suffix": ""
},
{
"first": "F",
"middle": [
"Jay"
],
"last": "Breyer",
"suffix": ""
}
],
"year": 2012,
"venue": "Educational Measurement: Issues and Practice",
"volume": "31",
"issue": "",
"pages": "2--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M Williamson, Xiaoming Xi, and F Jay Breyer. 2012. A framework for evaluation and use of au- tomated scoring. Educational measurement: issues and practice, 31(1):2-13.",
"links": null
},
"BIBREF73": {
"ref_id": "b73",
"title": "Generalizability of automated scores of writing quality in grades 3-5",
"authors": [
{
"first": "Joshua",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Dandan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Micheal",
"middle": [
"P"
],
"last": "Sandbank",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Hebert",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Educational Psychology",
"volume": "111",
"issue": "4",
"pages": "619",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshua Wilson, Dandan Chen, Micheal P Sandbank, and Michael Hebert. 2019. Generalizability of auto- mated scores of writing quality in grades 3-5. Jour- nal of Educational Psychology, 111(4):619.",
"links": null
},
"BIBREF74": {
"ref_id": "b74",
"title": "Automated writing evaluation and feedback: Multiple metrics of efficacy",
"authors": [
{
"first": "Joshua",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Rod",
"middle": [
"D"
],
"last": "Roscoe",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Educational Computing Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshua Wilson and Rod D Roscoe. 2019. Automated writing evaluation and feedback: Multiple metrics of efficacy. Journal of Educational Computing Re- search.",
"links": null
},
"BIBREF75": {
"ref_id": "b75",
"title": "Formative essay feedback using predictive scoring models",
"authors": [
{
"first": "Bronwyn",
"middle": [],
"last": "Woods",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Adamson",
"suffix": ""
},
{
"first": "Shayne",
"middle": [],
"last": "Miel",
"suffix": ""
},
{
"first": "Elijah",
"middle": [],
"last": "Mayfield",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACM Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "2071--2080",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bronwyn Woods, David Adamson, Shayne Miel, and Elijah Mayfield. 2017. Formative essay feedback using predictive scoring models. In Proceedings of ACM Conference on Knowledge Discovery and Data Mining, pages 2071-2080. ACM.",
"links": null
},
"BIBREF76": {
"ref_id": "b76",
"title": "XLNet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Russ",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5754--5764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural in- formation processing systems, pages 5754-5764.",
"links": null
},
"BIBREF77": {
"ref_id": "b77",
"title": "Modeling coherence in ESOL learner texts",
"authors": [
{
"first": "Helen",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the ACL Workshop on Building Educational Applications Using NLP",
"volume": "",
"issue": "",
"pages": "33--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helen Yannakoudakis and Ted Briscoe. 2012. Model- ing coherence in esol learner texts. In Proceedings of the ACL Workshop on Building Educational Ap- plications Using NLP, pages 33-43. Association for Computational Linguistics.",
"links": null
},
"BIBREF78": {
"ref_id": "b78",
"title": "Men also like shopping: Reducing gender bias amplification using corpus-level constraints",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tianlu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2979--2989",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of Empiri- cal Methods in Natural Language Processing, pages 2979-2989.",
"links": null
},
"BIBREF79": {
"ref_id": "b79",
"title": "Learning gender-neutral word embeddings",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yichao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zeyu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4847--4853",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai- Wei Chang. 2018. Learning gender-neutral word embeddings. In Proceedings of Empirical Methods in Natural Language Processing, pages 4847-4853.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "Illustration of cyclical (top), two-period cyclical (middle, log y-scale), and 1-cycle (bottom) learning rate curricula over N epochs.",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "also presents results for DistilBERT. Our work verifies prior published claims of speed",
"uris": null
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"text": "QWK (top) and training time (bottom, in seconds) for 5-fold cross-validation of 1-cycle neural fine-tuning on ASAP datasets 2-6, for BERT (left) and DistilBERT (right).",
"uris": null
},
"TABREF0": {
"content": "<table><tr><td>Model</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td></tr><tr><td>Human IRR</td><td colspan=\"5\">.80 .77 .85 .74 .74</td></tr><tr><td>Hewlett</td><td colspan=\"5\">.74 .75 .82 .83 .78</td></tr><tr><td>Taghipour</td><td colspan=\"5\">.69 .69 .81 .81 .82</td></tr><tr><td>Woods</td><td colspan=\"5\">.71 .71 .81 .82 .83</td></tr><tr><td>Cozma</td><td colspan=\"5\">.73 .68 .83 .83 .83</td></tr><tr><td>Rodriguez (BERT)</td><td colspan=\"5\">.68 .72 .80 .81 .81</td></tr><tr><td>Rodriguez (best)</td><td colspan=\"5\">.70 .72 .82 .82 .82</td></tr><tr><td>N-Grams</td><td colspan=\"5\">.71 .71 .78 .80 .79</td></tr><tr><td>Embeddings</td><td colspan=\"5\">.42 .41 .60 .49 .36</td></tr><tr><td>BERT-CLR</td><td colspan=\"5\">.66 .70 .80 .80 .79</td></tr><tr><td>BERT-1CYC</td><td colspan=\"5\">.64 .71 .82 .81 .79</td></tr><tr><td>BERT Features</td><td colspan=\"5\">.61 .59 .75 .75 .74</td></tr><tr><td>DistilBERT</td><td colspan=\"5\">.65 .70 .82 .81 .79</td></tr><tr><td>N-Gram Gap</td><td colspan=\"5\">-.05 .00 .04 .01 .00</td></tr></table>",
"text": "Performance on each of ASAP datasets 2-6, in QWK. The final row shows the gap in QWK between the best neural model and the N-gram baseline.",
"html": null,
"type_str": "table",
"num": null
},
"TABREF1": {
"content": "<table><tr><td>Model</td><td>F</td><td>T</td><td>P</td><td>Total</td></tr><tr><td>Embeddings</td><td>93</td><td>6</td><td>1</td><td>100</td></tr><tr><td>N-Grams</td><td colspan=\"2\">82 27</td><td>2</td><td>111</td></tr><tr><td colspan=\"3\">BERT Features 213 10</td><td>1</td><td>224</td></tr><tr><td>DistilBERT</td><td colspan=\"2\">1,972</td><td colspan=\"2\">108 2,080</td></tr><tr><td>BERT-1CYC</td><td colspan=\"2\">2,956</td><td colspan=\"2\">192 3,148</td></tr><tr><td>BERT-CLR</td><td colspan=\"2\">11,309</td><td colspan=\"2\">210 11,519</td></tr></table>",
"text": "Cumulative experiment runtime, in seconds, of feature extraction (F), model training (T), and predicting on test sets (P), for ASAP datasets 2-6 with 5-fold cross-validation. Models with 1-cycle fine-tuning are measured at 5 epochs.",
"html": null,
"type_str": "table",
"num": null
}
}
}
}