{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:09:55.251955Z"
},
"title": "An empirical investigation of neural methods for content scoring of science explanations",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Riordan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ETS",
"location": {}
},
"email": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Bichler",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California-Berkeley",
"location": {}
},
"email": ""
},
{
"first": "Allison",
"middle": [],
"last": "Bradford",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California-Berkeley",
"location": {}
},
"email": ""
},
{
"first": "Jennifer",
"middle": [
"King"
],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California-Berkeley",
"location": {}
},
"email": ""
},
{
"first": "Korah",
"middle": [],
"last": "Wiley",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California-Berkeley",
"location": {}
},
"email": ""
},
{
"first": "Libby",
"middle": [],
"last": "Gerard",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California-Berkeley",
"location": {}
},
"email": ""
},
{
"first": "Marcia",
"middle": [
"C"
],
"last": "Linn",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California-Berkeley",
"location": {}
},
"email": ""
}
],
"year": "2020",
"venue": null,
"identifiers": {},
"abstract": "With the widespread adoption of the Next Generation Science Standards (NGSS), science teachers and online learning environments face the challenge of evaluating students' integration of different dimensions of science learning. Recent advances in representation learning in natural language processing have proven effective across many natural language processing tasks, but a rigorous evaluation of the relative merits of these methods for scoring complex constructed response formative assessments has not previously been carried out. We present a detailed empirical investigation of feature-based, recurrent neural network, and pre-trained transformer models on scoring content in real-world formative assessment data. We demonstrate that recent neural methods can rival or exceed the performance of feature-based methods. We also provide evidence that different classes of neural models take advantage of different learning cues, and pre-trained transformer models may be more robust to spurious, dataset-specific learning cues, better reflecting scoring rubrics.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "With the widespread adoption of the Next Generation Science Standards (NGSS), science teachers and online learning environments face the challenge of evaluating students' integration of different dimensions of science learning. Recent advances in representation learning in natural language processing have proven effective across many natural language processing tasks, but a rigorous evaluation of the relative merits of these methods for scoring complex constructed response formative assessments has not previously been carried out. We present a detailed empirical investigation of feature-based, recurrent neural network, and pre-trained transformer models on scoring content in real-world formative assessment data. We demonstrate that recent neural methods can rival or exceed the performance of feature-based methods. We also provide evidence that different classes of neural models take advantage of different learning cues, and pre-trained transformer models may be more robust to spurious, dataset-specific learning cues, better reflecting scoring rubrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The Next Generation Science Standards (NGSS) call for the integration of three dimensions of science learning: disciplinary core ideas (DCIs), cross-cutting concepts (CCCs), and science and engineering practices (SEPs) (NGSS Lead States, 2013) . Science teachers can promote knowledge integration of these dimensions using constructed response (CR) formative assessments to help their students build on productive ideas, fill in knowledge gaps, and reconcile conflicting ideas. However, the time burden associated with reading and scoring student responses to CR assessment items often leads to delays in evaluating student ideas. Such delays potentially make subsequent instructional interventions less impactful on student learn-ing. Effective automated methods to score student responses to NGSS-aligned CR assessment items hold the potential to allow teachers to provide instruction that addresses students' developing understandings in a more efficient and timely manner and can increase the amount of time teachers have to focus on classroom instruction and provide targeted student support.",
"cite_spans": [
{
"start": 219,
"end": 243,
"text": "(NGSS Lead States, 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this study, we describe a set of CR formative assessment items that call for students to express and integrate ideas across multiple dimensions of the NGSS. We collected student responses to each item in multiple middle school science classrooms and trained models to automatically score the content of responses with respect to a set of rubrics. This study explores the effectiveness of three classes of models for content scoring of science explanations with complex rubrics: feature-based models, recurrent neural networks, and pre-trained transformer networks. Specifically, we investigate the following questions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) What is the relative effectiveness of automated content scoring models from different model classes on scoring science explanations for both (a) holistic knowledge integration and (b) NGSS dimensions?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) Do highly accurate model classes capture similar or different aspects of scoring rubrics?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Methods",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We focus on constructed response (CR) items for formative assessments during science units for middle school students accessed via an online classroom system Linn et al., 2014) . In past research, items that assessed NGSS performance expectations (PEs) were scored with a single knowledge integration (KI) rubric (Liu et al., 2016) . KI involves a process of building on and strengthening science understanding by incorporating new ideas and sorting out alternative perspectives using evidence. The KI rubric used to score student short essays rewards students for linking evidence to claims and for adding multiple evidence-claim links to their explanations (Linn and Eylon, 2011) . In this study, we develop items that solicit student reasoning about two or more NGSS dimensions of DCIs, CCCs, and SEPs. We score each item for KI and NGSS \"subscores\" relating to the DCIs, CCCs, and practices.",
"cite_spans": [
{
"start": 158,
"end": 176,
"text": "Linn et al., 2014)",
"ref_id": "BIBREF12"
},
{
"start": 313,
"end": 331,
"text": "(Liu et al., 2016)",
"ref_id": "BIBREF13"
},
{
"start": 659,
"end": 681,
"text": "(Linn and Eylon, 2011)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2.1"
},
{
"text": "In this section we describe the design of the CR items that comprise the datasets for the content scoring models. The CR items formatively assess student understanding of multiple NGSS dimensions, namely, using SEPs while demonstrating integrated understanding of DCIs and CCCs. We designed formative assessment items and associated rubrics for four units currently used in the online classroom system: Musical Instruments (MI), Photosynthesis and Cellular Respiration (PS), Solar Ovens (SO), and Thermodynamics Challenge (TC).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring item and rubric design",
"sec_num": "3"
},
{
"text": "Musical Instruments and the Physics of Sound Waves (MI). The Musical Instruments unit engages students in testing and refining their ideas about the properties of sound waves (wavelength, frequency, amplitude, and pitch) and guides them in applying what they learn to design and build their own instrument, a water xylophone. The CR item we designed aligns with the NGSS PE MS-PS4-2 and assesses students' understanding of the relationship of pitch and frequency (DCI) and the characteristics of a sound wave when transmitted through different materials (CCC). Students are prompted to distinguish how the pitch of the sound made by tapping a full glass of water compares to the pitch made by tapping an empty glass. In their answer, they are asked to explain why they think the pitch of the sound waves generated by striking the two glasses will be the same or different.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring item and rubric design",
"sec_num": "3"
},
{
"text": "Photosynthesis and Cellular Respiration (PS). This unit engages students in exploring the processes of photosynthesis and cellular respiration by interacting with dynamic models at the molecular level. We designed a CR item that aligns with NGSS performance expectation MS-LS1-6 that asks students to express an integrated explanation of how photosynthesis supports the survival of both plants and animals. This item explicitly solicits students' ideas related to the CCC of matter cycling (i.e. change) and energy flow (i.e. movement): \"Write an energy story below to explain your ideas about how animals get and use energy from the sun to survive. Be sure to explain how energy and matter move AND how energy and matter change.\" Successful responses demonstrate proficiency in the SEP of constructing a scientific argument and reflect the synthesis of the DCIs and CCCs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring item and rubric design",
"sec_num": "3"
},
{
"text": "Solar Ovens (SO). The Solar Ovens unit asks students to collect evidence to agree or disagree with a claim made by a fictional peer about the functioning of a solar oven. Students work with an interactive model where they explore how different variables such as the size and capacity of a solar oven affect the transformation of energy from the sun. We designed a CR item that addresses NGSS PE MS-PS3-3 and assesses students for both the CCC of energy transfer and transformation and the SEP of analyzing and interpreting data. After working with the interactive model, students respond to the CR item with the prompt: \"Explain why David's claim is correct or incorrect using the evidence you collected from the model. Be sure to discuss how the movement of energy causes one solar oven to heat up faster than the other.\" Thermodynamics Challenge (TC). The Thermodynamics Challenge unit asks students to determine the best material for insulating a cold beverage using an online experimentation model. We designed a CR item that aligns with the NGSS PE MS-PS3-3 and assesses student performance proficiency with the targeted DCIs in the PE, understanding of the SEP of planning and carrying out an investigation, and the integration of both of these to construct a coherent and valid explanation. The CR item prompts students to explain the rationale behind their experiment plans with the model, using both key conceptual ideas as well as their understanding of experimentation as a scientific practice: \"Explain WHY the experiments you [plan to test] are the most important ones for giving you evidence to write your report. Be sure to use your knowledge of insulators, conductors, and heat energy transfer to discuss the tests you chose as well as the ones you didn't choose.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring item and rubric design",
"sec_num": "3"
},
{
"text": "We designed three scoring rubrics for each item corresponding to two \"subscores\" representing the degree to which the written responses expressed PE-specific ideas, concepts, and practices and one KI score that represents how the responses integrated these elements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring item and rubric design",
"sec_num": "3"
},
{
"text": "NGSS subscore rubrics. To evaluate the written responses for the presence of the DCIs, CCCs, and SEPs, we designed subscore rubrics for two of the three dimensions (Table 1) . Specifically, we synthesized the ideas, concepts, and practices described in the \"evidence statement\" documents of each targeted performance expectation to develop the evaluation criteria. We assigned each response a score on a scale of 1 to 3, corresponding to the absence, partial presence, or complete presence of the ideas, concepts, or practices.",
"cite_spans": [],
"ref_spans": [
{
"start": 164,
"end": 173,
"text": "(Table 1)",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Scoring item and rubric design",
"sec_num": "3"
},
{
"text": "KI score rubrics. The ideas targeted by the KI scoring rubrics aligned with subsets of the ideas described in the evidence statements. For example, the KI scoring rubrics for the Photosynthesis item evaluated written responses for the presence and linkage of five science ideas related to energy and matter transformation during photosynthesis. KI rubrics used a scale of 1 to 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring item and rubric design",
"sec_num": "3"
},
{
"text": "Participants were middle school students from 11 schools. Students engaged in the science units and contributed written responses to the CR items as part of pre-and post-tests. Across schools, 44% of students received free or reduced price lunch and 77% were non-white.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data collection",
"sec_num": "3.1"
},
{
"text": "All items were scored by two researchers using the item-specific subscore and KI rubrics described above. To ensure coding consistency, both researchers coded at least 10% of the items individually and resolved any disagreements through discussion. After the inter-rater reliability reached greater than 0.90, all of the remaining items were coded by one researcher (cf. the procedure in Liu et al. (2016)) 1 . Table 2 displays the dataset sizes and mean words per response for the KI scores and NGSS subscores, and Figure 1 depicts the respective score distributions. Among the holistic KI scores, the highest score of 5 had relatively fewer responses than other score levels. By examining the shape of the distributions of scores across the NGSS subscores, we can see that students' expression of different aspects of NGSS performance expectations differed across items. For the Musical Instruments and Photosynthesis items, students expressed the disciplinary core ideas less than the cross-cutting concepts. For both the Solar Ovens and Thermodynamics Challenge items, students often did not explicitly articulate science concepts. The Thermodynamics Challenge item was particularly challenging, as many students did not express the targeted science or experimentation concepts.",
"cite_spans": [],
"ref_spans": [
{
"start": 411,
"end": 418,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 516,
"end": 524,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Data collection",
"sec_num": "3.1"
},
{
"text": "Item PE DCI CCC SEP Musical Instruments MS-PS4-2 \u2022 \u2022 Photosynthesis MS-LS1-6 \u2022 \u2022 Solar Ovens MS-PS3-3 \u2022 \u2022 Thermodynamics Challenge MS-PS3-3 \u2022 \u2022",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data collection",
"sec_num": "3.1"
},
{
"text": "Content scoring models were built for each item and score type (knowledge integration and two NGSS dimensions). Models for each score type were trained independently on data for each item. In this way, the three models for an item formed different \"perspectives\" on the content of each response. Human-scored training data for the NGSS dimension models comprised either a subset of or overlapped with the training data for the KI models. The models were trained to predict an ordinal score from each response's text, without access to expert-authored model responses or data augmentation. This type of \"instance-based\" model (cf. Horbach and Zesch (2019) ) is effective when model responses are not available and can score responses of any length without additional modeling complexity. As we focus on content scoring, the models do not consider grammatical or usage errors that do not relate to the content of each response.",
"cite_spans": [
{
"start": 630,
"end": 654,
"text": "Horbach and Zesch (2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Content scoring models",
"sec_num": "3.2"
},
{
"text": "The feature-based model is a nonlinear support vector regression (SVR) model. The model is trained on a feature set of binarized word n-grams with n in {1, 2}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content scoring models",
"sec_num": "3.2"
},
{
"text": "The RNN model uses a simple architecture with pre-trained word embeddings and pooling of hidden states. Pre-trained word embeddings are processed by a bidirectional GRU encoder. The hidden states of the GRU are aggregated by a max pooling mechanism (Shen et al., 2018) . The output of the encoder is aggregated in a fully-connected feedforward layer with sigmoid activation that computes a scalar output for the predicted score. Despite its simplicity, this architecture has achieved state-of-the-art performance on benchmark content scoring datasets (Riordan et al., 2019) .",
"cite_spans": [
{
"start": 249,
"end": 268,
"text": "(Shen et al., 2018)",
"ref_id": "BIBREF20"
},
{
"start": 551,
"end": 573,
"text": "(Riordan et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Content scoring models",
"sec_num": "3.2"
},
{
"text": "For the pre-trained transformer model, we used a standard instance of the BERT model (Devlin et al., 2019) . BERT is a bidirectional transformer model trained on the tasks of masked token prediction and next sentence prediction across very large corpora (BooksCorpus and English Wikipedia). During training, a special token ' [CLS] ' is added to the beginning of each input sequence. To make predictions, the learned representation for this token is processed by an additional layer with nonlinear activation, outputting a score prediction. The model was 'fine-tuned' by training the additional layer's weights on each item's dataset.",
"cite_spans": [
{
"start": 85,
"end": 106,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 326,
"end": 331,
"text": "[CLS]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Content scoring models",
"sec_num": "3.2"
},
{
"text": "SVR model. The SVR models used an RBF kernel. Hyperparameters C and gamma were tuned on the validation sets and were optimized by root mean squared error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data preparation, model training, and hyperparameter optimization",
"sec_num": "3.3"
},
{
"text": "RNN model. Word tokens were embedded with GloVe 100 dimension vectors (Pennington et al., 2014) and fine-tuned during training. Word tokens that were not found in the embeddings vocabulary were mapped to a randomly initialized UNK embedding. On conversion to tensors, responses were padded to the same length in a batch; these padding tokens are masked out during model training. Prior to training, scores were scaled to [0, 1] to form the training targets for the networks. The scaled scores were converted back to their original range for evaluation.",
"cite_spans": [
{
"start": 70,
"end": 95,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data preparation, model training, and hyperparameter optimization",
"sec_num": "3.3"
},
{
"text": "The GRUs were 1 layer with a hidden state of size 250. The RNN models were trained with a mean squared error loss. For this investigation, the RNN was optimized with RMSProp with \u03c1 of 0.9, learning rate 0.001, batch size 32, and gradient clipping (10.0). We used an exponential moving average of the model's weights for training (decay rate = 0.999) (Adhikari et al., 2019) . In the tuning phase, models were trained for 50 epochs.",
"cite_spans": [
{
"start": 350,
"end": 373,
"text": "(Adhikari et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data preparation, model training, and hyperparameter optimization",
"sec_num": "3.3"
},
{
"text": "Pretrained transformer model. We used the bert-base-uncased pre-trained model (Wolf et al., 2019) and the Adam optimizer. On the Photosynthesis dataset, due to memory requirements, training required a batch size of 8; all other datasets were trained with a batch size of 16. The learning rate was tuned individually for each dataset with a grid of {2e-5, 3e-5, 5e-5}. Matching the RNN model, an exponential moving average over the model's weights was employed during training. Hyperparameters were tuned for 20 epochs.",
"cite_spans": [
{
"start": 78,
"end": 97,
"text": "(Wolf et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data preparation, model training, and hyperparameter optimization",
"sec_num": "3.3"
},
{
"text": "For all experiments, we trained models with 10-fold cross validation with train/validation/test splits, evaluating on pooled (concatenated) predictions across folds. We split the data into 80% train, 10% validation, and 10% test. For hyperparameter tuning, we trained on each train split and evaluated performance on the validation split, retaining the predictions from the best performance across epochs and the epoch on which that performance was observed. We pooled the predictions from all folds on the validation sets, evaluated performance, and selected the best-performing configuration of hyperparameters. For final model training, we trained models on combined train and validation splits, again with 10-fold cross-validation, to the median best epoch across folds from the hyperparameter tuning phase. Final performance was evaluated on the pooled predictions from the test splits. This training and evaluation procedure improves the stability of estimates of performance during both the tuning and final testing phases and makes use of more data for training and evaluating the final models, providing better estimates of model performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data preparation, model training, and hyperparameter optimization",
"sec_num": "3.3"
},
{
"text": "To evaluate the agreement of human scores and machine scores, we report Pearson's correlation, quadratic weighted kappa (QWK), and mean squared error (MSE). QWK is a measure of agreement that ranges between 0 and 1 and is motivated by accounting for chance agreement (Fleiss and Cohen, 1973) .",
"cite_spans": [
{
"start": 267,
"end": 290,
"text": "(Fleiss and Cohen, 1973",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "3.4"
},
{
"text": "The models for the KI scores showed mostly good agreement with human scores (Table 3) . QWK for the Musical Instruments, Photosynthesis, and Solar Ovens items was substantially higher than the standard 0.7 recommended for human-machine agreement in real-world automated scoring applications (Williamson et al., 2012) . For NGSS subscore models (Table 4) , those with more balanced score distributions (cf. Figure 1) showed good human-machine agreement, while the models trained on the most skewed data distributions showed lower levels of human-machine agreement. Specifically, Solar Ovens-Science and the Thermodynamics Challenge subscore models were trained on data where about 80% of responses had the lowest score. Each of these models' agreement with the human-scored data was relatively low and significantly below the 0.7 QWK threshold.",
"cite_spans": [
{
"start": 291,
"end": 316,
"text": "(Williamson et al., 2012)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 76,
"end": 85,
"text": "(Table 3)",
"ref_id": "TABREF4"
},
{
"start": 344,
"end": 353,
"text": "(Table 4)",
"ref_id": "TABREF6"
},
{
"start": 406,
"end": 415,
"text": "Figure 1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Human-machine agreement",
"sec_num": "4.1"
},
{
"text": "Across both KI score models and NGSS subscore models, the pre-trained transformer (PT) models showed higher human-machine agreement than both the SVR and RNN models in almost all cases. On the KI score datasets, the performance improvement from the PT models was relatively modest, except for the Photosynthesis dataset, where a larger improvement was observed. On the NGSS subscore datasets, the improvement from the PT models was often larger. This may be the result of stronger representations from the pre-trained models compensating for the smaller training dataset sizes. At the same time, RNN models also performed well on data-impoverished datasets such as Photosynthesis-CCC and Solar Ovens-Science.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human-machine agreement",
"sec_num": "4.1"
},
{
"text": "The cross-validation training and evaluation procedure employed here poses a challenge to statistically estimating the strengths of the differences between methods since the folds are not independent. Here we employ replicability analysis for multiple comparisons (Reichart et al., 2018; Dror et al., 2017) . We use bootstrap-based significance testing on each fold for the final model on each dataset and then perform K-Bonferroni replicability analysis. We define significance as rejecting the null hypothesis of no difference for at least half of the folds. The results of these hypothesis tests are shown in Tables 3 and 4 . For example, S indicates the model in that row (PT) performed significantly better than the SVR model (similarly for the RNN models). Although this hypothesis testing framework is conservative, the results support the conclusion that the pre-trained transformer models' performance was strong.",
"cite_spans": [
{
"start": 264,
"end": 287,
"text": "(Reichart et al., 2018;",
"ref_id": "BIBREF17"
},
{
"start": 288,
"end": 306,
"text": "Dror et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 612,
"end": 626,
"text": "Tables 3 and 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Human-machine agreement",
"sec_num": "4.1"
},
{
"text": "In this section, we explore the differences in the two neural models (RNN and PT) in more detail by looking at patterns of errors. We focus on instance-level saliency maps: gradient-based methods that identify the importance of tokens to the model by examining the gradient of the loss. For each dataset, we sample 100 responses and generate saliency maps for each. We use the simple gradient method (Simonyan et al., 2014) via AllenNLP (Wallace et al., 2019) . The item developers manually analyzed the generated saliency maps for each response and model.",
"cite_spans": [
{
"start": 400,
"end": 423,
"text": "(Simonyan et al., 2014)",
"ref_id": "BIBREF22"
},
{
"start": 437,
"end": 459,
"text": "(Wallace et al., 2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5"
},
{
"text": "We analyzed two sets of cases:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5"
},
{
"text": "1. One neural model accurately predicted the human score while the other did not. How do the error patterns in these cases illustrate how the models each learned differently from the training data?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5"
},
{
"text": "2. Both models incorrectly predicted the human score, and moreover predicted the same incorrect score. Do the models make the wrong prediction for the same or different reasons?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5"
},
{
"text": "In the following, due to space constraints, we focus on error analysis for the scoring model for the Musical Instruments knowledge integration dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5"
},
{
"text": "One correct, one incorrect. Cases where one model accurately predicted the human score while the other did not illuminated several differences in the two neural models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5"
},
{
"text": "The RNN model tended to ignore or deemphasize some keywords, while overemphasizing high frequency and function words. For example, Figure 2a shows a simple example where the RNN fails to emphasize the keyword pitch. The BERT model accurately registers this word as salient, and predicts the correct score. Similarly, in Figure 2b , the RNN misses the keyphrase full glass while the BERT model catches it. In Figure 2c , the RNN spuriously treats the function words when and you as salient and over-predicts the score.",
"cite_spans": [],
"ref_spans": [
{
"start": 131,
"end": 140,
"text": "Figure 2a",
"ref_id": null
},
{
"start": 320,
"end": 329,
"text": "Figure 2b",
"ref_id": null
},
{
"start": 408,
"end": 417,
"text": "Figure 2c",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5"
},
{
"text": "For its part, the BERT model may de-emphasize many high frequency words but at the same time may regard discourse markers as salient. An example is in Figure 3a , where the BERT model emphasizes because since, and this may in part help the model reach the correct prediction. If the BERT model is able to better learn important keywords (while ignoring more function words), it may sometimes \"overlearn\" the importance of those tokens, leading to over-prediction of scores. There are several examples where the model uses the word piece ##brate to overpredict a score (Figure 3b ).",
"cite_spans": [],
"ref_spans": [
{
"start": 151,
"end": 160,
"text": "Figure 3a",
"ref_id": "FIGREF1"
},
{
"start": 568,
"end": 578,
"text": "(Figure 3b",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5"
},
{
"text": "Both incorrect with the same prediction. In many cases, the models made the same incorrect predictions for different reasons. An example is Figure 3c , where the RNN emphasizes deeper and dense while the BERT model focuses on because and cup. Overall, the same differences in the models identified above held for these cases of making the same incorrect prediction.",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 150,
"text": "Figure 3c",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5"
},
{
"text": "In general, although there was some variability across models, both models correctly identified the keywords necessary for scoring responses correctly, leading to good human-machine agreement. The RNN model may be more sensitive to tokens that are good indicators of the score in the training data (either high or low) but not in language in general, such as high frequency and function words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5"
},
{
"text": "The tap of a full glass is more low pitched and an empty glass is more high pitched because there is no bellow 207529 score=3 prediction=3 [CLS] the tap of a full glass is more low pitched and an empty glass is more high pitched because there is no bell ##ow [SEP] (b)",
"cite_spans": [
{
"start": 259,
"end": 264,
"text": "[SEP]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "score=3 prediction=2",
"sec_num": "207529"
},
{
"text": "When you tap on a full glass the pitch stays the same as if you were tapping on an empty glass because you are still tapping on a glass that is going to make a high pitched sound no matter if it is full or not .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "score=2 prediction=3",
"sec_num": "147925"
},
{
"text": "[CLS] when you tap on a full glass the pitch stays the same as if you were tapping on an empty glass because you are still tapping on a glass that is going to make a high pitched sound no matter if it is full or not . [SEP] (c) Figure 2 : Error analysis: RNN model trends. In each example, the RNN model's saliency map appears on top.",
"cite_spans": [
{
"start": 218,
"end": 223,
"text": "[SEP]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 228,
"end": 236,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "score=2 prediction=2",
"sec_num": "147925"
},
{
"text": "The pitch of the tapped full glass is lower than the pitch of the tapped empty glass because since there is water inside you are not going to be able to hear it as much .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "score=3 prediction=2",
"sec_num": "237142"
},
{
"text": "[CLS] the pitch of the tapped full glass is lower than the pitch of the tapped empty glass because since there is water inside you are not going to be able to hear it as much . [SEP] (a)",
"cite_spans": [
{
"start": 177,
"end": 182,
"text": "[SEP]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "score=3 prediction=3",
"sec_num": "237142"
},
{
"text": "The one taht is full will vibrate less so it will be higher than the one that is empty .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "score=3 prediction=3",
"sec_num": "148661"
},
{
"text": "[CLS] the one ta ##ht is full will vi ##brate less so it will be higher than the one that is empty . [SEP] (b) 176754 score=4 prediction=3 the cup with water has a deeper sound because its changing through the dense water but the cup with no water stays the same because the sound wave does n't have to go through anything or change anything .",
"cite_spans": [
{
"start": 101,
"end": 106,
"text": "[SEP]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "score=3 prediction=4",
"sec_num": "148661"
},
{
"text": "[CLS] the cup with water has a deeper sound because its changing through the dense water but the cup with no water stays the same because the sound wave doesn ' t have to go through anything or change anything . [SEP] (c) BERT's pre-training regime may equip it to reduce any reliance on such tokens.",
"cite_spans": [
{
"start": 212,
"end": 217,
"text": "[SEP]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "score=4 prediction=3",
"sec_num": "176754"
},
{
"text": "Notably, however, while the models usually made good use of keyword evidence to arrive at correct scores, when the models made inaccurate predictions, it was often because the response had the right vocabulary but the wrong science. For example, in the Musical Instruments item, a response might contain pitch, lower, density, and vibrations, but the response might attribute the lower pitch to the empty glass. At least two issues were observed in cases of model mis-prediction: (1) students used anaphoric it to refer to key concepts (e.g., full glass or empty glass), but the models do not incorporate anaphora resolution capabilities; (2) models fail to associate the right keywords with the right concepts, in the way that human raters did.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "score=4 prediction=3",
"sec_num": "176754"
},
{
"text": "The task of automated content scoring has recently gained more attention (Kumar et al., 2017; Riordan et al., 2017; Burrows et al., 2015; Shermis, 2015) . Our work is similar to Mizumoto et al. (2019) , who developed a multi-task neural model for assigning an overall holistic score as well as content-based analytic subscores. We leave a multi-task formulation of our application setting for future work. Sung et al. (2019) demonstrated state-of-the-art performance for similarity-based content scoring on the SemEval benchmark dataset (Dzikovska et al., 2016) . In this work, we use pre-trained transformer models for instance-based content scoring (cf. Horbach and Zesch (2019) ). That is, we use whole responses as training data and fine-tune pretrained representations for response tokens on the content score prediction task.",
"cite_spans": [
{
"start": 73,
"end": 93,
"text": "(Kumar et al., 2017;",
"ref_id": "BIBREF10"
},
{
"start": 94,
"end": 115,
"text": "Riordan et al., 2017;",
"ref_id": "BIBREF19"
},
{
"start": 116,
"end": 137,
"text": "Burrows et al., 2015;",
"ref_id": "BIBREF2"
},
{
"start": 138,
"end": 152,
"text": "Shermis, 2015)",
"ref_id": "BIBREF21"
},
{
"start": 178,
"end": 200,
"text": "Mizumoto et al. (2019)",
"ref_id": "BIBREF14"
},
{
"start": 406,
"end": 424,
"text": "Sung et al. (2019)",
"ref_id": "BIBREF23"
},
{
"start": 537,
"end": 561,
"text": "(Dzikovska et al., 2016)",
"ref_id": "BIBREF5"
},
{
"start": 656,
"end": 680,
"text": "Horbach and Zesch (2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "6"
},
{
"text": "Recently, methods have been introduced to incorporate \"saliency\" directly into the model training process (Ghaeini et al., 2019) . The current work focuses on interpreting the predictions of models trained without additional annotations (for an overview of interpretability in NLP, see Belinkov and Glass (2019) . Exploring the contribution of augmented datasets and training algorithms is future work. To our knowledge, our work is the first to to explore the relevance of the saliency in the predictions of neural methods for the content scoring task.",
"cite_spans": [
{
"start": 106,
"end": 128,
"text": "(Ghaeini et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 286,
"end": 311,
"text": "Belinkov and Glass (2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "6"
},
{
"text": "We described a set of constructed response items for middle-school science curricula that simultaneously assess students on expression of NGSS Disciplineary Core Ideas (DCIs), Cross-Cutting Concepts (CCCs), and Science and Engineering Practices (SEPs), and the integrative linkages between each, as part of engaging in scientific explanations and argumentation. We demonstrated that human and automated scoring of such CRs for the NGSS dimensions (via independent subscores) and the integration of knowledge (via Knowledge Integration scores) is feasible. We demonstrated that automated scoring can be developed with promising accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Comparing feature-based, RNN, and pre-trained transformer models on these datasets, we observed that the pre-trained transformer models obtained higher rates of human-machine agreement on most holistic KI score and NGSS subscore datasets. While the RNN models were often competitive with the pre-trained transformer models, an analysis of the different kinds of errors made by each model type indicated that the pre-trained transformer models may be more robust to strong dataset-specific, but spurious, cues to score prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Results showed that, in the formative setting targeted by the online science learning environment used in this study, students often scored at the lowest levels of all three rubrics, which increased skewness in the datasets and likely contributed to reduced model accuracy. Future research will explore more robust methods for learning scoring models from less data in formative settings, especially from highly skewed score distributions, while continuing to provide accurate scoring.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Our findings demonstrate the ability to both develop and automatically score NGSS-aligned CR assessment items. With further refinement, we can provide teachers with both the instructional and technological assistance they need to effectively and efficiently support their students to demonstrate the multidimensional science learning called for by the NGSS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Datasets are not publicly available because of the IRBapproved consent procedure for participants (minors) in this research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Aoife Cahill for many useful discussions and three anonymous reviewers, Beata Beigman Klebanov, Debanjan Ghosh, and Nitin Madnani for helpful comments. This material is based upon work supported by the National Science Foundation under Grant No. 1812660. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Rethinking Complex Neural Network Architectures for Document Classification",
"authors": [
{
"first": "Ashutosh",
"middle": [],
"last": "Adhikari",
"suffix": ""
},
{
"first": "Achyudh",
"middle": [],
"last": "Ram",
"suffix": ""
},
{
"first": "Raphael",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashutosh Adhikari, Achyudh Ram, Raphael Tang, and Jimmy Lin. 2019. Rethinking Complex Neural Net- work Architectures for Document Classification. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies (NAACL-HLT).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Analysis Methods in Neural Language Processing: A Survey. Transactions of the Association for Computational Linguistics",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov and James Glass. 2019. Analysis Methods in Neural Language Processing: A Survey. Transactions of the Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The Eras and Trends of Automatic Short Answer Grading",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Burrows",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2015,
"venue": "International Journal of Artificial Intelligence in Education",
"volume": "25",
"issue": "1",
"pages": "60--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Burrows, Iryna Gurevych, and Benno Stein. 2015. The Eras and Trends of Automatic Short An- swer Grading. International Journal of Artificial In- telligence in Education, 25(1):60-117.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Replicability Analysis for Natural Language Processing: Testing Significance with Multiple Datasets",
"authors": [
{
"first": "Rotem",
"middle": [],
"last": "Dror",
"suffix": ""
},
{
"first": "Gili",
"middle": [],
"last": "Baumer",
"suffix": ""
},
{
"first": "Marina",
"middle": [],
"last": "Bogomolov",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "471--486",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rotem Dror, Gili Baumer, Marina Bogomolov, and Roi Reichart. 2017. Replicability Analysis for Natu- ral Language Processing: Testing Significance with Multiple Datasets. Transactions of the Association for Computational Linguistics, 5:471-486.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The joint student response analysis and recognizing textual entailment challenge: making sense of student responses in educational applications. Language Resources and Evaluation",
"authors": [
{
"first": "Myroslava",
"middle": [
"O"
],
"last": "Dzikovska",
"suffix": ""
},
{
"first": "Rodney",
"middle": [
"D"
],
"last": "Nielsen",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Leacock",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "50",
"issue": "",
"pages": "67--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myroslava O. Dzikovska, Rodney D. Nielsen, and Claudia Leacock. 2016. The joint student response analysis and recognizing textual entailment chal- lenge: making sense of student responses in educa- tional applications. Language Resources and Evalu- ation, 50(1):67-93.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational and psychological measurement",
"authors": [
{
"first": "Joseph",
"middle": [
"L"
],
"last": "Fleiss",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1973,
"venue": "",
"volume": "33",
"issue": "",
"pages": "613--619",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph L. Fleiss and Jacob Cohen. 1973. The equiv- alence of weighted kappa and the intraclass corre- lation coefficient as measures of reliability. Educa- tional and psychological measurement, 33(3):613- 619.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Using Automated Scores of Student Essays to Support Teacher Guidance in Classroom Inquiry",
"authors": [
{
"first": "Libby",
"middle": [
"F"
],
"last": "Gerard",
"suffix": ""
},
{
"first": "Marcia",
"middle": [
"C"
],
"last": "Linn",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Science Teacher Education",
"volume": "27",
"issue": "1",
"pages": "111--129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Libby F. Gerard and Marcia C. Linn. 2016. Using Automated Scores of Student Essays to Support Teacher Guidance in Classroom Inquiry. Journal of Science Teacher Education, 27(1):111-129.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Saliency Learning: Teaching the Model Where to Pay Attention",
"authors": [
{
"first": "Reza",
"middle": [],
"last": "Ghaeini",
"suffix": ""
},
{
"first": "Xiaoli",
"middle": [
"Z"
],
"last": "Fern",
"suffix": ""
},
{
"first": "Hamed",
"middle": [],
"last": "Shahbazi",
"suffix": ""
},
{
"first": "Prasad",
"middle": [],
"last": "Tadepalli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reza Ghaeini, Xiaoli Z. Fern, Hamed Shahbazi, and Prasad Tadepalli. 2019. Saliency Learning: Teach- ing the Model Where to Pay Attention. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies (NAACL- HLT).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The influence of variance in learner answers on automatic content scoring",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Horbach",
"suffix": ""
},
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": ""
}
],
"year": 2019,
"venue": "Frontiers in Education",
"volume": "4",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Horbach and Torsten Zesch. 2019. The influ- ence of variance in learner answers on automatic content scoring. Frontiers in Education, 4:28.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Earth Mover's Distance Pooling over Siamese LSTMs for Automatic Short Answer Grading",
"authors": [
{
"first": "Sachin",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Soumen",
"middle": [],
"last": "Chakrabarti",
"suffix": ""
},
{
"first": "Shourya",
"middle": [],
"last": "Roy",
"suffix": ""
}
],
"year": 2017,
"venue": "International Joint Conference on Artificial Intelligence (IJCAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sachin Kumar, Soumen Chakrabarti, and Shourya Roy. 2017. Earth Mover's Distance Pooling over Siamese LSTMs for Automatic Short Answer Grading. In International Joint Conference on Artificial Intelli- gence (IJCAI).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Science Learning and Instruction: Taking Advantage of Technology to Promote Knowledge Integration. Routledge",
"authors": [
{
"first": "Marcia",
"middle": [
"C"
],
"last": "Linn",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bat-Sheva Eylon",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcia C. Linn and Bat-Sheva Eylon. 2011. Sci- ence Learning and Instruction: Taking Advantage of Technology to Promote Knowledge Integration. Routledge, New York.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Computer-guided inquiry to improve science learning",
"authors": [
{
"first": "Marcia",
"middle": [
"C"
],
"last": "Linn",
"suffix": ""
},
{
"first": "Libby",
"middle": [],
"last": "Gerard",
"suffix": ""
},
{
"first": "Kihyun",
"middle": [],
"last": "Ryoo",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Mcelhaney",
"suffix": ""
},
{
"first": "Ou Lydia",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Anna",
"middle": [
"N"
],
"last": "Rafferty",
"suffix": ""
}
],
"year": 2014,
"venue": "Science",
"volume": "344",
"issue": "6180",
"pages": "155--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcia C. Linn, Libby Gerard, Kihyun Ryoo, Kevin McElhaney, Ou Lydia Liu, and Anna N Rafferty. 2014. Computer-guided inquiry to improve science learning. Science, 344(6180):155-156.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Validation of Automated Scoring of Science Assessments",
"authors": [
{
"first": "Ou Lydia",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Joseph",
"middle": [
"A"
],
"last": "Rios",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Libby",
"middle": [],
"last": "Gerard",
"suffix": ""
},
{
"first": "Marcia",
"middle": [
"C"
],
"last": "Linn",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Research in Science Teaching",
"volume": "53",
"issue": "2",
"pages": "215--233",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ou Lydia Liu, Joseph A. Rios, Michael Heilman, Libby Gerard, and Marcia C. Linn. 2016. Validation of Au- tomated Scoring of Science Assessments. Journal of Research in Science Teaching, 53(2):215-233.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Analytic Score Prediction and Justification Identification in Automated Short Answer Scoring",
"authors": [
{
"first": "Tomoya",
"middle": [],
"last": "Mizumoto",
"suffix": ""
},
{
"first": "Hiroki",
"middle": [],
"last": "Ouchi",
"suffix": ""
},
{
"first": "Yoriko",
"middle": [],
"last": "Isobe",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Reisert",
"suffix": ""
},
{
"first": "Ryo",
"middle": [],
"last": "Nagata",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
}
],
"year": 2019,
"venue": "14th Workshop on Innovative Use of NLP for Building Educational Applications (BEA@ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomoya Mizumoto, Hiroki Ouchi, Yoriko Isobe, Paul Reisert, Ryo Nagata, Satoshi Sekine, and Kentaro Inui. 2019. Analytic Score Prediction and Justifica- tion Identification in Automated Short Answer Scor- ing. In 14th Workshop on Innovative Use of NLP for Building Educational Applications (BEA@ACL).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Next Generation Science Standards: For States, By States",
"authors": [],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "NGSS Lead States. 2013. Next Generation Science Standards: For States, By States. The National Academies Press, Washington, D.C.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Glove: Global Vectors for Word Representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global Vectors for Word Representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The Hitchhiker's Guide to Testing Statistical Significance in Natural Language Processing",
"authors": [
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Rotem",
"middle": [],
"last": "Dror",
"suffix": ""
},
{
"first": "Gili",
"middle": [],
"last": "Baumer",
"suffix": ""
},
{
"first": "Segev",
"middle": [],
"last": "Shlomov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roi Reichart, Rotem Dror, Gili Baumer, and Segev Shlomov. 2018. The Hitchhiker's Guide to Test- ing Statistical Significance in Natural Language Pro- cessing. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "How to account for mispellings: Quantifying the benefit of character representations in neural content scoring models",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Riordan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Flor",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Pugh",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 14th Workshop on Innovative Use of NLP for Building Educational Applications (BEA@ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian Riordan, Michael Flor, and Robert Pugh. 2019. How to account for mispellings: Quantifying the benefit of character representations in neural content scoring models. In Proceedings of the 14th Work- shop on Innovative Use of NLP for Building Educa- tional Applications (BEA@ACL).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Investigating neural architectures for short answer scoring",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Riordan",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Horbach",
"suffix": ""
},
{
"first": "Aoife",
"middle": [],
"last": "Cahill",
"suffix": ""
},
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": ""
},
{
"first": "Chong",
"middle": [
"Min"
],
"last": "Lee",
"suffix": ""
}
],
"year": 2017,
"venue": "12th Workshop on Innovative Use of NLP for Building Educational Applications (BEA@EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian Riordan, Andrea Horbach, Aoife Cahill, Torsten Zesch, and Chong Min Lee. 2017. Investigating neu- ral architectures for short answer scoring. In 12th Workshop on Innovative Use of NLP for Building Ed- ucational Applications (BEA@EMNLP).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms",
"authors": [
{
"first": "Dinghan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Guoyin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wenlin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Renqiang Min",
"suffix": ""
},
{
"first": "Qinliang",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chunyuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Henao",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Carin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dinghan Shen, Guoyin Wang, Wenlin Wang, Mar- tin Renqiang Min, Qinliang Su, Yizhe Zhang, Chun- yuan Li, Ricardo Henao, and Lawrence Carin. 2018. Baseline Needs More Love: On Simple Word- Embedding-Based Models and Associated Pooling Mechanisms. In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics (ACL).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Contrasting state-of-the-art in the machine scoring of short-form constructed responses",
"authors": [
{
"first": "Mark",
"middle": [
"D"
],
"last": "Shermis",
"suffix": ""
}
],
"year": 2015,
"venue": "Educational Assessment",
"volume": "",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark D Shermis. 2015. Contrasting state-of-the-art in the machine scoring of short-form constructed re- sponses. Educational Assessment, 20(1).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Vedaldi",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2014,
"venue": "International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen Simonyan, Andrea Vedaldi, and Andrew Zis- serman. 2014. Deep Inside Convolutional Net- works: Visualising Image Classification Models and Saliency Maps. In International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Improving Short Answer Grading Using Transformer-Based Pre-training",
"authors": [
{
"first": "Chul",
"middle": [],
"last": "Sung",
"suffix": ""
},
{
"first": "Tejas",
"middle": [
"I"
],
"last": "Dhamecha",
"suffix": ""
},
{
"first": "Nirmal",
"middle": [],
"last": "Mukhi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 20th International Conference on Artificial Intelligence in Education (AIED)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chul Sung, Tejas I. Dhamecha, and Nirmal Mukhi. 2019. Improving Short Answer Grading Using Transformer-Based Pre-training. In Proceedings of the 20th International Conference on Artificial Intel- ligence in Education (AIED).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Wallace",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Tuyls",
"suffix": ""
},
{
"first": "Junlin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Sanjay",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subra- manian, Matthew Gardner, and Sameer Singh. 2019. AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A framework for evaluation and use of automated scoring. Educational measurement: issues and practice",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Williamson",
"suffix": ""
},
{
"first": "Xiaoming",
"middle": [],
"last": "Xi",
"suffix": ""
},
{
"first": "F",
"middle": [
"Jay"
],
"last": "Breyer",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "31",
"issue": "",
"pages": "2--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Williamson, Xiaoming Xi, and F. Jay Breyer. 2012. A framework for evaluation and use of au- tomated scoring. Educational measurement: issues and practice, 31(1):2-13.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the-art Natural Language Processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "Rémi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. HuggingFace's Trans- formers: State-of-the-art Natural Language Process- ing. ArXiv, abs/1910.03771.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Score distributions for (a) knowledge integration scores and (b) NGSS subscores.",
"uris": null,
"num": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Error analysis: Pre-trained transformer model trends. In each example, the pre-trained transformer model's saliency map appears on the bottom.",
"uris": null,
"num": null
},
"TABREF0": {
"text": "NGSS performance expectations (PE) and targeted components: disciplinary core idea (DCI), crosscutting concept (CCC), and science and engineering practices (SEP) targeted by each item.",
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td/><td/><td>Mean</td></tr><tr><td>Item</td><td>Type</td><td>Responses</td><td>words per</td></tr><tr><td/><td/><td/><td>response</td></tr><tr><td>MI</td><td>KI</td><td>1306</td><td>25.40</td></tr><tr><td>PS</td><td>KI</td><td>1411</td><td>54.57</td></tr><tr><td>SO</td><td>KI</td><td>1740</td><td>31.87</td></tr><tr><td>TC</td><td>KI</td><td>994</td><td>31.73</td></tr><tr><td>MI</td><td>CCC DCI</td><td>1306</td><td>25.40</td></tr><tr><td>PS</td><td>CCC DCI</td><td>553</td><td>70.40</td></tr><tr><td>SO</td><td>SEP: eng CCC: sci</td><td>605</td><td>32.62</td></tr><tr><td>TC</td><td>SEP: exp DCI: sci</td><td>583</td><td>31.43</td></tr></table>",
"num": null
},
"TABREF1": {
"text": "Descriptive statistics for each item's dataset.",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF4": {
"text": "",
"type_str": "table",
"html": null,
"content": "<table><tr><td>: Human-machine agreement for Knowledge</td></tr><tr><td>Integration (KI) score models. QWK = quadratic-</td></tr><tr><td>weighed kappa, MSE = mean squared error. SVR =</td></tr><tr><td>support vector regression, RNN = recurrent neural net-</td></tr><tr><td>work, PT = pre-trained Transformer. Sig. = signifi-</td></tr><tr><td>cance by bootstrap replicability analysis; see main text</td></tr><tr><td>for details.</td></tr></table>",
"num": null
},
"TABREF6": {
"text": "",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
}
}
}
}