{
"paper_id": "W18-0501",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:23:28.129275Z"
},
"title": "Using exemplar responses for training and evaluating automated speech scoring systems",
"authors": [
{
"first": "Anastassia",
"middle": [],
"last": "Loukina",
"suffix": "",
"affiliation": {},
"email": "aloukina@ets.org"
},
{
"first": "Klaus",
"middle": [],
"last": "Zechner",
"suffix": "",
"affiliation": {},
"email": "kzechner@ets.org"
},
{
"first": "James",
"middle": [],
"last": "Bruno",
"suffix": "",
"affiliation": {},
"email": "jbruno@ets.org"
},
{
"first": "Beata",
"middle": [
"Beigman"
],
"last": "Klebanov",
"suffix": "",
"affiliation": {},
"email": "bbeigmanklebanov@ets.org"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automated scoring engines are usually trained and evaluated against human scores and compared to the benchmark of human-human agreement. In this paper we compare the performance of an automated speech scoring engine using two corpora: a corpus of almost 700,000 randomly sampled spoken responses with scores assigned by one or two raters during operational scoring, and a corpus of 16,500 exemplar responses with scores reviewed by multiple expert raters. We show that the choice of corpus used for model evaluation has a major effect on estimates of system performance with r varying between 0.64 and 0.80. Surprisingly, this is not the case for the choice of corpus for model training: when the training corpus is sufficiently large, the systems trained on different corpora showed almost identical performance when evaluated on the same corpus. We show that this effect is consistent across several learning algorithms. We conclude that evaluating the model on a corpus of exemplar responses if one is available provides additional evidence about system validity; at the same time, investing effort into creating a corpus of exemplar responses for model training is unlikely to lead to a substantial gain in model performance.",
"pdf_parse": {
"paper_id": "W18-0501",
"_pdf_hash": "",
"abstract": [
{
"text": "Automated scoring engines are usually trained and evaluated against human scores and compared to the benchmark of human-human agreement. In this paper we compare the performance of an automated speech scoring engine using two corpora: a corpus of almost 700,000 randomly sampled spoken responses with scores assigned by one or two raters during operational scoring, and a corpus of 16,500 exemplar responses with scores reviewed by multiple expert raters. We show that the choice of corpus used for model evaluation has a major effect on estimates of system performance with r varying between 0.64 and 0.80. Surprisingly, this is not the case for the choice of corpus for model training: when the training corpus is sufficiently large, the systems trained on different corpora showed almost identical performance when evaluated on the same corpus. We show that this effect is consistent across several learning algorithms. We conclude that evaluating the model on a corpus of exemplar responses if one is available provides additional evidence about system validity; at the same time, investing effort into creating a corpus of exemplar responses for model training is unlikely to lead to a substantial gain in model performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Systems that automatically score constructed responses in an assessment -such as essays or spoken responses -are typically trained and evaluated on a corpus of such test taker responses with scores assigned by trained human raters, considered to be the \"gold standard\" for both training and evaluation of the automated scoring system (Page, 1966; Attali and Burstein, 2006; Bernstein et al., 2010; Williamson et al., 2012) . Human raters follow certain agreed-upon scoring guidelines (\"rubrics\") that define the characteristics of a response for each discrete score level of the scoring scale. For instance, in the case of speech scoring, human raters may evaluate certain aspects of a test taker's speech production, such as fluency, pronunciation, prosody, vocabulary diversity, grammatical accuracy, content correctness, or discourse organization when determining their score for a given spoken response (Zechner et al., 2009) .",
"cite_spans": [
{
"start": 334,
"end": 346,
"text": "(Page, 1966;",
"ref_id": "BIBREF19"
},
{
"start": 347,
"end": 373,
"text": "Attali and Burstein, 2006;",
"ref_id": "BIBREF0"
},
{
"start": 374,
"end": 397,
"text": "Bernstein et al., 2010;",
"ref_id": "BIBREF5"
},
{
"start": 398,
"end": 422,
"text": "Williamson et al., 2012)",
"ref_id": "BIBREF31"
},
{
"start": 907,
"end": 929,
"text": "(Zechner et al., 2009)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Even as assessment companies try their best to ensure high quality of human scores, human raters do not always agree in the scores they assign to a constructed response. One reason is related to properties of the responses themselves: the raters use a unidimensional (holistic) scale to score a multidimensional performance. In this situation different raters may differently weight various aspects of performance (Eckes, 2008) resulting in disagreement. The second reason is related to various imperfections of human raters, e.g., rater fatigue (Ling et al., 2014) , differences between novice and experienced raters (Davis, 2016) , and the effect of raters' linguistic background on their evaluation of the language skill being measured (Carey et al., 2011) .",
"cite_spans": [
{
"start": 414,
"end": 427,
"text": "(Eckes, 2008)",
"ref_id": "BIBREF9"
},
{
"start": 546,
"end": 565,
"text": "(Ling et al., 2014)",
"ref_id": "BIBREF14"
},
{
"start": 618,
"end": 631,
"text": "(Davis, 2016)",
"ref_id": "BIBREF8"
},
{
"start": 739,
"end": 759,
"text": "(Carey et al., 2011)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To guard against such rater inconsistencies, in addition to extensive rater training and monitoring, responses for high-stakes tests are often scored by multiple raters and scores from responses to multiple test questions are used to compute the final score reported to the test taker and other stakeholders, with different responses scored by different raters (Wang and von Davier, 2014; Penfield, 2016) . As a result, the final score remains highly reliable despite variation in human agreement at the level of the individual question. However, since automated scoring engines are usually trained using response-level scores, any inconsistencies in such scores due to the variety of reasons outlined above may negatively affect the system's performance.",
"cite_spans": [
{
"start": 361,
"end": 388,
"text": "(Wang and von Davier, 2014;",
"ref_id": "BIBREF30"
},
{
"start": 389,
"end": 404,
"text": "Penfield, 2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To monitor rater performance, testing programs sometimes use previously scored responses that are intermixed with the operational responses. These responses are selected from operational responses to represent exemplar cases of each score level and the scores are further reviewed by multiple raters to ensure their accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we are examining the effect of using such \"exemplar\" responses for scoring model training and evaluation in the context of automated speech scoring. In particular, we aim to address the following research questions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. How do automated speech scoring models perform when trained on a corpus with randomly selected responses vs. a corpus with exemplar responses?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. How is performance affected by the choice of evaluation corpus (random response selection vs. exemplar responses)?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our initial hypothesis about research question (1) is that if the size and score distribution for the training corpora are comparable, we would expect to see the scoring model perform better when trained on the exemplar responses since the model is trained on clear-cut examples (less noise in the data). Similarly, as for research question (2), we hypothesize that when evaluating on clear-cut exemplar responses, scoring model performance should be better than in the default case (random selection) since the machine would likely benefit from the same response properties that also result in more consistent and reliable human scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Constructing large corpora of exemplar responses is a very resource-intensive task and therefore little is known about the possible impact of the use of such corpora for training and evaluation of automated scoring models. Our paper uses a very large corpus of spoken responses and an exemplar corpus constructed by experts over the course of multiple years to address this gap and improve our understanding of the effect of training data on the performance of automated scoring models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous studies considered the effect of annotation noise on the performance of various NLP systems (Schwartz et al., 2011; Reidsma and Carletta, 2008; Mart\u00ednez Alonso et al., 2015; Plank et al., 2014) .",
"cite_spans": [
{
"start": 101,
"end": 124,
"text": "(Schwartz et al., 2011;",
"ref_id": "BIBREF25"
},
{
"start": 125,
"end": 152,
"text": "Reidsma and Carletta, 2008;",
"ref_id": "BIBREF24"
},
{
"start": 153,
"end": 182,
"text": "Mart\u00ednez Alonso et al., 2015;",
"ref_id": "BIBREF18"
},
{
"start": 183,
"end": 202,
"text": "Plank et al., 2014)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "In a series of papers, Beigman Klebanov and Beigman (2014) studied annotation noise in linguistic data, namely, a situation where some of the data is easy to judge, with clear-cut annotation/classification, whereas some of the data is harder to judge, yielding disagreements among raters.",
"cite_spans": [
{
"start": 31,
"end": 58,
"text": "Klebanov and Beigman (2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "They show that in a binary classification task, the presence of annotation noise (hard to judge cases) in the evaluation data could skew benchmarking, especially in cases of small discrepancies between competing models. They also show that the presence of hard cases in the training data could compromise system performance on easy-to-judge test cases, a phenomenon they termed hard case bias. Using data annotated through crowd-sourcing and across five linguistic tasks, Jamison and Gurevych (2015) extended that work and showed that filtering out low-agreement cases improved performance on test data for some of the tasks without having a substantial detrimental effect on the rest of the cases. They also showed that the filtering of low-agreement instances from the training data ceased being effective if the agreement threshold was set too high, which resulted in too little training data.",
"cite_spans": [
{
"start": 471,
"end": 498,
"text": "Jamison and Gurevych (2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "In the context of automated scoring, the size of the training set has been shown to have a consistent effect on model performance (Chen, 2012; Heilman and Madnani, 2015; Zesch et al., 2015) . At the same time, a number of studies also considered the possibility of training automated systems on a smaller but well-chosen subset of examples. Horbach et al. (2014) simulated a grading approach where responses are clustered automatically, teachers labeled only one item per cluster, and that label was then propagated to the other items in the cluster. They reported a 90% grading accuracy of their system. Zesch et al. (2015) further applied this approach to selecting responses for training automated scoring models for short answer scoring. They used k-means clustering to identify similar responses and trained their classifier on responses closest to the centroid of each cluster. Note that in their study k corresponded to the number of responses to be annotated, not the score levels. They found that the system trained on such responses did not outperform the system trained on the same number of randomly sampled responses. They also found no improvement when the score was propagated to all responses in the cluster and the resulting scores were used to train the model. However, the performance increased when the training data was limited to 'pure' clusters only, that is, clusters that contained responses assigned the same score. This system, trained on a subset of responses selected in this fashion, substantially outperformed the system trained on the same number of randomly sampled responses, and in the case of short responses, performed as well as the system trained on the whole training set.",
"cite_spans": [
{
"start": 130,
"end": 142,
"text": "(Chen, 2012;",
"ref_id": "BIBREF7"
},
{
"start": 143,
"end": 169,
"text": "Heilman and Madnani, 2015;",
"ref_id": "BIBREF10"
},
{
"start": 170,
"end": 189,
"text": "Zesch et al., 2015)",
"ref_id": "BIBREF33"
},
{
"start": 605,
"end": 624,
"text": "Zesch et al. (2015)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "To summarize, previous studies indicate that training NLP systems, including automated scoring engines, on a selected subset of responses that are either more typical in terms of feature values or easy-to-judge for human annotators may lead to an increase in system performance despite a reduction in the size of the training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "While previous studies on automated scoring used automated clustering to identify the exemplars, we further extend this work by using a large corpus of exemplar responses identified by experts in assessment to train and evaluate an automated speech scoring engine. We compare the performance of the models to those trained on a large corpus of randomly sampled responses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Both corpora use real responses submitted to a large-scale assessment of English language proficiency. The test takers whose responses were used in this study gave their consent for use of their responses for research purposes during the original test administration. The responses in both corpora were anonymized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of the data",
"sec_num": "3"
},
{
"text": "The main corpus in this study contains responses sampled randomly from spoken responses submitted to the same assessment over the course of several years. We selected responses to 6 different types of questions. Each question was designed to elicit spontaneous speech. For some questions test-takers were expected to use the provided materials (e.g., a reading passage) as the basis for their response, while other questions were more general, such as \"What is your favorite food and why?\". Depending on the question type, the speakers were given 45 seconds or 1 minute to complete their response. In total, the corpus contains 683,694 spoken responses, 113,949 responses for each question type. For this study, the responses for each question were partitioned randomly into a training (2/3) and evaluation set (1/3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAIN corpus",
"sec_num": "3.1"
},
{
"text": "All responses in the corpus were scored on a scale of 1-4 by human raters. The raters assigned a single holistic score to each response using a scoring rubric that covered three aspects of language proficiency: delivery (pronunciation, fluency), language use (vocabulary, grammar), and content and topical development. Most responses were scored by a single rater, with 8.5% randomly selected responses independently scored by two raters. The average correlation between two human raters for double-scored responses was Pearson's r = 0.59.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MAIN corpus",
"sec_num": "3.1"
},
{
"text": "The second corpus used in this study contained responses from the same assessment selected for training and monitoring human raters. These responses are expected to be typical examples of the different score levels. They are usually selected from double-scored responses that were assigned the same scores by both raters and then reviewed by multiple experts in human scoring to ensure that the final score is accurate. The corpus only includes responses where all experts agree about the appropriate score. Thus the responses in this corpus have two important characteristics: first, the final score can be considered a true gold standard; second, this final score is not controversial.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EXEMPLAR responses",
"sec_num": "3.2"
},
{
"text": "The original set of responses had a uniform distribution of human scores. To separate the effect of distribution, in this study we used a subset sampled to match the score distribution in the MAIN corpus. This corpus consisted of 16,527 responses to the same 6 types of questions 1 with on average 2,754 responses per task. This corpus was also randomly partitioned into training and test sets using a 2:1 ratio.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EXEMPLAR responses",
"sec_num": "3.2"
},
{
"text": "Since the total number of responses in the EXEMPLAR corpus was much smaller than in the MAIN corpus, we randomly sampled 12,398 responses from the training partition of the MAIN corpus matching the score distributions in the other two corpora. We will use this MAIN* corpus to separate the effect of the nature of the training set (random sample vs. exemplar) from the effect of the size of the training set. Table 1 summarizes the main properties of each corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 409,
"end": 417,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "EXEMPLAR responses",
"sec_num": "3.2"
},
{
"text": "All responses were processed using an automated speech recognition system using the Kaldi toolkit (Povey et al., 2011) and the approach described by Tao et al. (2016) . The language model was based on tri-grams. The acoustic models were based on a 5-layer DNN and 13 MFCC-based features. Tao et al. (2016) give further detail about the model training procedure.",
"cite_spans": [
{
"start": 98,
"end": 118,
"text": "(Povey et al., 2011)",
"ref_id": "BIBREF23"
},
{
"start": 149,
"end": 166,
"text": "Tao et al. (2016)",
"ref_id": "BIBREF29"
},
{
"start": 288,
"end": 305,
"text": "Tao et al. (2016)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automated scoring engine 4.1 Automated speech recognition",
"sec_num": "4"
},
{
"text": "The ASR system was trained on a proprietary corpus consisting of 800 hours of non-native speech from 8,700 speakers of more than 100 native languages. The speech in the ASR training corpus was elicited using questions similar to the ones considered in this study. There was no overlap of speakers or questions between the ASR training corpus and the corpus used in this paper. We did not additionally adapt the ASR to the speakers or responses in this study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automated scoring engine 4.1 Automated speech recognition",
"sec_num": "4"
},
{
"text": "To estimate the ASR word error rate (WER), we obtained human transcriptions for 480 responses randomly selected from the evaluation partition. The median WER for these responses was 34%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automated scoring engine 4.1 Automated speech recognition",
"sec_num": "4"
},
{
"text": "For each response, we extracted 77 different features which covered two of the three aspects of language proficiency considered by the human raters: delivery (51 features) and language use (22 features). For this study we did not use any features that cover the content of the response.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.2"
},
{
"text": "Features related to delivery covered general fluency, pronunciation and prosody. Fluency features include general speech rate as well as features that capture pausing patterns in the response such as mean duration of pauses, mean number of words between two pauses, and the ratio of pauses to speech. Pronunciation quality was measured using the average confidence scores and acoustic model scores computed by the ASR system for the words in the 1-best ASR hypothesis. Finally, prosody was evaluated by measuring patterns of variation in time intervals between stressed syllables as well as the number of syllables between adjacent stressed syllables and variation in the durations of vowels and consonants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.2"
},
{
"text": "1 The actual questions were different across the corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.2"
},
{
"text": "Features related to language use covered vocabulary, grammar and some aspects of discourse structure. Vocabulary-related features included the average log of the frequency of all content words and a comparison between the response vocabulary and several reference corpora. Grammar was evaluated using a CVA-based comparison computed based on part-of-speech tags, a range of features which measured occurrences of various syntactic structures, and the language model score of the response. Finally, a set of features measured the occurrence of various discourse markers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.2"
},
{
"text": "To ensure that the results are not an artifact of a particular learning algorithm (hereafter referred to as 'learners'), we used 7 different regressors, both linear and non-linear. For the linear models we used OLS Linear Regression, ElasticNet, Linear SVR, and Huber Regressor. Non-linear models included Random Forest Regressor (RF), Gradient Boosting Regressor (GB), and Multi-layer Perceptron regressor (MLP). In the operational scoring engine the coefficients in the linear models are often restricted to allow only positive values (Loukina et al., 2015). We did not apply such a restriction in this study to allow for a comparison between different types of learners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring models",
"sec_num": "4.3"
},
{
"text": "We used the scikit-learn (Pedregosa et al., 2011) implementation of the learners and the RSMTool toolkit for model training and evaluation. The hyper-parameters for non-deterministic models were optimized using a cross-validated search over a grid with mean squared error (MSE) as the objective function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring models",
"sec_num": "4.3"
},
{
"text": "The scoring models were trained on the training partition of each of the three corpora. Separate models were trained for each of the 6 question types for a total of 126 models (3 corpora * 6 question types * 7 regressors). Each model was then evaluated on the responses to the same task contained in the evaluation partitions of the MAIN and the EXEMPLAR corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring models",
"sec_num": "4.3"
},
{
"text": "We used a linear mixed-effect model (Searle et al., 1992; Snijders and Bosker, 2012) fitted using the statsmodels Python package (Seabold and Perktold, 2010) to identify statistically significant differences among the various models. We used prediction squared error for each response (N = 3,124,338) as a dependent variable, response as a random factor, and learner, training set and test set as fixed effects. We included both the main effects of training and test set as well as their interaction and used the Linear Regression and MAIN corpus as the reference categories. The average performance of each model is shown in Table 2. While the model was fitted using squared prediction error, for ease of interpretation and comparison with other studies, we report Pearson's correlation coefficient in the table and in the body of the paper. Corresponding values of root mean squared error (RMSE) are given in the Appendix. Unless stated otherwise, p < .0001 for all effects reported as significant.",
"cite_spans": [
{
"start": 36,
"end": 57,
"text": "(Searle et al., 1992;",
"ref_id": null
},
{
"start": 58,
"end": 84,
"text": "Snijders and Bosker, 2012)",
"ref_id": "BIBREF28"
},
{
"start": 129,
"end": 157,
"text": "(Seabold and Perktold, 2010)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The effect of training set, evaluation set and learner",
"sec_num": "5.1"
},
{
"text": "The effect of the choice of learner on model performance was statistically significant but very small. Most of the more complex models resulted in higher prediction error than OLS linear regression. Huber regression (p = 0.007) and MLP regression gave a slight boost in performance. Random Forest and Linear SVR gave the highest prediction error. In all cases the differences in performance were very small: for RF and SVR the difference between these learners and OLS was 0.03%; in other cases the differences were around 0.01%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The effect of training set, evaluation set and learner",
"sec_num": "5.1"
},
{
"text": "The choice of the evaluation set had the strongest effect on the estimates of model performance. The best model trained on the MAIN corpus of randomly selected responses achieved r = 0.66 (MLP) when evaluated on the MAIN corpus. This is consistent with other results reported for similar corpora: previous studies cite values between 0.60 and 0.67 depending on the question type and system used. This model achieved substantially higher performance on the EXEMPLAR corpus, with r = 0.80. In other words, the corpus that contained typical responses that could be accurately scored by human raters was also accurately scored by the automated engine.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The effect of training set, evaluation set and learner",
"sec_num": "5.1"
},
{
"text": "Disappointingly, we did not see any improvement in performance when the models were trained on the EXEMPLAR corpus: the performance on the MAIN corpus was in fact slightly worse than when the models were trained on the MAIN corpus, with the highest correlation being r = 0.64 (vs. r = 0.66). The performance of these models was also no better than the performance of the models trained on the same amount of randomly sampled responses (MAIN*).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The effect of training set, evaluation set and learner",
"sec_num": "5.1"
},
{
"text": "As expected, models trained on EXEMPLAR responses reached high agreement when evaluated on EXEMPLAR responses (r = 0.79). The performance of this model was also better than the performance of the model trained on MAIN*. That is, training on EXEMPLAR responses gives an advantage over training on the same number of randomly sampled responses when the model is evaluated on EXEMPLAR responses. However, there was no difference between the model trained on the full training set of the MAIN corpus and the model trained on the EXEMPLAR corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The effect of training set, evaluation set and learner",
"sec_num": "5.1"
},
{
"text": "To further evaluate whether training on a larger number of EXEMPLAR responses may have led to better performance on the MAIN corpus, we re-trained the models using all responses pooled across the different question types. Such an approach has been previously used in other studies in situations where all types of questions are scored based on the same or similar rubrics and the scoring models do not include any question-specific features (Higgins et al., 2011; Loukina et al., 2015). A substantial increase in the size of the training set to some extent compensates for loss of information about question-specific patterns. The models were evaluated by question type, as in the rest of this paper.",
"cite_spans": [
{
"start": 441,
"end": 463,
"text": "(Higgins et al., 2011;",
"ref_id": "BIBREF11"
},
{
"start": 464,
"end": 485,
"text": "Loukina et al., 2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Size of the training set",
"sec_num": "5.2"
},
{
"text": "To obtain the learning curves for different training sets, we trained all models using training sets of varying sizes, from 1,000 responses to the full training partition of a given corpus. For each N other than the full corpus size, we trained models 5 times using 5 randomly sampled training sets. Figure 1 shows the resulting learning curves. The comparison between the two curves showed that when models are evaluated on the MAIN corpus, training on EXEMPLAR responses has a small advantage for a very small training set (N = 1,000). Once the training set is sufficiently large (for our data, N > 4,000), training on randomly sampled responses leads to a slightly higher performance than training on the same number of EXEMPLAR responses.",
"cite_spans": [],
"ref_spans": [
{
"start": 313,
"end": 327,
"text": "Figure 1 shows",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Size of the training set",
"sec_num": "5.2"
},
{
"text": "At the same time, training on EXEMPLAR responses had a clear advantage when models were evaluated on EXEMPLAR responses, although the difference between the two models decreased with the increase in the size of the training set. Thus, our results are consistent with the phenomenon of hard case bias described in Beigman Klebanov and -training on noisy data leads to somewhat weaker performance on clear-cut cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Size of the training set",
"sec_num": "5.2"
},
{
"text": "To conclude, having a larger set of EXEMPLAR responses might have slightly increased the performance of the models on EXEMPLAR responses, but it is unlikely that it would have given a performance boost on the MAIN corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Size of the training set",
"sec_num": "5.2"
},
{
"text": "While differences in training data do not seem to yield consistent differences in performance for the various learners, it is still possible that learners create somewhat different representations when trained on MAIN vs. EXEMPLAR, as was the case, for example, in (Beigman Klebanov and Beigman, 2014). This would, in turn, suggest that the two models could embody different and potentially complementary views of the data, each dealing better with a different subset of the data. It is likewise possible that different learners created usefully different representations. To assess whether this is likely to be a promising direction for further investigation, we compared the predictions generated by different models by computing correlations between them. The correlations were very high: the average correlation between predictions generated by different learners trained on the same dataset was r = 0.97 (min r = 0.92). The average correlation between predictions generated by the same learner trained on different datasets was r = 0.98 (min r = 0.95). In other words, different learners trained on different corpora seem to be producing essentially the same predictions; this suggests that model combination strategies are unlikely to be very effective.",
"cite_spans": [
{
"start": 265,
"end": 301,
"text": "(Beigman Klebanov and Beigman, 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "How similar are predictions from different models?",
"sec_num": "5.3"
},
{
"text": "To better understand the source of errors on the MAIN corpus, we conducted a qualitative error analysis of the 80 responses (20 per score level) with the largest scoring errors, based on predictions generated using OLS linear regression. Inconsistencies in human scoring accounted for the discrepancies for 25 of these responses. For an additional 18 responses (11 of them with a human score of 4), the ASR hypothesis was flagged as particularly inaccurate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "6"
},
{
"text": "For the remaining responses we observed different patterns at different score levels. At the lower score points (1 and 2), responses incorrectly scored by the automated scoring engine often contained individually intelligible words or even small chunks of locally grammatical strings, but the response as a whole was incoherent or incomprehensible in terms of content. Out of the 37 remaining responses, 15 fell into this category, most of them at score 1 (13 responses). These responses were over-scored by the automated scoring engine based on fluency or grammar features that correctly captured local patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "6"
},
{
"text": "The pattern was reversed at score levels 3 and 4: these responses were clear, intelligible and syntactically well-formed, with content that was tightly targeted to the question. Yet the speech was halting, choppy, and slow, and contained frequent long pauses. Out of the 22 remaining responses, 9 fell into this category. As a result, they were scored lower by the automated scoring engine, since such fluency patterns are generally more common in responses at lower score levels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "6"
},
{
"text": "Based on the results of our evaluations reported in Table 2, our initial hypothesis for research question (1) has to be rejected for the MAIN corpus: the results show no observable effect on scoring model performance of the training set used (the large corpus of randomly selected responses (MAIN) or the EXEMPLAR corpus); average prediction error and Pearson r correlations vary only minimally for these two evaluation corpora when the different training corpora are used to build the scoring model. Training on EXEMPLAR responses has a small advantage over training on the same number of randomly sampled responses from the MAIN corpus when the models are evaluated on EXEMPLAR responses, but this advantage disappears when the training corpus contains a sufficiently large number of randomly sampled responses.",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 59,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "On the other hand, our initial hypothesis for research question (2) is confirmed: system performance increases substantially when scoring models are evaluated on the EXEMPLAR corpus vs. the MAIN corpus (r = 0.80 vs. r = 0.66). Additionally, our results show that all seven regressors we used to build scoring models perform similarly on our data, which is also borne out by the high correlations between scores generated by the different learners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "In short, while the properties of the evaluation set matter substantially, this does not hold for the training set (as long as its size is not too small). On the one hand, this is somewhat disappointing, since we had hoped to obtain better scoring models when using exemplar responses for training; on the other hand, it is encouraging to see how well automated scoring models work (r = 0.80) when evaluated on data where human raters are in agreement about the response scores (true gold standard data). In some sense, making errors on clear-cut cases is a bigger validity problem for a scoring system than making errors on cases where the correct label is somewhat controversial. Evaluation on clear-cut cases thus provides additional information about the performance of a scoring system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "We now consider possible reasons for the lack of substantial improvement in performance on EXEMPLAR data when training on EXEMPLAR data vs. a sufficiently large MAIN corpus. Based on the analysis by Beigman Klebanov and Beigman, the potential for hard case bias (namely, a situation where the presence of hard cases in the training data compromises performance on \"easy\" test data) could arise when the hard cases have an adversarial placement in the feature space for a particular learning algorithm. For example, they show that the clustering of hard cases in an area that is far from the separation plane creates the potential for hard case bias for a system that is trained through hinge-loss minimization. Our results thus represent good news for the feature set: it is apparently rich enough not to represent the data in a way that puts a large cluster of hard cases in an unfortunate location, for a variety of learning algorithms. That said, we do observe that Linear SVR suffers from some hard case bias, as it performs somewhat worse on EXEMPLAR responses when trained on MAIN vs. EXEMPLAR (0.767 vs. 0.782). We also note that hard case bias does emerge for Linear Regression when the amount of noisy training data is relatively small; a larger dataset thus seems important for counteracting the detrimental effect of the presence of hard cases in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "We also performed a manual error analysis on a small set of highly discrepant machine and human scores and found that for a substantial subset of the data investigated (around 30%), human rater errors caused the score discrepancies. In most other cases, the discrepancies between machine and human scores could be attributed to situations where different sub-constructs of speaking proficiency diverged substantially from each other. For instance, we identified responses with locally correct grammar and reasonable fluency but with no meaningful content. Because of the lack of content, such responses are scored very low by human raters but somewhat higher by the machine, e.g., based on features related to fluency and local grammatical accuracy. We also found the opposite, i.e., responses with very good content but sub-optimal fluency characteristics. Human raters typically award high scores to such responses if the sub-optimal fluency aspects do not interfere substantially with the intelligibility of the response, but the machine scores are lower based on the sub-optimal performance in the fluency domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "For both scenarios, it is important to mention that our scoring models do not contain any features related to content or discourse; developing and adding such features to the automated speech scoring system is an important goal for future work to remediate the score discrepancy in these situations, in addition to the overall goal of providing a comprehensive coverage of the speaking construct in an automated speech scoring system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "In this study, we compared the effect of using two different corpora of scored spoken responses for training and evaluation of automated scoring models built using seven different regressor machine learning systems. The MAIN corpus contained a large set of randomly selected responses from an English language assessment. The EXEM-PLAR corpus contained responses where multiple human raters had agreed on the scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "Our main findings were that while the choice of training corpus has no substantial effect on scoring model performance, as long as the noisier training set is sufficiently large, the reverse is true for the choice of evaluation corpus: human-machine score correlations were as high as r = 0.80 for the EXEMPLAR corpus, no matter what training corpus was used to build the model or what regressor machine learning system was used. This compares to r = 0.65 when using the MAIN corpus for evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "Unfortunately, contrary to our initial assumptions, it is not possible to achieve an improvement in performance by simply training the model on the EXEMPLAR corpus, since model performance in our experiments was only minimally dependent on the training corpus. While we observed that the number of responses necessary to achieve optimal performance is higher when the model is trained on randomly selected responses from the MAIN corpus than on the EXEMPLAR corpus, in many real-life situations the practical demands of collecting an EXEMPLAR corpus of the quality used in this study are likely to outweigh the cost of collecting a larger set of slightly 'noisier' data, especially considering the very limited gain in performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "Furthermore, we observed effects of differential profiles of responses in terms of various speaking proficiency sub-constructs: e.g., for responses with low human scores where the content is less well rendered than fluency, machine scores may be inflated; the reverse holds for responses with high human scores where the content is very well rendered but where machine scores can be lower due to lack of fluency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "One main goal for future work, derived from our results and the associated error analysis, is to develop features capturing content aspects of the response and integrate them into the automated speech scoring system, in order to yield more comprehensive construct coverage and to mitigate the observed effects for responses that exhibit differential performance across the various speech sub-constructs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
}
],
"back_matter": [
{
"text": "We thank Pamela Mollaun for discussing with us many aspects of this work and for helping us obtain the exemplar responses; Matt Mulholland for extracting the features for various corpora used in this study; Keelan Evanini, Su-Youn Yoon, Larry Davis and three anonymous BEA reviewers for their comments and suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": " Table 4 : The values for the learning curves presented in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 1,
"end": 8,
"text": "Table 4",
"ref_id": null
},
{
"start": 59,
"end": 67,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automated essay scoring with e-rater\u00ae v.2",
"authors": [
{
"first": "Yigal",
"middle": [],
"last": "Attali",
"suffix": ""
},
{
"first": "Jill",
"middle": [],
"last": "Burstein",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Technology, Learning, and Assessment",
"volume": "4",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yigal Attali and Jill Burstein. 2006. Automated essay scoring with e-rater R v.2. Journal of Technol- ogy, Learning, and Assessment 4(3). https: //ejournals.bc.edu/ojs/index.php/ jtla/article/view/1650/1492.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning with annotation noise",
"authors": [
{
"first": "Eyal",
"middle": [],
"last": "Beigman",
"suffix": ""
},
{
"first": "Beata",
"middle": [],
"last": "Beigman Klebanov",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP -ACL-IJCNLP '09. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3115/1687878.1687919"
]
},
"num": null,
"urls": [],
"raw_text": "Eyal Beigman and Beata Beigman Klebanov. 2009. Learning with annotation noise. In Proceedings of the Joint Conference of the 47th Annual Meet- ing of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP -ACL-IJCNLP '09. Association for Com- putational Linguistics, Morristown, NJ, USA, Au- gust, page 280. https://doi.org/10.3115/ 1687878.1687919.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "From Annotator Agreement to Noise Models",
"authors": [
{
"first": "Beata",
"middle": [
"Beigman"
],
"last": "Klebanov",
"suffix": ""
},
{
"first": "Eyal",
"middle": [],
"last": "Beigman",
"suffix": ""
}
],
"year": 2009,
"venue": "Computational Linguistics",
"volume": "35",
"issue": "4",
"pages": "495--503",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beata Beigman Klebanov and Eyal Beigman. 2009. From Annotator Agreement to Noise Mod- els. Computational Linguistics 35(4):495-503.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Difficult Cases: From Data to Learning, and Back",
"authors": [
{
"first": "Beata",
"middle": [
"Beigman"
],
"last": "Klebanov",
"suffix": ""
},
{
"first": "Eyal",
"middle": [],
"last": "Beigman",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "390--396",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beata Beigman Klebanov and Eyal Beigman. 2014. Difficult Cases: From Data to Learning, and Back. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. Bal- timore, Maryland, pages 390-396. http:// aclweb.org/anthology/P14-2064.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Fluency and Structural Complexity as Predictors of L2 Oral Proficiency",
"authors": [
{
"first": "Jared",
"middle": [],
"last": "Bernstein",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Masanori",
"middle": [],
"last": "Suzuki",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of Interspeech 2010",
"volume": "",
"issue": "",
"pages": "1241--1244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jared Bernstein, Jian Cheng, and Masanori Suzuki. 2010. Fluency and Structural Com- plexity as Predictors of L2 Oral Proficiency. Proceedings of Interspeech 2010, Makuhari, Chiba, Japan pages 1241-1244. https: //www.isca-speech.org/archive/ interspeech_2010/i10_1241.html.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Does a Rater's Familiarity with a Candidate's Pronunciation Affect the Rating in Oral Proficiency Interviews?",
"authors": [
{
"first": "M",
"middle": [
"D"
],
"last": "Carey",
"suffix": ""
},
{
"first": "R",
"middle": [
"H"
],
"last": "Mannell",
"suffix": ""
},
{
"first": "P",
"middle": [
"K"
],
"last": "Dunn",
"suffix": ""
}
],
"year": 2011,
"venue": "Language Testing",
"volume": "28",
"issue": "2",
"pages": "201--219",
"other_ids": {
"DOI": [
"10.1177/0265532210393704"
]
},
"num": null,
"urls": [],
"raw_text": "M. D. Carey, R. H. Mannell, and P. K. Dunn. 2011. Does a Rater's Familiarity with a Candidate's Pronunciation Affect the Rating in Oral Proficiency Interviews? Language Test- ing 28(2):201-219. https://doi.org/10. 1177/0265532210393704.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Utilizing cumulative logit models and human computation on automated speech assessment",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 7th Workshop on the Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "73--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Chen. 2012. Utilizing cumulative logit models and human computation on automated speech as- sessment. In Proceedings of the 7th Workshop on the Innovative Use of NLP for Building Educational Ap- plications. pages 73-79. http://dl.acm.org/ citation.cfm?id=2390393.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The influence of training and experience on rater performance in scoring spoken language",
"authors": [
{
"first": "Larry",
"middle": [],
"last": "Davis",
"suffix": ""
}
],
"year": 2016,
"venue": "Language Testing",
"volume": "33",
"issue": "1",
"pages": "117--135",
"other_ids": {
"DOI": [
"10.1177/0265532215582282"
]
},
"num": null,
"urls": [],
"raw_text": "Larry Davis. 2016. The influence of training and expe- rience on rater performance in scoring spoken lan- guage. Language Testing 33(1):117-135. https: //doi.org/10.1177/0265532215582282.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Rater types in writing performance assessments: A classification approach to rater variability",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Eckes",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "25",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1177/0265532207086780"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Eckes. 2008. Rater types in writing per- formance assessments: A classification approach to rater variability, volume 25. https://doi. org/10.1177/0265532207086780.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The impact of training data on automated short answer scoring performance",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Nitin",
"middle": [],
"last": "Madnani",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications, BEA@NAACL-HLT 2015",
"volume": "",
"issue": "",
"pages": "81--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Heilman and Nitin Madnani. 2015. The im- pact of training data on automated short answer scoring performance. In Proceedings of the Tenth Workshop on Innovative Use of NLP for Build- ing Educational Applications, BEA@NAACL-HLT 2015, June 4, 2015, Denver, Colorado, USA. pages 81-85. http://aclweb.org/anthology/ W/W15/W15-0610.pdf.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A three-stage approach to the automated scoring of spontaneous spoken responses",
"authors": [
{
"first": "Derrick",
"middle": [],
"last": "Higgins",
"suffix": ""
},
{
"first": "Xiaoming",
"middle": [],
"last": "Xi",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Zechner",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Williamson",
"suffix": ""
}
],
"year": 2011,
"venue": "Computer Speech & Language",
"volume": "25",
"issue": "2",
"pages": "282--306",
"other_ids": {
"DOI": [
"10.1016/j.csl.2010.06.001"
]
},
"num": null,
"urls": [],
"raw_text": "Derrick Higgins, Xiaoming Xi, Klaus Zechner, and David Williamson. 2011. A three-stage approach to the automated scoring of spontaneous spoken re- sponses. Computer Speech & Language 25(2):282- 306. https://doi.org/10.1016/j.csl. 2010.06.001.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Finding a Tradeoff between Accuracy and Rater's Workload in Grading Clustered Short Answers",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Horbach",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Magdalena",
"middle": [],
"last": "Wolska",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14",
"volume": "",
"issue": "",
"pages": "588--595",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Horbach, Alexis Palmer, and Magdalena Wolska. 2014. Finding a Tradeoff between Accu- racy and Rater's Workload in Grading Clustered Short Answers. Proceedings of the Ninth Inter- national Conference on Language Resources and Evaluation (LREC'14) pages 588-595. http: //www.lrec-conf.org/proceedings/ lrec2014/pdf/887_Paper.pdf.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Noise or additional information? Leveraging crowdsource annotation item agreement for natural language tasks",
"authors": [
{
"first": "Emily",
"middle": [
"K"
],
"last": "Jamison",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP 2015. Association for Computational Linguistics, Lisbon, Portugal",
"volume": "",
"issue": "",
"pages": "291--297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily K. Jamison and Iryna Gurevych. 2015. Noise or additional information? Leveraging crowdsource annotation item agreement for natural language tasks. In Proceedings of EMNLP 2015. Associ- ation for Computational Linguistics, Lisbon, Por- tugal, pages 291-297. http://aclweb.org/ anthology/D15-1035.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A Study on the Impact of Fatigue on Human Raters when Scoring Speaking Responses",
"authors": [
{
"first": "G",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Mollaun",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Xi",
"suffix": ""
}
],
"year": 2014,
"venue": "Language Testing",
"volume": "31",
"issue": "",
"pages": "479--499",
"other_ids": {
"DOI": [
"10.1177/0265532214530699"
]
},
"num": null,
"urls": [],
"raw_text": "G. Ling, P. Mollaun, and X. Xi. 2014. A Study on the Impact of Fatigue on Human Raters when Scoring Speaking Responses. Language Testing 31:479-499. https://doi.org/10.1177/ 0265532214530699.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Speech-and Text-driven Features for Automated Scoring of English Speaking Tasks",
"authors": [
{
"first": "Anastassia",
"middle": [],
"last": "Loukina",
"suffix": ""
},
{
"first": "Nitin",
"middle": [],
"last": "Madnani",
"suffix": ""
},
{
"first": "Aoife",
"middle": [],
"last": "Cahill",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Speech-Centric Natural Language Processing",
"volume": "",
"issue": "",
"pages": "67--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anastassia Loukina, Nitin Madnani, and Aoife Cahill. 2017. Speech-and Text-driven Features for Au- tomated Scoring of English Speaking Tasks. In Proceedings of the First Workshop on Speech- Centric Natural Language Processing. Association for Computational Linguistics, Copenhagen, Den- mark., pages 67-77. http://www.aclweb. org/anthology/W17-4609.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Feature selection for automated speech scoring",
"authors": [
{
"first": "Anastassia",
"middle": [],
"last": "Loukina",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Zechner",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "12--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anastassia Loukina, Klaus Zechner, Lei Chen, and Michael Heilman. 2015. Feature selection for au- tomated speech scoring. In Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications. pages 12-19. http:// www.aclweb.org/anthology/W15-0602.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Building Better Open-Source Tools to Support Fairness in Automated Scoring",
"authors": [
{
"first": "Nitin",
"middle": [],
"last": "Madnani",
"suffix": ""
},
{
"first": "Anastassia",
"middle": [],
"last": "Loukina",
"suffix": ""
},
{
"first": "Alina",
"middle": [
"Von"
],
"last": "Davier",
"suffix": ""
},
{
"first": "Jill",
"middle": [],
"last": "Burstein",
"suffix": ""
},
{
"first": "Aoife",
"middle": [],
"last": "Cahill",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on ethics in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "41--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitin Madnani, Anastassia Loukina, Alina Von Davier, Jill Burstein, and Aoife Cahill. 2017. Building Better Open-Source Tools to Support Fairness in Automated Scoring. In Proceedings of the First Workshop on ethics in Natural Language Process- ing, Valencia, Spain, April 4th, 2017. Association for Computational Linguistics, Valencia, pages 41- 52. http://www.aclweb.org/anthology/ W17-1605.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Learning to parse with IAA-weighted loss",
"authors": [
{
"first": "H\u00e9ctor",
"middle": [
"Mart\u00ednez"
],
"last": "Alonso",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Arne",
"middle": [],
"last": "Skjaerholt",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1357--1361",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H\u00e9ctor Mart\u00ednez Alonso, Barbara Plank, Arne Skjaerholt, and Anders S\u00f8gaard. 2015. Learning to parse with IAA-weighted loss. In Proceedings of the 2015 Conference of the North American Chap- ter of the Association for Computational Linguis- tics: Human Language Technologies. Association for Computational Linguistics, Denver, Colorado, pages 1357-1361. http://www.aclweb.org/ anthology/N15-1152.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Grading Essays by Computer",
"authors": [
{
"first": "Ellis",
"middle": [
"B"
],
"last": "Page",
"suffix": ""
}
],
"year": 1966,
"venue": "The Phi Delta Kappan",
"volume": "47",
"issue": "5",
"pages": "238--243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellis B. Page. 1966. The Imminence of ... Grad- ing Essays by Computer. The Phi Delta Kappan 47(5):238-243.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Scikit-learn: Machine learning in Python",
"authors": [
{
"first": "F",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Dubourg",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Vanderplas",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Cournapeau",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Brucher",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Perrot",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Duchesnay",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Pretten- hofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Pas- sos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learn- ing in Python. Journal of Machine Learning Research 12:2825-2830. http://www.jmlr. org/papers/v12/pedregosa11a.html.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Fairness in Test Scoring",
"authors": [
{
"first": "Randall",
"middle": [
"D."
],
"last": "Penfield",
"suffix": ""
}
],
"year": 2016,
"venue": "Assessment and Measurement",
"volume": "",
"issue": "",
"pages": "55--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Randall D. Penfield. 2016. Fairness in Test Scoring. In Neil J. Dorans and Linda L. Cook, editors, Fair- ness in Educational Assessment and Measurement, Routledge, pages 55-76.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning part-of-speech taggers with inter-annotator agreement loss",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "742--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara Plank, Dirk Hovy, and Anders S\u00f8gaard. 2014. Learning part-of-speech taggers with inter-annotator agreement loss. In Proceedings of the 14th Con- ference of the European Chapter of the Associa- tion for Computational Linguistics. Association for Computational Linguistics, Gothenburg, Sweden, pages 742-751. http://www.aclweb.org/ anthology/E14-1078.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The Kaldi speech recognition toolkit",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "Arnab",
"middle": [],
"last": "Ghoshal",
"suffix": ""
},
{
"first": "Gilles",
"middle": [],
"last": "Boulianne",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Glembek",
"suffix": ""
},
{
"first": "Nagendra",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "Mirko",
"middle": [],
"last": "Hannemann",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Motlicek",
"suffix": ""
},
{
"first": "Yanmin",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Schwarz",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Workshop on Automatic Speech Recognition and Understanding",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, Jan Silovsky, Georg Stemmer, and Karel Vesely. 2011. The Kaldi speech recognition toolkit. In Proceedings of the Workshop on Automatic Speech Recognition and Understanding.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Reliability Measurement without Limits",
"authors": [
{
"first": "Dennis",
"middle": [],
"last": "Reidsma",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Carletta",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "3",
"pages": "319--326",
"other_ids": {
"DOI": [
"10.1162/coli.2008.34.3.319"
]
},
"num": null,
"urls": [],
"raw_text": "Dennis Reidsma and Jean Carletta. 2008. Reliabil- ity Measurement without Limits. Computational Linguistics 34(3):319-326. https://doi.org/ 10.1162/coli.2008.34.3.319.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Neutralizing linguistically problematic annotations in unsupervised dependency parsing evaluation",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "663--672",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Schwartz, Omri Abend, Roi Reichart, and Ari Rappoport. 2011. Neutralizing linguistically prob- lematic annotations in unsupervised dependency parsing evaluation. In Proceedings of the 49th Annual Meeting of the Association for Computa- tional Linguistics: Human Language Technologies -Volume 1. Association for Computational Linguis- tics, Stroudsburg, PA, USA, HLT '11, pages 663- 672. http://dl.acm.org/citation.cfm? id=2002472.2002557.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Statsmodels: Econometric and statistical modeling with Python",
"authors": [
{
"first": "Skipper",
"middle": [],
"last": "Seabold",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Perktold",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Python in Science Conference",
"volume": "",
"issue": "",
"pages": "57--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Skipper Seabold and Josef Perktold. 2010. Statsmodels: Econometric and statistical mod- eling with Python. In Proceedings of the Python in Science Conference. pages 57- 61. https://conference.scipy.org/ proceedings/scipy2010/seabold.html.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Multilevel Analysis",
"authors": [
{
"first": "Tom",
"middle": [
"A",
"B"
],
"last": "Snijders",
"suffix": ""
},
{
"first": "Roel",
"middle": [
"J"
],
"last": "Bosker",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom A.B. Snijders and Roel J. Bosker. 2012. Multi- level Analysis. Sage, London, 2nd edition.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Exploring deep learning architectures for automatically grading nonnative spontaneous speech",
"authors": [
{
"first": "Jidong",
"middle": [],
"last": "Tao",
"suffix": ""
},
{
"first": "Shabnam",
"middle": [],
"last": "Ghaffarzadegan",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Zechner",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "6140--6144",
"other_ids": {
"DOI": [
"10.1109/ICASSP.2016.7472857"
]
},
"num": null,
"urls": [],
"raw_text": "Jidong Tao, Shabnam Ghaffarzadegan, Lei Chen, and Klaus Zechner. 2016. Exploring deep learn- ing architectures for automatically grading non- native spontaneous speech. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, pages 6140-6144. https://doi.org/10.1109/ ICASSP.2016.7472857.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Monitoring of scoring using the e-rater automated scoring system and human raters on a writing test",
"authors": [
{
"first": "Zhen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Alina",
"middle": [],
"last": "Von Davier",
"suffix": ""
}
],
"year": 2014,
"venue": "ETS Research Report Series",
"volume": "2014",
"issue": "1",
"pages": "1--21",
"other_ids": {
"DOI": [
"10.1002/ets2.12005"
]
},
"num": null,
"urls": [],
"raw_text": "Zhen Wang and Alina von Davier. 2014. Monitor- ing of scoring using the e-rater automated scoring system and human raters on a writing test. ETS Research Report Series 2014(1):1-21. https:// doi.org/10.1002/ets2.12005.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A Framework for Evaluation and Use of Automated Scoring",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Williamson",
"suffix": ""
},
{
"first": "Xiaoming",
"middle": [],
"last": "Xi",
"suffix": ""
},
{
"first": "F",
"middle": [
"Jay"
],
"last": "Breyer",
"suffix": ""
}
],
"year": 2012,
"venue": "Educational Measurement: Issues and Practice",
"volume": "31",
"issue": "1",
"pages": "2--13",
"other_ids": {
"DOI": [
"10.1111/j.1745-3992.2011.00223.x"
]
},
"num": null,
"urls": [],
"raw_text": "David M. Williamson, Xiaoming Xi, and F. Jay Breyer. 2012. A Framework for Evaluation and Use of Au- tomated Scoring. Educational Measurement: Issues and Practice 31(1):2-13. https://doi.org/ 10.1111/j.1745-3992.2011.00223.x.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Automatic scoring of non-native spontaneous speech in tests of spoken English",
"authors": [
{
"first": "Klaus",
"middle": [],
"last": "Zechner",
"suffix": ""
},
{
"first": "Derrick",
"middle": [],
"last": "Higgins",
"suffix": ""
},
{
"first": "Xiaoming",
"middle": [],
"last": "Xi",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Williamson",
"suffix": ""
}
],
"year": 2009,
"venue": "Speech Communication",
"volume": "51",
"issue": "10",
"pages": "883--895",
"other_ids": {
"DOI": [
"10.1016/j.specom.2009.04.009"
]
},
"num": null,
"urls": [],
"raw_text": "Klaus Zechner, Derrick Higgins, Xiaoming Xi, and David M. Williamson. 2009. Automatic scoring of non-native spontaneous speech in tests of spoken English. Speech Communica- tion 51(10):883-895. https://doi.org/10. 1016/j.specom.2009.04.009.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Reducing Annotation Efforts in Supervised Short Answer Scoring",
"authors": [
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Aoife",
"middle": [],
"last": "Cahill",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "124--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Torsten Zesch, Michael Heilman, and Aoife Cahill. 2015. Reducing Annotation Efforts in Supervised Short Answer Scoring. In Proceedings of the Tenth Workshop on Innovative Use of NLP for Build- ing Educational Applications. Denver, Colorado, pages 124-132. http://www.aclweb.org/ anthology/W15-0615.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Model performance (r) depending on the size of the training set for different combinations of training and test sets. The dotted line indicates the maximum performance obtained on the EXEMPLAR responses to facilitate comparison with the MAIN set. Note that the x-axis is plotted on a logarithmic scale.",
"num": null,
"type_str": "figure"
},
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td>Corpus</td><td>Total Per model</td></tr><tr><td>MAIN: Train</td><td>464,664 77,444</td></tr><tr><td>MAIN: Test</td><td>219,030 36,505</td></tr><tr><td>MAIN* : Train</td><td>12,398 2,066</td></tr><tr><td>EXEMPLAR:Train</td><td>12,390 2,065</td></tr><tr><td>EXEMPLAR:Test</td><td>4,137 689</td></tr></table>",
"html": null,
"text": "The corpus consisted of 683,694 spo-",
"num": null
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Characteristics of the corpora used in this study. The table shows the total number of responses in each partition across all 6 question types and the average number of responses used to train/evaluate the model for each question type.",
"num": null
},
"TABREF3": {
"type_str": "table",
"content": "<table><tr><td>tion sets (see Appendix for table with numerical</td></tr><tr><td>values). All models were trained using OLS linear</td></tr><tr><td>regression.</td></tr></table>",
"html": null,
"text": "Average performance (Pearsons's r) across 6 question types from the two corpora in these studies using different combinations of learners and training sets.",
"num": null
}
}
}
}