{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:53:33.157846Z"
},
"title": "Word Discriminations for Vocabulary Inventory Prediction",
"authors": [
{
"first": "Frankie",
"middle": [],
"last": "Robertson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Jyv\u00e4skyl\u00e4",
"location": {}
},
"email": "frankie@robertson.name"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The aim of vocabulary inventory prediction is to predict a learner's whole vocabulary based on a limited sample of query words. This paper approaches the problem starting from the 2-parameter Item Response Theory (IRT) model, giving each word in the vocabulary a difficulty and discrimination parameter. The discrimination parameter is evaluated on the sub-problem of question item selection, familiar from the fields of Computerised Adaptive Testing (CAT) and active learning. Next, the effect of the discrimination parameter on prediction performance is examined, both in a binary classification setting, and in an information retrieval setting. Performance is compared with baselines based on word frequency. A number of different generalisation scenarios are examined, including generalising word difficulty and discrimination using word embeddings with a predictor network and testing on out-of-dataset data.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "The aim of vocabulary inventory prediction is to predict a learner's whole vocabulary based on a limited sample of query words. This paper approaches the problem starting from the 2-parameter Item Response Theory (IRT) model, giving each word in the vocabulary a difficulty and discrimination parameter. The discrimination parameter is evaluated on the sub-problem of question item selection, familiar from the fields of Computerised Adaptive Testing (CAT) and active learning. Next, the effect of the discrimination parameter on prediction performance is examined, both in a binary classification setting, and in an information retrieval setting. Performance is compared with baselines based on word frequency. A number of different generalisation scenarios are examined, including generalising word difficulty and discrimination using word embeddings with a predictor network and testing on out-of-dataset data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Given a small sample of words, how well can we predict whether a learner knows some out-ofsample word? This is the task of vocabulary inventory prediction. A clear motivation for the topic is to enable quicker and more precise placement testing. For example, a 40 word self-assessed word knowledge quiz used as a benchmark in this paper is quick enough that an L2 learner returning to a language learning app after a long break, in which they may have either forgotten a lot or had a lot of extra exposure to their target language, can be placed again quickly without excessive disruption.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper addresses the following research questions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. What are the empirical differences in performance between difficulty parameters produced by estimation of Item Response Theory (IRT) models and those based on word frequency, in terms of their application to vocabulary inventory prediction?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. How well can the IRT parameters of difficulty and discrimination be regressed based on word embeddings?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. Which approaches from the field of Computerised Adaptive Testing (CAT) help to select good items to query? Does the addition of a discrimination parameter help with question selection?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "4. Does the addition of a discrimination parameter help with the final prediction step?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Related Work Milton (2009) refers to the common assumption, made when quantifying vocabulary acquisition, that words are learnt in approximately descending order of frequency as the frequency assumption. It has been used in the field of reading research, for example in estimating vocabulary size, but can also provide a simple baseline for the task of vocabulary inventory prediction. Avdiu et al. (2019) approached the problem through feature engineering, taking frequency profiles of different genres and associating learners with them according to their responses. They used a large section of the data for training, without testing a scenario in which learned data is to be generalised to new learners with less data available, as in this paper. Item Response Theory (IRT) (Tatsuoka et al., 1968; Baker, 2001) is widely used to determine item difficulties and examinee ability in academic assessments. A key drawback of traditional IRT is that the actual content of the items is ignored. Instead, items are only understood in terms of their responses. This leaves no possibility of generalising item parameters to unseen items. Recent work has begun to generalise difficulty scores from representations of items' textual content using deep neural networks. For example, Benedetto et al. (2021) first fit an IRT model on questions from a cloud technology certification exam, before training a transformer model to regress the resulting difficulty scores, allowing generalisation to new questions without a pre-testing stage. Ehara (2019) approaches the problem of vocabulary prediction by fitting a Rasch (1960) model, equivalent to a 1-parameter logistic IRT model. The problem was modelled such that an equivalent neural network was constructed which included features based on GloVe (Pennington et al., 2014) word embeddings. As with Avdiu et al. (2019), a single stage of training was performed so that the abilities of the learners were learnt simultaneously with the weights of the prediction network. This network did not beat a word frequency and logistic regression baseline. In this paper, a 2-parameter logistic IRT model is fitted as an initial step, before proceeding to generalise these parameters using a word embedding based regressor.",
"cite_spans": [
{
"start": 15,
"end": 28,
"text": "Milton (2009)",
"ref_id": "BIBREF15"
},
{
"start": 381,
"end": 400,
"text": "Avdiu et al. (2019)",
"ref_id": "BIBREF0"
},
{
"start": 772,
"end": 795,
"text": "(Tatsuoka et al., 1968;",
"ref_id": "BIBREF21"
},
{
"start": 796,
"end": 807,
"text": "Baker, 2001",
"ref_id": "BIBREF1"
},
{
"start": 1279,
"end": 1302,
"text": "Benedetto et al. (2021)",
"ref_id": "BIBREF2"
},
{
"start": 1533,
"end": 1545,
"text": "Ehara (2019)",
"ref_id": "BIBREF7"
},
{
"start": 1607,
"end": 1619,
"text": "Rasch (1960)",
"ref_id": "BIBREF18"
},
{
"start": 1794,
"end": 1819,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF17"
},
{
"start": 1845,
"end": 1864,
"text": "Avdiu et al. (2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Computerised Adaptive Testing (CAT) (Lord, 1977; Wainer, 2000) has not been widely applied to the task of vocabulary inventory estimation. A CAT system selects questions based on an examinee's previous answers in order to converge on an accurate ability estimate faster. Related, but outside the CAT/IRT setting, Ehara et al. (2014a) build graphs from a combination of multiple corpora and apply label propagation to find a fixed set of queries which, in effect, gives a more accurate ability estimate than choosing at random. Restricting ourselves to the adaptive setting, the main prior art is the website http://testyourvocab.com/, which uses CAT to estimate vocabulary size based on word frequencies. To the best of the author's knowledge, there is no prior work attempting to quantify the accuracy of the ability estimates obtained when applying CAT to the problem of vocabulary inventory estimation.",
"cite_spans": [
{
"start": 36,
"end": 48,
"text": "(Lord, 1977;",
"ref_id": "BIBREF14"
},
{
"start": 49,
"end": 62,
"text": "Wainer, 2000)",
"ref_id": "BIBREF23"
},
{
"start": 311,
"end": 331,
"text": "Ehara et al. (2014a)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Three datasets are used in this paper. The first, SVD12K, is due to Ehara et al. (2012) and contains 12 000 words rated on a 5-point scale by 16 learners of English, most of whom have Japanese as their native language. Following Ehara et al. (2014b), the first learner is discarded due to lower quality data. The learners in SVD12K were all students of the University of Tokyo and we speculate that it is quite possible they have all learnt English for similar purposes, i.e. academic usage, and may even have attended the same English classes.",
"cite_spans": [
{
"start": 68,
"end": 87,
"text": "Ehara et al. (2012)",
"ref_id": "BIBREF10"
},
{
"start": 229,
"end": 249,
"text": "Ehara et al. (2014b)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "Figure 1: Plate diagram of the IRT model over respondents i ∈ I and items j ∈ J, relating p_{i,j} to a_j, b_j and θ_i, with priors a_j ∼ N(1.2, 0.25), b_j ∼ N(0, 1), θ_i ∼ N(0, 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "The other two datasets are used as additional test sets, so as to see how well the techniques generalise beyond the potentially rather narrow distribution of SVD12K. Both extra datasets are constructed such that they should be mainly composed of learners with Japanese as their L1, i.e. generalisation beyond learner L1 is not considered here. Ehara (2018) introduces EVKD1, a dataset consisting of responses to a 100 word 4-way multiple choice test given to 100 participants, administered using a Japanese crowdsourcing platform.",
"cite_spans": [
{
"start": 366,
"end": 378,
"text": "Ehara (2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "Respondents were asked to choose the correct definition of a word given in a context sentence. The final dataset is a section of responses to the website TestYourVocab 1 limited to responses from 2018 by participants who selected their country as \"Japan\". This dataset has a different selection of responses for each person.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "Given a matrix of responses r_{i,j} indexed by respondents i and items j, an IRT model predicts latent features of the items and respondents. Respondents are assigned abilities θ_i, while in 2-parameter IRT models, items are assigned difficulties b_j and discriminations a_j. Typically we predict binomial responses based on an Item Characteristic Curve (ICC) like so:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fitting an IRT Model",
"sec_num": "3.2"
},
{
"text": "ICC_j(θ) = (1 + e^{−a_j(θ − b_j)})^{−1}; P(r_{i,j} | θ_i, a_j, b_j) = ICC_j(θ_i); Q(r_{i,j} | θ_i, a_j, b_j) = 1 − ICC_j(θ_i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fitting an IRT Model",
"sec_num": "3.2"
},
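As a concrete illustration, the ICC and the response probabilities P and Q above can be sketched in a few lines of Python (a minimal sketch; the function names are our own, not taken from the paper's code):

```python
import math

def icc(theta, a, b):
    """2-parameter logistic Item Characteristic Curve: probability of a
    positive (known) response given ability theta, discrimination a and
    difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def p_response(theta, a, b):
    # P(r | theta, a, b): probability of a positive response
    return icc(theta, a, b)

def q_response(theta, a, b):
    # Q(r | theta, a, b): probability of a negative response
    return 1.0 - icc(theta, a, b)

# At theta == b the ICC is exactly 0.5 regardless of discrimination.
print(icc(0.0, 1.2, 0.0))  # 0.5
```

Raising the discrimination a steepens the curve around the difficulty b, which is what makes highly discriminating items informative near a learner's ability.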
{
"text": "The data of Ehara et al. (2012) is rated on a 5-point scale, suggesting a graded IRT model. A typical formulation might try to learn separate difficulty and discrimination parameters per item-level pair, significantly increasing the number of parameters to be learnt. In order to reduce the amount of data necessary to fit the IRT model, we learn only one difficulty and one discrimination per item and create fixed global offsets l_1, ..., l_4 ≥ 0 which produce offset difficulties for the thresholds. We then model:",
"cite_spans": [
{
"start": 12,
"end": 31,
"text": "Ehara et al. (2012)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fitting an IRT Model",
"sec_num": "3.2"
},
{
"text": "P(r_{i,j} ≥ k | θ_i, a_j, b_j) = ICC_j(θ_i − Σ_{s=k}^{4} l_s)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fitting an IRT Model",
"sec_num": "3.2"
},
{
"text": "And note that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fitting an IRT Model",
"sec_num": "3.2"
},
{
"text": "P(r_{i,j} = k) = P(r_{i,j} ≥ k) − P(r_{i,j} ≥ k+1); P(r_{i,j} ≥ 1) = 1; P(r_{i,j} ≥ 6) = 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fitting an IRT Model",
"sec_num": "3.2"
},
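The graded model above can be sketched as follows (a minimal sketch assuming the offsets l_1..l_4 are stored as a 0-indexed Python list; note that the level probabilities telescope to one):

```python
import math

def icc(theta, a, b):
    # 2-parameter logistic ICC from Section 3.2
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def p_geq(k, theta, a, b, l):
    """P(r >= k): boundary conditions P(r >= 1) = 1 and P(r >= 6) = 0,
    otherwise the ICC with ability shifted by the summed offsets l_k..l_4."""
    if k <= 1:
        return 1.0
    if k >= 6:
        return 0.0
    return icc(theta - sum(l[k - 1:]), a, b)

def p_eq(k, theta, a, b, l):
    # P(r = k) = P(r >= k) - P(r >= k + 1)
    return p_geq(k, theta, a, b, l) - p_geq(k + 1, theta, a, b, l)

offsets = [0.3, 0.2, 0.2, 0.1]  # hypothetical fitted values of l_1..l_4
# the five level probabilities telescope: their sum is P(r >= 1) - P(r >= 6) = 1
probs = [p_eq(k, 0.5, 1.2, 0.0, offsets) for k in range(1, 6)]
```

Because the thresholds share one difficulty and one discrimination per item, only the four global offsets are added to the parameter count, rather than four extra parameters per item.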
{
"text": "We compute the Maximum A Posteriori (MAP) estimate with Stan (Carpenter et al., 2017). The priors are illustrated alongside Figure 1. After fitting the model, we revert to considering the binomial case by defining P(r_{i,j}) := P(r_{i,j} = 5).",
"cite_spans": [
{
"start": 53,
"end": 77,
"text": "(Carpenter et al., 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 117,
"end": 125,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Fitting an IRT Model",
"sec_num": "3.2"
},
{
"text": "A simple frequency baseline for difficulty was constructed based on the word frequencies of the wordfreq (Speer et al., 2018) library. The wordfreq library incorporates frequencies from multiple corpora of different registers, ensuring balanced coverage by taking equal contributions from each register after removing outliers. Internally, wordfreq stores log frequencies on an 800 point scale. These are first negated and then standardised according to their mean and standard deviation over the words in the SVD12K dataset so that they lie in the same range as the IRT difficulties.",
"cite_spans": [
{
"start": 105,
"end": 125,
"text": "(Speer et al., 2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Frequencies as a Difficulty Baseline",
"sec_num": "3.3"
},
{
"text": "To the best of the author's knowledge, given good frequency data, this baseline has not yet been significantly surpassed on this task in the setting where only a small number of responses are available from the learner, making it effectively state-of-the-art.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Frequencies as a Difficulty Baseline",
"sec_num": "3.3"
},
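The baseline construction can be sketched as below (a minimal sketch using made-up log-frequency values in place of wordfreq's data; in the paper the real wordfreq values over the SVD12K words are used):

```python
import statistics

# Hypothetical log10 frequencies for a toy vocabulary (stand-ins for the
# wordfreq library's values; rarer words have lower log frequency).
log_freq = {"the": -1.1, "language": -4.1, "inventory": -5.3, "sesquipedalian": -7.8}

# Negate so that rarer words get larger values, then standardise to zero
# mean and unit standard deviation over the vocabulary so the values lie
# in the same range as the IRT difficulties.
negated = {w: -lf for w, lf in log_freq.items()}
mu = statistics.mean(negated.values())
sigma = statistics.stdev(negated.values())
difficulty = {w: (v - mu) / sigma for w, v in negated.items()}
```

After standardisation, a rare word such as "sesquipedalian" receives a high difficulty and a very frequent word such as "the" a low one, matching the sign convention of the IRT difficulty b.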
{
"text": "Figure 2: The architecture of the IRT item parameter regressor network (Numberbatch 300 → Linear 300 → full-batch BatchNorm → GeLU → Linear 2).",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 60,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Frequencies as a Difficulty Baseline",
"sec_num": "3.3"
},
{
"text": "In order to generalise the difficulty and discrimination parameters beyond the words present at IRT model estimation time, a Multi-Layer Perceptron (MLP) was trained as a regressor for both parameters. Words are input to the network as Numberbatch 19.08 (Speer et al., 2017) embeddings. These 300-dimensional embeddings, based on lemmas rather than word forms, are constructed by combining multiple distributional word embeddings with information from the ConceptNet lexical knowledge graph. They were chosen because most vocabulary tests are based on either lemmas or word families rather than word forms, and because they have performed well in previous studies. The architecture shown in Figure 2 was implemented using PyTorch (Paszke et al., 2019). The GeLU activation function (Hendrycks and Gimpel, 2016) and BatchNorm (Ioffe and Szegedy, 2015) are used as non-linearities. Since full batch training is used here, the BatchNorm damping parameter, which is intended to stabilise random variations in minibatches, is not used. The Adam optimizer (Kingma and Ba, 2015) was used with a learning rate of 0.003. Training was performed for 50 iterations and the best iteration on a validation set created by a 1:11 validation:train split was chosen.",
"cite_spans": [
{
"start": 254,
"end": 274,
"text": "(Speer et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 782,
"end": 810,
"text": "(Hendrycks and Gimpel, 2016)",
"ref_id": "BIBREF11"
},
{
"start": 825,
"end": 850,
"text": "(Ioffe and Szegedy, 2015)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 691,
"end": 699,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generalising IRT Item Parameters",
"sec_num": "3.4"
},
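For illustration, the forward pass of the Figure 2 architecture can be sketched in plain Python with random weights (a sketch only; the paper's implementation uses PyTorch, and the weight initialisation here is our own assumption):

```python
import math
import random

random.seed(0)

EMB, HID, OUT = 300, 300, 2  # Numberbatch input, hidden width, (difficulty, discrimination)

def linear(xs, w, b):
    """xs: batch of vectors; w: out x in weight matrix; b: bias vector."""
    return [[sum(wi * xi for wi, xi in zip(row, x)) + bi
             for row, bi in zip(w, b)] for x in xs]

def batchnorm(xs, eps=1e-5):
    """Full-batch normalisation per feature (no minibatch damping)."""
    n, d = len(xs), len(xs[0])
    out = [[0.0] * d for _ in range(n)]
    for j in range(d):
        col = [x[j] for x in xs]
        mu = sum(col) / n
        var = sum((c - mu) ** 2 for c in col) / n
        for i in range(n):
            out[i][j] = (xs[i][j] - mu) / math.sqrt(var + eps)
    return out

def gelu(xs):
    # exact GeLU via the Gaussian CDF
    return [[0.5 * v * (1.0 + math.erf(v / math.sqrt(2.0))) for v in x] for x in xs]

def init(n_out, n_in):
    # hypothetical Gaussian initialisation, scaled by fan-in
    return ([[random.gauss(0, 1 / math.sqrt(n_in)) for _ in range(n_in)]
             for _ in range(n_out)], [0.0] * n_out)

w1, b1 = init(HID, EMB)
w2, b2 = init(OUT, HID)

def forward(embeddings):
    h = linear(embeddings, w1, b1)
    h = batchnorm(h)
    h = gelu(h)
    return linear(h, w2, b2)  # one (difficulty, discrimination) pair per word

batch = [[random.gauss(0, 1) for _ in range(EMB)] for _ in range(4)]
preds = forward(batch)
```

Training (Adam, 50 full-batch iterations, early stopping on the 1:11 validation split) is omitted here; the sketch only shows the data flow through the layers.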
{
"text": "The aim of Computerised Adaptive Testing (CAT) (Lord, 1977; Wainer, 2000) is to estimate a learner's ability parameter θ as accurately as possible with as few queries as possible. Key parts of a CAT system are initialisation, next item selection, and θ estimation. After initialisation, the system repeatedly queries a new item from the learner and re-estimates θ* until a termination condition is met. Here, we terminate after having made 40 queries, and always initialise θ* to 0.",
"cite_spans": [
{
"start": 47,
"end": 59,
"text": "(Lord, 1977;",
"ref_id": "BIBREF14"
},
{
"start": 60,
"end": 73,
"text": "Wainer, 2000)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computerised Adaptive Testing",
"sec_num": "3.5"
},
{
"text": "Next item selection rules are typically formulated as choosing the next item so as to maximise some measure of merit. Here we consider the maximisation of Fisher information, introduced to the field of CAT by Lord (1977), and denoted as Max-Info. For the 2-parameter logistic IRT model the Fisher information is defined as:",
"cite_spans": [
{
"start": 207,
"end": 218,
"text": "Lord (1977)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computerised Adaptive Testing",
"sec_num": "3.5"
},
{
"text": "I_j(θ) = a_j^2 ICC_{a_j,b_j}(θ)(1 − ICC_{a_j,b_j}(θ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computerised Adaptive Testing",
"sec_num": "3.5"
},
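The Max-Info rule can be sketched as follows (a minimal sketch; the item bank and its parameter values are invented for illustration):

```python
import math

def icc(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """I_j(theta) = a_j^2 * ICC(theta) * (1 - ICC(theta))."""
    p = icc(theta, a, b)
    return a * a * p * (1.0 - p)

def max_info_item(theta, items):
    """Pick the item with maximal Fisher information at the current
    ability estimate. items: list of (word, a, b) tuples."""
    return max(items, key=lambda it: fisher_info(theta, it[1], it[2]))[0]

bank = [("cat", 1.0, -2.0), ("dog", 1.5, 0.1), ("ersatz", 1.2, 2.5)]
print(max_info_item(0.0, bank))  # "dog": near theta and highly discriminating
```

Note how the a_j^2 factor means Max-Info prefers highly discriminating items among those whose difficulty is near the current θ estimate, which is exactly where it differs from the difficulty-only Urry rule.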
{
"text": "An alternative next item selection rule, due to Urry (1970) and denoted as such, simply picks questions whose difficulty is close to the current estimate of θ. Note that this is equivalent to the max entropy heuristic in active learning, which queries the data point about which the current version of the classifier is most uncertain.",
"cite_spans": [
{
"start": 50,
"end": 61,
"text": "Urry (1970)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computerised Adaptive Testing",
"sec_num": "3.5"
},
{
"text": "There are two approaches for estimating θ*. The first, denoted Full-ICC, starts from the binomial IRT model introduced in Section 3.2 and incomplete response data U = {u_j | j ∈ J, u_j ∈ {0, 1}}. We then obtain θ* by maximum likelihood estimation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computerised Adaptive Testing",
"sec_num": "3.5"
},
{
"text": "L(θ) = ∏_{u_j ∈ U} P(r_{i,j} | θ, a_j, b_j)^{u_j} × Q(r_{i,j} | θ, a_j, b_j)^{(1−u_j)}; θ* = arg max_θ L(θ)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computerised Adaptive Testing",
"sec_num": "3.5"
},
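The Full-ICC maximum likelihood step can be sketched with a simple grid search (a stand-in for the optimiser a real CAT library would use; the response triples below are invented for illustration):

```python
import math

def icc(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def log_likelihood(theta, responses):
    """Log of L(theta). responses: list of (u_j, a_j, b_j) with u_j in {0, 1}."""
    ll = 0.0
    for u, a, b in responses:
        p = icc(theta, a, b)
        ll += u * math.log(p) + (1 - u) * math.log(1.0 - p)
    return ll

def estimate_theta(responses, lo=-4.0, hi=4.0, steps=801):
    """Grid-search maximum likelihood estimate of theta."""
    grid = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    return max(grid, key=lambda t: log_likelihood(t, responses))

# correct answers on easier items, incorrect on harder ones
resp = [(1, 1.2, -1.0), (1, 1.0, 0.0), (0, 1.3, 1.0), (0, 1.1, 2.0)]
theta_star = estimate_theta(resp)
```

The 2PL log-likelihood is concave in θ, so the grid maximum lands between the difficulties of the hardest item answered correctly and the easiest answered incorrectly.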
{
"text": "The second, denoted Difficulty Only, ignores the discriminations of the items, which is equivalent to setting all a_j = 1. Substituting the resulting ICC expressions into the likelihood reveals an equivalence with logistic regression. Namely, after fitting a logistic regression model on the responses U, we get a model with coefficient m and intercept c. We then find that θ* = −c/m. In early iterations, there may be only positive or only negative responses. In this case we apply the method of Dodd (1990), which averages the previous θ estimate with either the maximum or minimum item difficulty value, depending on the direction in which θ* would otherwise diverge.",
"cite_spans": [
{
"start": 495,
"end": 506,
"text": "Dodd (1990)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computerised Adaptive Testing",
"sec_num": "3.5"
},
{
"text": "As a non-CAT baseline, there is stratified random selection, denoted Rand. In order to guarantee that a reasonable range of item difficulties is asked about, strata for the words are created by ordering them by frequency and splitting into 5 equal-sized strata. The random selection procedure then chooses 40 items randomly, taking equally from each stratum.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computerised Adaptive Testing",
"sec_num": "3.5"
},
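The stratified selection procedure can be sketched as follows (a minimal sketch; the stand-in frequency-ranked word list and the fixed seed are our own):

```python
import random

def stratified_select(words_by_freq, n_strata=5, per_stratum=8, seed=0):
    """Order words by frequency, split into equal-sized strata, and
    sample uniformly from each so all difficulty bands are covered.
    words_by_freq: list of words, most frequent first."""
    rng = random.Random(seed)
    size = len(words_by_freq) // n_strata
    chosen = []
    for s in range(n_strata):
        stratum = words_by_freq[s * size:(s + 1) * size]
        chosen.extend(rng.sample(stratum, per_stratum))
    return chosen

vocab = [f"word{i}" for i in range(1000)]  # stand-in frequency-ranked list
queries = stratified_select(vocab)  # 5 strata x 8 items = 40 queries
```

Sampling 8 items from each of the 5 strata yields the 40-item quiz while guaranteeing coverage of both very frequent and very rare words.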
{
"text": "The catsim Python library (De Rizzo Meneghetti and Aquino Junior, 2017) is used for the implementations of all CAT techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computerised Adaptive Testing",
"sec_num": "3.5"
},
{
"text": "The vocabulary inventory prediction task can be viewed as a binary classification problem. The Receiver Operator Characteristic (ROC) curve plots the recall of the positive class against the recall of the negative class by varying the classifier threshold. Statistics based on the ROC curve, such as Area Under ROC (AUROC), enjoy the key advantage of threshold invariance. On the other hand, we typically do have to pick some threshold, and for this reason a metric based on a default threshold of 0.5 is given: Matthews Correlation Coefficient (MCC). The second angle on the problem is that of known and unknown word retrieval. In this case Average Precision (AP) acts as a threshold invariant measure of retrieval performance. We consider AP+ and AP− for measuring retrieval performance on the two classes of known and unknown words respectively. AUROC does not change significantly based on the exact ability estimate of the learner, due to its lack of a fixed threshold. Here, we use it only to explore different ways the difficulty parameter can be obtained and the effect of including the discrimination parameter. Being based on a fixed threshold, MCC is highly sensitive to the actual ability estimate, and so it gives a more realistic picture of practical performance. The metrics AP+ and AP− are used to measure an upper bound on the performance on the retrieval tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.6"
},
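The two headline metrics can be sketched in a few lines (a minimal sketch with invented scores and labels; real evaluations would use a library implementation):

```python
import math

def auroc(scores, labels):
    """Threshold-invariant AUROC via the rank (Mann-Whitney U) formulation:
    the fraction of positive/negative pairs ranked correctly."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def mcc(preds, labels):
    """Matthews Correlation Coefficient at a fixed decision threshold."""
    tp = sum(1 for p, l in zip(preds, labels) if p and l)
    tn = sum(1 for p, l in zip(preds, labels) if not p and not l)
    fp = sum(1 for p, l in zip(preds, labels) if p and not l)
    fn = sum(1 for p, l in zip(preds, labels) if not p and l)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

scores = [0.9, 0.8, 0.4, 0.2]   # hypothetical P(known) scores
labels = [1, 1, 0, 0]           # 1 = word actually known
```

AUROC here depends only on the ordering of the scores, while MCC changes as soon as the 0.5 threshold (i.e. the ability estimate) shifts a word across the decision boundary, mirroring the sensitivity discussed above.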
{
"text": "Intuitively, we can see low values of discrimination as reflecting a degree of uncertainty about a word's true difficulty. The information retrieval perspective is particularly relevant here since the presence of the discrimination parameter means that, for example in unknown word retrieval, words that are highly discriminating but less difficult could be returned earlier than words with low discrimination that are more difficult, potentially improving performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.6"
},
{
"text": "We first evaluate how well the item/word parameters from the IRT model can be regressed with the chosen architecture. Table 1 contains both the raw Mean Absolute Error (MAE) and the MAE normalised by the true standard deviation of the difficulties and discriminations, as predicted in two word generalisation scenarios. Next we move on to consider how well various CAT approaches can estimate learners' abilities. Finally, the results for the final task of vocabulary inventory prediction are presented, first cross validating on the SVD12K dataset and then training on the whole SVD12K dataset and testing on the extra datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 93,
"end": 118,
"text": "Table 1: Table containing",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Four generalisation scenarios are considered across experiments:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Gen-None No generalisation; The IRT model is fitted on the same data as the test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Gen-Word Generalising only to new words; 3-fold cross validation is performed on words, with the IRT model being fitted on the 2/3 training words, before fitting the MLP on the results to predict the out-of-vocabulary 1/3 of words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Gen-Respondent Generalising only to new learners; 3-fold cross validation is performed on participants, with the IRT model being fitted on 2/3 of the participants, from which the item parameters are used as-is on the out-of-sample 1/3 of participants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Gen-Both Generalising to new words and learners; 9-fold cross validation is performed, consisting of the product of 3-fold cross validation on participants with 3-fold cross validation on words. Table 1 gives the results evaluating the performance of the IRT parameter regressor. When looking at the results normalised by true standard deviation, it is clear that the discrimination parameter is more difficult to predict. The lower error in predicting difficulties in the Gen-Word scenario suggests that the more accurate IRT predictions made with more data do indeed provide an easier target for the network to fit. However, the actual errors are quite close, and the generalisation scenarios tend to give similar results, so for this reason Gen-Word is not considered further in the later results.",
"cite_spans": [],
"ref_spans": [
{
"start": 195,
"end": 202,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We now turn to the matter of how well θ is estimated using different approaches. The results are shown in Table 2. For both next item selection methods and θ-estimation methods, including the discrimination parameter seemed to decrease performance. Noteworthy is that the best overall score is obtained by difficulty-based CAT for the Gen-Both and Gen-Respondent scenarios, with this setting in the Gen-Both scenario outperforming the others, showing that the regressed word difficulties perform well for this task. For the Gen-None scenario, including the full ICC when estimating θ appeared to help. It may be that having non-regressed discrimination values, based on responses from more respondents, helped in this case.",
"cite_spans": [],
"ref_spans": [
{
"start": 106,
"end": 113,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "\u03b8-estimation",
"sec_num": "4.2"
},
{
"text": "However, since discriminations appear to not be generally useful for finding \u03b8 in any generalisation scenario, they are not used further in the next section and the Urry (1970) next item rule is used together with the difficulty only \u03b8 estimator.",
"cite_spans": [
{
"start": 165,
"end": 176,
"text": "Urry (1970)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\u03b8-estimation",
"sec_num": "4.2"
},
{
"text": "We now evaluate the final task of vocabulary inventory prediction. Table 3 shows the results on this task using the metrics introduced in Section 3.6. The experiments compare the use of difficulties from IRT versus the wordfreq baseline, and whether or not the discrimination parameter is used for prediction. The idea behind using the discrimination parameter in prediction is that highly discriminating words may receive more confident scores even when they are further from the ability estimate than a nearer, lowly discriminating word, since the discrimination parameter acts as a measure of certainty of the item's difficulty.",
"cite_spans": [],
"ref_spans": [
{
"start": 67,
"end": 74,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Vocabulary Inventory Prediction",
"sec_num": "4.3"
},
{
"text": "From the results we can see that using difficulties based on word frequencies reduces performance across the board. The inclusion of the discrimination parameter in most cases does not seem to make too much of a change, slightly decreasing performance in the Gen-Both scenario, and making very little difference for Gen-Respondent. Although there is a small improvement in the Gen-None case, this reflects the IRT model's goodness of fit, rather than how well the values generalise. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vocabulary Inventory Prediction",
"sec_num": "4.3"
},
{
"text": "We now turn to a scenario in which all the data from the SVD12K dataset is used for training, equivalent to the Gen-None scenario, but the resulting item parameters are tested on external datasets. We test on the EVKD1 and TestYourVocab datasets introduced in Section 3.1. The results are given in Tables 4 & 5. Since EVKD1 is a 4-way multiple choice test, we account for correct answers obtained by guessing by using an item response curve with a guessing probability of 0.25, similar to the 3-parameter logistic IRT model:",
"cite_spans": [],
"ref_spans": [
{
"start": 306,
"end": 319,
"text": "Tables 4 & 5.",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Generalising Vocabulary Inventory Prediction",
"sec_num": "4.4"
},
{
"text": "ICC_{a_j,b_j}(θ) = 0.25 + 0.75 / (1 + e^{−a_j(θ − b_j)})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalising Vocabulary Inventory Prediction",
"sec_num": "4.4"
},
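A sketch of this guessing-adjusted curve (written with a generic guessing floor parameter c, our own generalisation of the fixed 0.25 used for the 4-way test):

```python
import math

def icc_guess(theta, a, b, c=0.25):
    """ICC with a guessing floor c, as for a 4-way multiple choice test:
    c + (1 - c) / (1 + exp(-a * (theta - b)))."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
```

Even a learner with very low ability answers correctly about 25% of the time under this curve, so correct responses near the floor carry little evidence of word knowledge.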
{
"text": "Since there is a limited number of training words available in these datasets, no CAT is used in these experiments, and instead the ability parameter is estimated based on 40 words taken at regular intervals from the frequency ranked list. There are three generalisation scenarios: Freq, where only frequency data is used; Pred, where only predictions from the generalisation model are used; and Mix, where item parameters are used directly from the IRT model fitted on SVD12K where possible, falling back to predictions when items are not available in SVD12K. Other variations are as in Section 4.3. For both datasets, frequency based difficulties outperform difficulties estimated from SVD12K, suggesting these do not generalise well to other datasets. The inclusion of the discrimination parameter appears to have a consistent small negative effect across all these experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalising Vocabulary Inventory Prediction",
"sec_num": "4.4"
},
{
"text": "We now summarise and discuss some of the main results of the experiments. Firstly, the discrimination parameter does not appear to help with query item selection; however, it remains somewhat inconclusive whether it can help with estimating the learner ability θ, since this was the best configuration in the Gen-None case. It may be that with sufficiently high quality estimates of the discrimination values, using them for θ-estimation would help more. The approach which appeared best overall, however, and which was used for the later experiments on the SVD12K dataset, ignored the discrimination parameter altogether for both steps of the CAT stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "The difficulty parameter generalises reasonably well, while the discrimination parameter generalises quite poorly when regressed using an MLP based on Numberbatch representations of the word items. Since item difficulty here is closely related to frequency, it seems quite possible that much of the generalisation happens based on frequency information encoded in the word embeddings. When considering how well both parameters generalised, we should note that only one type of word embedding and regressor was tried, and others may generalise the discrimination parameter better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "The regressed difficulties perform better than the frequency data on in-dataset data, while performing worse on out-of-dataset data. Given all datasets contained mostly Japanese learners of English, this suggests that both the IRT parameter and the MLP generalising may have over fitted on narrow attributes of the particular cohort of University of Tokyo students making up SVD12K. Conversely we see that that high quality, balanced word frequency data generalises rather well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Usage of the discrimination parameter for vocabulary inventory prediction was largely inconclusive, with some evidence against it. In many cases, it appeared to decrease performance on metrics such as AUROC, however some tasks showed a promising but insignificant boost in AP-.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "It is unclear exactly why the discrimination parameter failed to provide significant improvements in either next-item selection, \u03b8-estimation or vocabulary inventory prediction. It is possible that the amount of response data was not sufficient either in terms of the number of respondents, or in terms of representing a diverse range of abilities, to obtain accurate word discrimination estimates. Apart from simply finding and integrating more vocabulary knowledge data, one direction for future work is trying to find corpus derived measures which correlate with word discrimination, analogously to the negative correlation between word frequency and word difficulty. This would also effectively address the failure to generalise the word discriminations parameter to out of vocabulary words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "We hope the methods of evaluating the different sub-tasks of the vocabulary inventory prediction task in the settings demonstrated here can help establish practices for evaluating this task more throughly. We also hope that the framing given here inspires others to tackle the problem in the challenging, but more broadly applicable setting of vocabulary inventory prediction having a small, limited number of queries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "The code to replicate all experiments is made available at https://github.com/frankier/ vocabirt.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Obtained by direct request from the owner of http:// testyourvocab.com/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Predicting learner knowledge of individual words using machine learning",
"authors": [
{
"first": "Drilon",
"middle": [],
"last": "Avdiu",
"suffix": ""
},
{
"first": "Vanessa",
"middle": [],
"last": "Bui",
"suffix": ""
},
{
"first": "Kl\u00e1ra Pta\u010dinov\u00e1",
"middle": [],
"last": "Klim\u010d\u00edkov\u00e1",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 8th Workshop on NLP for Computer Assisted Language Learning",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Drilon Avdiu, Vanessa Bui, and Kl\u00e1ra Pta\u010dinov\u00e1 Klim\u010d\u00edkov\u00e1. 2019. Predicting learner knowledge of individual words using machine learning. In Pro- ceedings of the 8th Workshop on NLP for Computer Assisted Language Learning, pages 1-9, Turku, Fin- land. LiU Electronic Press.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The basics of item response theory",
"authors": [
{
"first": "F",
"middle": [],
"last": "Baker",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Baker. 2001. The basics of item response theory, 2nd edition.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "On the application of transformers for estimating the difficulty of multiple-choice questions from text",
"authors": [
{
"first": "Luca",
"middle": [],
"last": "Benedetto",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "Aradelli",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Cremonesi",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Cappelli",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Giussani",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Turrin",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "147--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luca Benedetto, Giovanni Aradelli, Paolo Cremonesi, Andrea Cappelli, Andrea Giussani, and Roberto Tur- rin. 2021. On the application of transformers for estimating the difficulty of multiple-choice questions from text. In Proceedings of the 16th Workshop on Innovative Use of NLP for Building Educational Ap- plications, pages 147-157.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Stan: A probabilistic programming language",
"authors": [
{
"first": "B",
"middle": [],
"last": "Carpenter",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gelman",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hoffman",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Goodrich",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Betancourt",
"suffix": ""
},
{
"first": "Marcus",
"middle": [
"A"
],
"last": "Brubaker",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Allen",
"middle": [
"B"
],
"last": "Riddell",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of Statistical Software",
"volume": "76",
"issue": "",
"pages": "1--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Carpenter, A. Gelman, M. Hoffman, D. Lee, Ben Goodrich, M. Betancourt, Marcus A. Brubaker, J. Guo, P. Li, and Allen B. Riddell. 2017. Stan: A probabilistic programming language. Journal of Statistical Software, 76:1-32.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Application and Simulation of Computerized Adaptive Tests Through the Package catsim",
"authors": [
{
"first": "Douglas",
"middle": [
"De Rizzo"
],
"last": "Meneghetti",
"suffix": ""
},
{
"first": "Plinio Thomaz",
"middle": [],
"last": "Aquino Junior",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1707.03012"
]
},
"num": null,
"urls": [],
"raw_text": "Douglas De Rizzo Meneghetti and Plinio Thomaz Aquino Junior. 2017. Application and Simulation of Computerized Adaptive Tests Through the Pack- age catsim. arXiv e-prints, page arXiv:1707.03012.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The effect of item selection procedure and stepsize on computerized adaptive attitude measurement using the rating scale model",
"authors": [
{
"first": "B",
"middle": [
"G"
],
"last": "Dodd",
"suffix": ""
}
],
"year": 1990,
"venue": "Applied Psychological Measurement",
"volume": "14",
"issue": "",
"pages": "355--366",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. G. Dodd. 1990. The effect of item selection proce- dure and stepsize on computerized adaptive attitude measurement using the rating scale model. Applied Psychological Measurement, 14:355 -366.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Building an english vocabulary knowledge dataset of japanese english-as-a-secondlanguage learners using crowdsourcing",
"authors": [
{
"first": "Yo",
"middle": [],
"last": "Ehara",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yo Ehara. 2018. Building an english vocabulary knowledge dataset of japanese english-as-a-second- language learners using crowdsourcing. In Proceed- ings of LREC 2018, Miyazaki, Japan.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Neural rasch model: How do word embeddings adjust word difficulty? In PACLING",
"authors": [
{
"first": "Yo",
"middle": [],
"last": "Ehara",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yo Ehara. 2019. Neural rasch model: How do word embeddings adjust word difficulty? In PACLING.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Formalizing word sampling for vocabulary prediction as graph-based active learning",
"authors": [
{
"first": "Yo",
"middle": [],
"last": "Ehara",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Oiwa",
"suffix": ""
},
{
"first": "Issei",
"middle": [],
"last": "Sato",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Nakagawa",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yo Ehara, Yusuke Miyao, H. Oiwa, Issei Sato, and H. Nakagawa. 2014a. Formalizing word sampling for vocabulary prediction as graph-based active learning. In EMNLP.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Formalizing word sampling for vocabulary prediction as graph-based active learning",
"authors": [
{
"first": "Yo",
"middle": [],
"last": "Ehara",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Hidekazu",
"middle": [],
"last": "Oiwa",
"suffix": ""
},
{
"first": "Issei",
"middle": [],
"last": "Sato",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Nakagawa",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1374--1384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yo Ehara, Yusuke Miyao, Hidekazu Oiwa, Issei Sato, and Hiroshi Nakagawa. 2014b. Formalizing word sampling for vocabulary prediction as graph-based active learning. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1374-1384.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Mining words in the minds of second language learners: Learner-specific word difficulty",
"authors": [
{
"first": "Yo",
"middle": [],
"last": "Ehara",
"suffix": ""
},
{
"first": "Issei",
"middle": [],
"last": "Sato",
"suffix": ""
},
{
"first": "Hidekazu",
"middle": [],
"last": "Oiwa",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Nakagawa",
"suffix": ""
}
],
"year": 2012,
"venue": "The COLING 2012 Organizing Committee",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yo Ehara, Issei Sato, Hidekazu Oiwa, and Hiroshi Nak- agawa. 2012. Mining words in the minds of sec- ond language learners: Learner-specific word dif- ficulty. In Proceedings of COLING 2012, page 799814, Mumbai, India. The COLING 2012 Orga- nizing Committee.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Gaussian error linear units (gelus)",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Hendrycks",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (gelus). arXiv: Learning.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift",
"authors": [
{
"first": "S",
"middle": [],
"last": "Ioffe",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
}
],
"year": 2015,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Ioffe and Christian Szegedy. 2015. Batch normaliza- tion: Accelerating deep network training by reducing internal covariate shift. ArXiv, abs/1502.03167.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A broad-range tailored test of verbal ability",
"authors": [
{
"first": "F",
"middle": [],
"last": "Lord",
"suffix": ""
}
],
"year": 1977,
"venue": "Applied Psychological Measurement",
"volume": "1",
"issue": "",
"pages": "100--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Lord. 1977. A broad-range tailored test of verbal ability. Applied Psychological Measurement, 1:100 - 95.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Measuring second language vocabulary acquisition",
"authors": [
{
"first": "James",
"middle": [],
"last": "Milton",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Milton. 2009. Measuring second language vo- cabulary acquisition.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Pytorch: An imperative style, highperformance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "K\u00f6pf",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zach",
"middle": [],
"last": "DeVito",
"suffix": ""
}
],
"year": 2019,
"venue": "NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, S. Gross, Francisco Massa, A. Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Z. Lin, N. Gimelshein, L. Antiga, Alban Desmaison, Andreas K\u00f6pf, Edward Yang, Zach DeVito, Mar- tin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high- performance deep learning library. In NeurIPS.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, R. Socher, and Christopher D. Man- ning. 2014. Glove: Global vectors for word represen- tation. In EMNLP.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Probabilistic models for some intelligence and attainment tests: Danish institute for educational research",
"authors": [
{
"first": "George",
"middle": [],
"last": "Rasch",
"suffix": ""
}
],
"year": 1960,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Rasch. 1960. Probabilistic models for some intelligence and attainment tests: Danish institute for educational research. Denmark Paedogiska, Copen- hagen.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "ConceptNet 5.5: An open multilingual graph of general knowledge",
"authors": [
{
"first": "Robyn",
"middle": [],
"last": "Speer",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Chin",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Havasi",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "4444--4451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. ConceptNet 5.5: An open multilingual graph of gen- eral knowledge. pages 4444-4451.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Luminosoinsight/wordfreq: v2.2",
"authors": [
{
"first": "Robyn",
"middle": [],
"last": "Speer",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Chin",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Jewett",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Nathan",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.5281/zenodo.1443582"
]
},
"num": null,
"urls": [],
"raw_text": "Robyn Speer, Joshua Chin, Andrew Lin, Sara Jewett, and Lance Nathan. 2018. Luminosoinsight/wordfreq: v2.2.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Statistical theories of mental test scores",
"authors": [
{
"first": "M",
"middle": [],
"last": "Tatsuoka",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Lord",
"suffix": ""
},
{
"first": "M",
"middle": [
"R"
],
"last": "Novick",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Birnbaum",
"suffix": ""
}
],
"year": 1968,
"venue": "Journal of the American Statistical Association",
"volume": "66",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Tatsuoka, F. Lord, M. R. Novick, and A. Birnbaum. 1968. Statistical theories of mental test scores. Jour- nal of the American Statistical Association, 66:651.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A Monte-Carlo investigation of logistic mental test models",
"authors": [
{
"first": "V",
"middle": [
"W"
],
"last": "Urry",
"suffix": ""
}
],
"year": 1970,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. W. Urry. 1970. A Monte-Carlo investigation of logis- tic mental test models. Ph.D. thesis, Purdue Univer- sity.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Computerized adaptive testing: A primer",
"authors": [
{
"first": "H",
"middle": [],
"last": "Wainer",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Wainer. 2000. Computerized adaptive testing: A primer.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "Plate diagram showing the Bayesian network corresponding to the 2-parameter logistic IRT model.",
"type_str": "figure"
},
"TABREF2": {
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>",
"text": "Table showing the raw MAE and MAE normalized by standard deviation of estimated difficulties after 40 questions versus true difficulties."
},
"TABREF4": {
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"2\">Gen Diff.</td><td>Dis.</td><td>ROC MCC</td><td>AP+</td><td>AP-</td></tr><tr><td colspan=\"2\">Freq Freq</td><td>Off</td><td colspan=\"3\">0.690 0.228 0.711 0.650</td></tr><tr><td>Pred</td><td>Resp. Freq</td><td>Off On On</td><td colspan=\"3\">0.658 0.195 0.676 0.614 0.656 0.205 0.676 0.608 0.680 0.228 0.704 0.648</td></tr><tr><td>Mix</td><td>Resp. Freq</td><td>Off On On</td><td colspan=\"3\">0.670 0.261 0.711 0.625 0.677 0.280 0.718 0.625 0.687 0.262 0.715 0.654</td></tr></table>",
"text": "Table showing results on the SVD12K dataset in different generalisation settings given different choices of source difficulty parameter and whether to include the discrimination parameter in predictions."
},
"TABREF5": {
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>",
"text": "Table showing results on the EVKD1 dataset of different choices of source difficulty parameter and whether to include the discrimination parameter in predictions."
},
"TABREF7": {
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>: Table showing results on the TestYourVocab</td></tr><tr><td>dataset of different choices of source difficulty parame-</td></tr><tr><td>ter and whether to include the discrimination parameter</td></tr><tr><td>in predictions.</td></tr></table>",
"text": ""
}
}
}
}