| { |
| "paper_id": "W18-0102", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T05:27:57.683982Z" |
| }, |
| "title": "Predictive power of word surprisal for reading times is a linear function of language model quality", |
| "authors": [ |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Goodkind", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "a.goodkind@u.northwestern.edu" |
| }, |
| { |
| "first": "Klinton", |
| "middle": [], |
| "last": "Bicknell", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "kbicknell@northwestern.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Within human sentence processing, it is known that there are large effects of a word's probability in context on how long it takes to read it. This relationship has been quantified using informationtheoretic surprisal, or the amount of new information conveyed by a word. Here, we compare surprisals derived from a collection of language models derived from n-grams, neural networks, and a combination of both. We show that the models' psychological predictive power improves as a tight linear function of language model linguistic quality. We also show that the size of the effect of surprisal is estimated consistently across all types of language models. These findings point toward surprising robustness of surprisal estimates and suggest that surprisal estimated by low-quality language models are not biased.", |
| "pdf_parse": { |
| "paper_id": "W18-0102", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Within human sentence processing, it is known that there are large effects of a word's probability in context on how long it takes to read it. This relationship has been quantified using informationtheoretic surprisal, or the amount of new information conveyed by a word. Here, we compare surprisals derived from a collection of language models derived from n-grams, neural networks, and a combination of both. We show that the models' psychological predictive power improves as a tight linear function of language model linguistic quality. We also show that the size of the effect of surprisal is estimated consistently across all types of language models. These findings point toward surprising robustness of surprisal estimates and suggest that surprisal estimated by low-quality language models are not biased.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Decades of work studying human sentence processing have demonstrated that a word's probability in context is strongly related to the amount of time it takes to read it. This relationship has been quantified by surprisal theory (Hale, 2001; Levy, 2008) , which states that processing difficulty of a word w in context c is proportional to its information-theoretic surprisal, defined as \u2212 log p(w|c). As a word is more likely to occur in its context, and thus communicates less information (Shannon, 1948) , it is read more quickly.", |
| "cite_spans": [ |
| { |
| "start": 227, |
| "end": 239, |
| "text": "(Hale, 2001;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 240, |
| "end": 251, |
| "text": "Levy, 2008)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 489, |
| "end": 504, |
| "text": "(Shannon, 1948)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "One difficulty in testing such effects of a word's probability in context is the need to construct estimates of a word's probability in context. One way of estimating such probabilities is to give human subjects a context, have them guess the next word, and estimate p(w|c) as the proportion of participants who guess word w in context c. This method, called a Cloze task (Taylor, 1953) , may yield reliable estimates for words that have relatively high probabilities in their context, and it has been used in a number of studies of the effects of probabilities in context on reading. However, it is an open question whether these human guess-derived proportions may be biased from objective probabilities in some way (Smith & Levy, 2011) . Problematically for studying surprisal specifically, however, the Cloze task cannot in principle yield reliable estimates of word probabilities in context that are relatively low, say less than 1 in 100, as many word probabilities are, without requiring an extremely large number of participants (Levy, 2008) . Additionally, it is not practical to use the Cloze task to estimate probabilities for large datasets on which surprisal is often studied, for which there can easily be tens of thousands of contexts that would require estimation.", |
| "cite_spans": [ |
| { |
| "start": 372, |
| "end": 386, |
| "text": "(Taylor, 1953)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 718, |
| "end": 738, |
| "text": "(Smith & Levy, 2011)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 1037, |
| "end": 1049, |
| "text": "(Levy, 2008)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The alternative is to estimate the probabilities of words in context using computational language models, which are trained on large language corpora to estimate the probabilities of words in context. Many studies of surprisal have used such language models (e.g. Hale, 2001; Levy, 2008; Demberg & Keller, 2008; Mitchell et al., 2010; Monsalve et al., 2012) .", |
| "cite_spans": [ |
| { |
| "start": 264, |
| "end": 275, |
| "text": "Hale, 2001;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 276, |
| "end": 287, |
| "text": "Levy, 2008;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 288, |
| "end": 311, |
| "text": "Demberg & Keller, 2008;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 312, |
| "end": 334, |
| "text": "Mitchell et al., 2010;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 335, |
| "end": 357, |
| "text": "Monsalve et al., 2012)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Unfortunately, however, computational language models are still substantially worse than humans at predicting upcoming words, meaning there is some mismatch between the probabilities p(w|c) being estimated computationally and the implicit probabilities in the brains of readers that humans are using. This situation raises the question of to what extent we can trust results about the effects of surprisal as estimated by such language models. To try to get some information about possible biases that might exist in our results based on language models being worse than humans at predicting upcoming words, poor linguistic quality, we can compare a range of computational language models of varying linguistic quality and see how the estimated effects of surprisal change. If there is a trend in results as the linguistic quality of the language models improves, that would provide evidence that such a trend may be even more present in language models with human-level linguistic quality.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Additionally, recent years have seen rapid progress in computational language modeling, enabled by recent advances in neural networks. As a result, the linguistic quality of contemporary language models is far beyond what has been used in previous work studying surprisal. In this paper, we address both these concerns by analyzing how the predictive power of these surprisal estimates, their psychological quality, varies as a function of language model linguistic quality and type.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "There has also been substantial interest in the shape of the effects of surprisal on reading times, because of theories that predict it to be linear (Levy, 2008; Smith & Levy, 2013; Bicknell & Levy, 2010) . A secondary goal of this work is to investigate whether the shape of this effect depends on language model quality or type.", |
| "cite_spans": [ |
| { |
| "start": 149, |
| "end": 161, |
| "text": "(Levy, 2008;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 162, |
| "end": 181, |
| "text": "Smith & Levy, 2013;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 182, |
| "end": 204, |
| "text": "Bicknell & Levy, 2010)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In particular, we compare surprisal estimates using a range of language models of varying linguistic qualities and types, from the n-gram models that have been used in most previous work on surprisal to state-of-the-art LSTM and interpolated-LSTM models. We assess the predictive ability and the size and shape of surprisals derived from each language model using generalized additive mixed-effects models (Wood, 2017) fit to a corpus of eye movements in reading.", |
| "cite_spans": [ |
| { |
| "start": 406, |
| "end": 418, |
| "text": "(Wood, 2017)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The plan for the remainder of this paper is as follows. Section 2 introduces the set of language models we compare and establishes the linguistic quality of each. Then, in Section 3 we quantify the ability of surprisals derived from each language model to predict reading times and see the extent to which this changes with language model type and quality, assuming that effects of surprisal on reading times are linear. In Section 4 we do the same but allow surprisal to have non-linear effects, and we additionally use the non-linear models to assess whether there is evidence that the shape of the surprisal effect changes with language model type or quality. Finally, Section 5 concludes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The corpus used for language model estimation was the Google One Billion Word Benchmark (Chelba et al., 2013) , hereafter referred to as the \"1b corpus\". The text data was obtained from news periodicals (similar to the Dundee corpus used for eye-tracking data below). The final corpus contained approximately 0.8 billion words with a vocabulary size of about 800,000.", |
| "cite_spans": [ |
| { |
| "start": 88, |
| "end": 109, |
| "text": "(Chelba et al., 2013)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpus", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Although the Dundee Corpus (Kennedy et al., 2003) tokenized entire words with punctuation, our models were trained using separate punctuation as well separated possessives (e.g. Bill's \u2192 [Bill , 's]). Contractions were tokenized into their constituent full-form words, although contractions were counted as a single word when utilizing word count in e.g. perplexity calculations. These calculations can be seen in Table 1 .", |
| "cite_spans": [ |
| { |
| "start": 27, |
| "end": 49, |
| "text": "(Kennedy et al., 2003)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 414, |
| "end": 421, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Corpus", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "We compare seven language models of three types: four n-gram models, one LSTM, and two interpolations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model types", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The n-gram, count-based models were calculated using kenlm (Heafield et al., 2013) . kenlm uses Modified Kneser-Ney Smoothing, and is similar in functionality but significantly faster than SRILM (Stolcke et al., 2011) . We calculated 5-grams, 4grams, trigram, bigrams and unigrams. Unigram results were not included in the study, but rather used as a count of word frequency for controlling other models.", |
| "cite_spans": [ |
| { |
| "start": 59, |
| "end": 82, |
| "text": "(Heafield et al., 2013)", |
| "ref_id": null |
| }, |
| { |
| "start": 195, |
| "end": 217, |
| "text": "(Stolcke et al., 2011)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "n-gram", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "Neural network-based language models were generated from a Recurrent Neural Network (RNN) with Long-Short Term Memory (LSTM). Each word was encoded as a 50-dimensional one-hot vector, This vector was then fed into a sequence model with an LSTM of 50 hidden units. The model did not evaluate character-level sequences, but rather only word-level sequences. The probability of the next word in the sequence was selected from the output layer of the sequence model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LSTM", |
| "sec_num": "2.2.2" |
| }, |
| { |
| "text": "In addition to the LSTM and n-gram models, two interpolated models were also built from the two models with the lowest perplexity on the Dundee Corpus used in this study (see Table 1 ). This was similar to the interpolation method utilized in Jozefowicz et al. (2016) . Similar to Jozefowicz et al. 2016, the present study also found optimal weightings for combining an LSTM model with a smoothed n-gram model. Optimal weighting was operationalized as the blend weights that resulted in the lowest perplexity. Perplexity of the interpolated LSTM+5=gram model was optimal (lowest) when an interpolated model weighted the LSTM probabilities by 0.71, with the 5-gram model weighted by 0.29. In addition to this optimal model, a balanced interpolated model was also constructed using equal weighting of the LSTM and 5-gram probabilities.", |
| "cite_spans": [ |
| { |
| "start": 243, |
| "end": 267, |
| "text": "Jozefowicz et al. (2016)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 175, |
| "end": 182, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Interpolation", |
| "sec_num": "2.2.3" |
| }, |
| { |
| "text": "The Dundee Corpus (see Section 3 for corpus details) was tokenized at the word (rather than token) level with leading, trailing and internal punctuation included, e.g. Bill's, couldn't or exist!. Because the 1b Corpus was tokenized, we were required to break words made up of multiple tokens into their constituent parts. The surprisal (log probability) for each token was matched to the 1b Corpus surprisals. In order to realign the tokens with the Dundee Corpus's words, the log probabilities of each constituent token were added together to form a sum total log probability of the word.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dundee corpus surprisals", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Of the approximately 61,000 tokens in the Dundee Corpus, 175 were OOV in the 1b Corpus. These OOV words were removed from the final analysis. In adition, although the 1b Corpus used the sentence-final delimiter </s>, the Dundee Corpus did not. Therefore, while sentence-final delimiters were used in constructing the probabilities of the respective language models, they were also removed from the final analysis.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dundee corpus surprisals", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "For each language model, the words' surprisals (log probabilities) were summed and normalized by the word count. The exponent of the inverse of this sum was then calculated. A lower perplexity is indicative of a more accurate language model. For example, a perplexity of 50 means that the model can guess 1 of 50 different options for the model with equal probability. Therefore a lower perplexity means that there are fewer equally likely model options. The perplexity of the seven language models is laid out in optimal interpolated model achieved the lowest perplexity, while the bigram model had the worst (highest) perplexity. It should be noted that the perplexities of both the optimal interpolated model (73) and the LSTM model (113) are worse than the respective models reported in Jozefowicz et al. (2016) and Chelba et al. (2013) . Whereas our best 5-gram model achieves a perplexity of 169 on the Dundee corpus, Jozefowicz et al. 2016achieves a perplexity of 67 on the lm 1b benchmark using a similar model. However, an important distinction is that the perplexities in Table 1 were calculated after all unknown words were excluded. On the other hand, Chelba et al. (2013) used an <UNK> token for words that were OOV on the test portion of the 1b Corpus. This suggests a substantial mismatch between the test benchmark corpus and the Dundee corpus, even though both corpora are sourced from news media. Nonetheless, both perplexity figures could be considered strong, low perplexities.", |
| "cite_spans": [ |
| { |
| "start": 791, |
| "end": 815, |
| "text": "Jozefowicz et al. (2016)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 820, |
| "end": 840, |
| "text": "Chelba et al. (2013)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 1164, |
| "end": 1184, |
| "text": "Chelba et al. (2013)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1082, |
| "end": 1089, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Perplexity", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "In this section we investigate the ability of surprisals derived from each of these seven language models described above to predict reading times in a large corpus of eye movements in reading.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linear effects of surprisal", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The eye tracking data for our study came from English portion of the Dundee Corpus (Kennedy et al., 2003) , which recorded the eye-movement data from 10 English-speaking participants read-ing newspaper editorials in The Independent. For this paper specifically, we predict gaze durations for each word, defined to be the sum of all fixations made on a word between the time the word is initially fixed and when the eyes first move off of the word. This measure is only calculated if the word is fixated by that reader prior to any fixation on a later word (i.e., during 'first pass' reading). If the word was not fixated during first pass reading, this is missing data. We used a total of about 436,000 valid gaze durations in the English portion of the Dundee corpus. After performing the exclusions listed below, we were left with a total of 289,726 gaze durations and a vocabulary size of 37,420 word types.", |
| "cite_spans": [ |
| { |
| "start": 83, |
| "end": 105, |
| "text": "(Kennedy et al., 2003)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Eye movement in reading data", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "In line with previous studies of gaze durations in the Dundee corpus (e.g. Smith & Levy, 2013), we excluded:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Eye movement in reading data", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "\u2022 Words preceding punctuation", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Eye movement in reading data", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "\u2022 Words with non-alphabetical characters", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Eye movement in reading data", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "\u2022 Words that were presented to participants at the beginning or end of a line of text", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Eye movement in reading data", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "\u2022 Words that were outside the vocabulary of the 1b corpus (and thus the language models) Because our statistical model of the gaze duration of each word also included effects of the surprisal of the preceding word, we also excluded:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Eye movement in reading data", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "\u2022 Words following punctuation", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Eye movement in reading data", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "\u2022 Words that followed words with nonalphabetic characters", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Eye movement in reading data", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "\u2022 Words that followed words that were outside the vocabulary of the 1b corpus (and thus the language models)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Eye movement in reading data", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "Similar to Smith & Levy (2013), we used generalized additive mixed-effects models (GAMMs) to predict reading times with the mgcv (Wood, 2004) package in R (R Core Team, 2013). We estimated seven GAMMs, one for each language model. Each GAMM modeled gaze duration on a word as a function of two linear surprisal terms: one for the surprisal of the current word and one for the surprisal of the previous word. Each GAMM also included random intercepts for each of the 10 readers and a range of linear and non-linear covariates not of direct interest for the present work, identical to those included by Smith & Levy (2013) . These covariates were:", |
| "cite_spans": [ |
| { |
| "start": 129, |
| "end": 141, |
| "text": "(Wood, 2004)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 601, |
| "end": 620, |
| "text": "Smith & Levy (2013)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical models", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "\u2022 a tensor product interaction between orthographic word length and log-frequency (unigram log probability estimated from the 1b corpus) of the current word", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical models", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "\u2022 a tensor product interaction between orthographic word length and log-frequency of the previous word", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical models", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "\u2022 a spline effect of word number within the text", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical models", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "\u2022 a binary variable of whether or not the previous word had received a fixation", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical models", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "We compare the predictive power of different language models for reading times by comparing the log likelihoods across GAMMs that include surprisals derived from different language models. 1 To enable comparison of log likelihoods across models, we change two aspects of mgcv's default GAMM fitting procedure: we use maximum likelihood fitting instead of REML and we use splines with fixed degrees of freedom instead of penalized splines. We set the fixed degrees of freedom for each covariate to be a bit above the estimated degrees of freedom from a GAMM estimated in the default way (which was relatively constant across models). To measure the added predictive power of the two linear surprisal terms in each model, we subtract the models' log likelihood from a model that only includes the covariates, yielding a measure we denote \u2206LogLik. (Note that because these models are in a subset relationship -2 times \u2206LogLik is a Chi-square distributed deviance as in a likelihood ratio test.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "3.1.3" |
| }, |
| { |
| "text": "To assess the extent to which this measure of predictive power is related to the language model's linguistic quality, we correlate this \u2206LogLik metric with perplexity. Additionally, since these models with linear effects of surprisal also estimate the coefficient of surprisal for predicting reading times -both for the current word's surprisal and the prior word's -we also assess the correlation between these coefficients and the model's perplexity. To the extent to which there are systematic relationships between these coefficients and the language model's linguistic quality, it may suggest that poor ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "3.1.3" |
| }, |
| { |
| "text": "As shown in Figure 1 and Table 2 , there is a monotonic effect of language model quality on predictive power. Better language models (lower perplexity) yield surprisal values that better predict reading times, as seen by increased \u2206LogLik. Indeed, Figure 1 shows a strikingly strong relationship between a language model's linguistic quality (measured by perplexity) and the ability of surprisal values derived from that model to predict reading times (measured by \u2206LogLik). These two values have an R 2 of 0.94.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 12, |
| "end": 20, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 25, |
| "end": 32, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 248, |
| "end": 256, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Log Likelihood", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "However, there is one relatively clear departure from this tight linear relationship. Namely, the large decrease in the perplexity going from the 5-gram model to the LSTM is not reflected in a large jump in \u2206LogLik. Put another way, although there is a clear systematic relationship between language model linguistic quality and \u2206LogLik, there is also some evidence for effects of language model type, such that the LSTM is less useful for predicting reading times than would be expected given its perplexity. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Log Likelihood", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "The effects of two words' surprisal was incorporated into the GAMs: the surprisal of the current word and the surprisal of the previous word. Despite the different models' very different perplexities, the size of the effects of surprisal were estimated very stably across language models. As seen in Figure 2 , all models had surprisal coefficients around 3 (although the LSTM model is again somewhat of a low outlier). There is no clear relationship between the coefficients for the surprisal of the current word and language model quality, with both the best model (optimal interpolation) and the worst model (bigrams) having a value of 3.04.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 300, |
| "end": 308, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Current Word", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "Similar to the results above for the current word, the previous word's surprisal also had an inconsistent effect across models. In other words, the coefficient for the previous word's surprisal (see Table 2 ) bore no clear relationship with relative improvements in language model perplexity.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 199, |
| "end": 207, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Previous Word", |
| "sec_num": "3.2.3" |
| }, |
| { |
| "text": "In addition to the previous set of analyses analyzing the predictive power of linear effects of surprisal on reading times, we conducted another set of analyses allowing for non-linear effects of sur- Table 2 : As the perplexity of a language model increases, its improvement over baseline log likelihood (\u2206LogLik) decreases. The coefficients for both the current and previous words do not bear a consistent relationship with model perplexity. prisal. These models also let us ask whether the shape of the estimated effect of surprisal on reading times varies with language model quality.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 201, |
| "end": 208, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Non-linear effects of surprisal", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The primary methodology was identical to that from the previous analysis, except that instead of including linear effects of current and previous word surprisal in the GAMMs, we included cubic splines (40 d.f.) of current and previous word surprisal. For this non-linear model, since there are not coefficients of current and previous word surprisal, we also investigate the F statistic associated with the strength of each surprisal term predictor. Additionally, to analyze whether the shape of the surprisal effect differs across conditions, we fit additional GAMMs that had the same structure but were estimated in mgcv's usual way (i.e., with splines penalized and REML). These addi- Table 3 : Correlation results for metrics of predictors of linear and non-linear GAMMs tional models were only used for visualization.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 688, |
| "end": 695, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Methodology", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "When allowing for non-linear effects of surprisal, the relationship between linguistic quality and predictive power for reading times becomes even more clear. The relationship between \u2206LogLik and perplexity becomes even stronger (Figure 4) , with an R 2 of 0.98. Further, as seen in Table 4 , while the F statistic for the current word surprisal is inconsistent as model perplexity improves (similar to the coefficients of surprisal in the linear models), the F statistic of the previous word is tightly related to perplexity. As perplexity of a model improves, the F statistic of the previous word improves in lockstep. This suggests that at least in the non-linear models, many of the improvements in predictive ability may come specifically from effects of prior word surprisal. As can be seen in the GAM plots in Figures 5 and 6, there are no large differences in the shape Figure 4 : Improvements in log likelihood for nonlinear models, charted against decreases in perplexity. The blue line is a linear best fit line with a coefficient of -1.66, R 2 = 0.98. Figure 5 : GAM plots on current word using normal estimation of surprisal as language model quality improvesall look roughly linear. If a trend in shape does exist, the highest quality models (interpolation) appear to have the most linear slopes. Additionally, the slope for surprisal of the prior word appears to flatten out for LSTMs for high surprisals. 2", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 229, |
| "end": 239, |
| "text": "(Figure 4)", |
| "ref_id": null |
| }, |
| { |
| "start": 283, |
| "end": 290, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 878, |
| "end": 886, |
| "text": "Figure 4", |
| "ref_id": null |
| }, |
| { |
| "start": 1064, |
| "end": 1072, |
| "text": "Figure 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and discussion", |
| "sec_num": "4.2" |
| }, |
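For reference, the perplexity on one axis of this relationship is just the exponentiated average per-word surprisal; a minimal sketch with hypothetical conditional probabilities:

```python
import math

def perplexity(word_probs):
    """Perplexity = 2 ** (average per-word surprisal in bits)."""
    surprisals = [-math.log2(p) for p in word_probs]   # surprisal of each word, in bits
    return 2 ** (sum(surprisals) / len(surprisals))

probs = [0.1, 0.02, 0.3, 0.05]   # hypothetical P(word | context) values
print(round(perplexity(probs), 2))  # → 13.51
```

Lower perplexity means the model assigns higher probability to the observed words on average, which is the sense of "linguistic quality" used throughout the paper.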
| { |
| "text": "Taking all of the results together, we have shown evidence here for a strong effect of language model linguistic quality on the predictive power of surprisals estimated from that language model for reading times. This effect holds regardless of whether surprisal is modeled as a linear or nonlinear effect. Despite this clear relationship with linguistic quality in terms of predictive power, we also saw remarkable consistency. Across language Figure 6 : GAM plots on previous word using normal estimation models that varied by more than a factor of 4 in perplexity, the size of the effect of surprisal was estimated to be the similar and the shape of the effect of surprisal was estimated to be roughly linear. These results suggest that we can put a reasonable amount of trust in results about surprisal estimated with computational language models, despite the state-of-the-art still being far from human quality.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 445, |
| "end": 453, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "General Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In addition, the way that the language models were composed seems to play a role in its fit to the data. The LSTM-based model does seem to be somewhat of a low-performing outlier. However, when the LSTM model is used with the 5-gram model in interpolation, these yield superior results. Therefore, although a purely LSTM-based model does not predict reading time as well as other models, it provides a good fit for the data. When used in conjunction with a count-based model, this combination provides more accurate predictions of the reading time data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Discussion", |
| "sec_num": "5" |
| }, |
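The interpolation referred to here is simple linear mixing of the two models' next-word probabilities; a sketch (the weight shown is hypothetical, not the paper's fitted value):

```python
def interpolate(p_lstm, p_ngram, lam=0.5):
    """Linearly interpolate two language models' next-word probabilities.
    lam weights the LSTM model; the paper's 'balanced' and 'optimal' mixtures
    correspond to different choices of this weight (the value here is hypothetical)."""
    return lam * p_lstm + (1 - lam) * p_ngram

# hypothetical next-word probabilities from each component model
print(round(interpolate(0.08, 0.02, lam=0.5), 4))  # → 0.05
```

Because each component distribution sums to 1 over the vocabulary, the interpolated mixture does too, so it remains a proper language model from which surprisals can be computed.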
| { |
| "text": "A number of studies have used the Dundee eyetracking corpus in conjunction with a probabilistic language model. Demberg & Keller (2008) , using less sophisticated linear models, found that surprisal is an accurate measure of processing complexity as measured by eye gaze duration. According to Demberg & Keller (2008) , greater word surprisal invokes higher \"integration costs,\" which accounts for prolonged gaze duration.", |
| "cite_spans": [ |
| { |
| "start": 112, |
| "end": 135, |
| "text": "Demberg & Keller (2008)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 294, |
| "end": 317, |
| "text": "Demberg & Keller (2008)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In a neural network language model, word dependencies can span an arbitrary word distance, i.e. not all dependencies are contingent upon adjacent words or even a neighboring word. For example, ellipsis can span multiple clause boundaries to resolve an anaphoric relationship. For this reason, surprisal that accounts for the hierarchical structure of language has also been studied, to see if taking hierarchy into account can better predict eye gaze duration. Frank & Bod (2011) concludes that including hierarchy information does not better account for variance compared to a sequencebased model. According to their study, hierarchical information does not noticeably affect the generation of expectations of the following word.", |
| "cite_spans": [ |
| { |
| "start": 461, |
| "end": 479, |
| "text": "Frank & Bod (2011)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Fossum & Levy (2012), on the other hand, make various modifications to the models used in Frank & Bod (2011) , adding additional lexical information to the unlexicalized hierarchical models. Fossum & Levy (2012) concludes that hierarchical information, when properly lexicalized, can improve sequence-only lexical models. Similarly, Mitchell et al. (2010) created a model that interpolates syntactic and distributional semantic information, and found that this improved the prediction of eye tracking durations.", |
| "cite_spans": [ |
| { |
| "start": 90, |
| "end": 108, |
| "text": "Frank & Bod (2011)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 191, |
| "end": 211, |
| "text": "Fossum & Levy (2012)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 333, |
| "end": 355, |
| "text": "Mitchell et al. (2010)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "As this bears on the present study, the LSTM model is able to detect word relationships that span arbitrary distances. While the LSTM model is not explicitly representing hierarchical information, the model does capture long distance information. Our results show that the LSTM model outperforms the purely n-gram models in terms of predictive capabilities. Thus, while we do not need to build hierarchical information explicitly into our model, the long-distance information does improve both linguistic and psychological accuracy. This could point to the conclusion that eye gaze duration is also sensitive to, if not hierarchical information, then information provided at a long distance from the current word.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In a similar vein to our results, Monsalve et al. (2012) shows that perplexity of a language model (linguistic accuracy) bears a strong relationship to the log likelihood of a reading time model (psy-chological accuracy). The key differences between this study and ours is that Monsalve et al. (2012) analyzes self-paced reading data rather than eyetracking, and that we use higher-performing stateof-the-art language models.", |
| "cite_spans": [ |
| { |
| "start": 34, |
| "end": 56, |
| "text": "Monsalve et al. (2012)", |
| "ref_id": null |
| }, |
| { |
| "start": 278, |
| "end": 300, |
| "text": "Monsalve et al. (2012)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Finally, the present study can, in many respects, be viewed as a follow-up to Smith & Levy (2013) . (Smith & Levy, 2013) measured the shape of the surprisal curve, similar to our experiment in Section 4; however, the present study demonstrates that the the effect of surprisal is still linear even with much more (linguistically and psychologically) accurate language models.", |
| "cite_spans": [ |
| { |
| "start": 78, |
| "end": 97, |
| "text": "Smith & Levy (2013)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 100, |
| "end": 120, |
| "text": "(Smith & Levy, 2013)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "As many studies have noted (Monsalve et al., 2012; Frank et al., 2013) , a corpus such as the Dundee corpus, collected from newspapers, often requires a great deal of global, extra-sentential context. Therefore, when processing a given sentence, the reader must also take into account information provided many sentences prior, or even not provided in the document at all. This limitation could impact the results reported herein.", |
| "cite_spans": [ |
| { |
| "start": 27, |
| "end": 50, |
| "text": "(Monsalve et al., 2012;", |
| "ref_id": null |
| }, |
| { |
| "start": 51, |
| "end": 70, |
| "text": "Frank et al., 2013)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Despite possible limitations, the results above provide consistent evidence that improving the linguistic accuracy of language models will improve the models' ability to make psychological predictions. This underscores the importance of understanding language structure in order to better understand cognitive processes such as eye gaze duration.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Technically, these models include log 10 probabilities, which must be multiplied by -1 to get a surprisal, and also converted from bans to bits.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
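In code, this two-step conversion might look like the following sketch (the function name is ours):

```python
import math

def log10prob_to_surprisal_bits(log10_p):
    """Convert a model's log10 probability to surprisal in bits:
    negate it (giving surprisal in bans), then rescale by log2(10) (bans -> bits)."""
    return -log10_p * math.log2(10)

# log10(0.5) is about -0.30103; a probability of 0.5 carries 1 bit of surprisal
print(round(log10prob_to_surprisal_bits(-0.30103), 4))  # → 1.0
```

The rescaling factor works because log2(p) = log10(p) * log2(10), so a surprisal in bans times log2(10) gives the same surprisal in bits.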
| { |
| "text": "This approach was followed rather than performing a statistical model comparison testing for non-linearity because our GAMM models lacked by-word random slopes. Because the model lacks these parameters, we would expect the model to capture variance across word tokens in the corpus by bending the curve away from linearity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We wish to thank Tal Linzen for providing code for interfacing with Google's lm 1b LSTM language model. This research was supported by NSF Award 1734217 (Bicknell) ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A rational model of eye movement control in reading", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Bicknell", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 48th annual meeting of the association for computational linguistics (acl)", |
| "volume": "", |
| "issue": "", |
| "pages": "1168--1178", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bicknell, K., & Levy, R. (2010). A rational model of eye movement control in reading. In J. Ha- jivc, S. Carberry, S. Clark, & J. Nivre (Eds.), Pro- ceedings of the 48th annual meeting of the associ- ation for computational linguistics (acl) (pp. 1168- 1178). Uppsala, Sweden: Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "One billion word benchmark for measuring progress in statistical language modeling", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Chelba", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Schuster", |
| "suffix": "" |
| }, |
| { |
| "first": "Q", |
| "middle": [], |
| "last": "Ge", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Brants", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Robinson", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1312.3005" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chelba, C., Mikolov, T., Schuster, M., Ge, Q., Brants, T., Koehn, P., & Robinson, T. (2013). One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Data from eyetracking corpora as evidence for theories of syntactic processing complexity", |
| "authors": [ |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Demberg", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Keller", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Cognition", |
| "volume": "109", |
| "issue": "2", |
| "pages": "193--210", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Demberg, V., & Keller, F. (2008). Data from eye- tracking corpora as evidence for theories of syntac- tic processing complexity. Cognition, 109(2), 193- 210.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Sequential vs. hierarchical syntactic models of human incremental sentence processing", |
| "authors": [ |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Fossum", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 3rd workshop on cognitive modeling and computational linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "61--69", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fossum, V., & Levy, R. (2012). Sequential vs. hierar- chical syntactic models of human incremental sen- tence processing. In Proceedings of the 3rd work- shop on cognitive modeling and computational lin- guistics (pp. 61-69).", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Insensitivity of the human sentence-processing system to hierarchical structure", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [ |
| "L" |
| ], |
| "last": "Frank", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Bod", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Psychological science", |
| "volume": "22", |
| "issue": "6", |
| "pages": "829--834", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Frank, S. L., & Bod, R. (2011). Insensitivity of the human sentence-processing system to hierarchical structure. Psychological science, 22(6), 829-834.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Reading time data for evaluating broad-coverage models of english sentence processing", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [ |
| "L" |
| ], |
| "last": "Frank", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [ |
| "F" |
| ], |
| "last": "Monsalve", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "L" |
| ], |
| "last": "Thompson", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Vigliocco", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Behavior Research Methods", |
| "volume": "45", |
| "issue": "4", |
| "pages": "1182--1190", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Frank, S. L., Monsalve, I. F., Thompson, R. L., & Vigliocco, G. (2013). Reading time data for eval- uating broad-coverage models of english sentence processing. Behavior Research Methods, 45(4), 1182-1190.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A probabilistic Earley parser as a psycholinguistic model", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Hale", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the second meeting of the north american chapter of the association for computational linguistics on language technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "1--8", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hale, J. (2001). A probabilistic Earley parser as a psy- cholinguistic model. In Proceedings of the second meeting of the north american chapter of the as- sociation for computational linguistics on language technologies (pp. 1-8).", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Scalable modified Kneser-Ney language model estimation", |
| "authors": [], |
| "year": null, |
| "venue": "Proceedings of the 51st annual meeting of the association for computational linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "690--696", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Scalable modi- fied Kneser-Ney language model estimation. In Proceedings of the 51st annual meeting of the association for computational linguistics (pp. 690-696).", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Exploring the limits of language modeling", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Jozefowicz", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Schuster", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1602.02410" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jozefowicz, R., Vinyals, O., Schuster, M., Shazeer, N., & Wu, Y. (2016). Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "The dundee corpus", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Kennedy", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Hill", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Pynte", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 12th european conference on eye movement", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kennedy, A., Hill, R., & Pynte, J. (2003). The dundee corpus. In Proceedings of the 12th european confer- ence on eye movement.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Expectation-based syntactic comprehension", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Cognition", |
| "volume": "106", |
| "issue": "3", |
| "pages": "1126--1177", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Levy, R. (2008). Expectation-based syntactic compre- hension. Cognition, 106(3), 1126-1177.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Syntactic and semantic factors in processing difficulty: An integrated measure", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Mitchell", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Demberg", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Keller", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 48th annual meeting of the association for computational linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "196--206", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mitchell, J., Lapata, M., Demberg, V., & Keller, F. (2010). Syntactic and semantic factors in process- ing difficulty: An integrated measure. In Proceed- ings of the 48th annual meeting of the association for computational linguistics (pp. 196-206).", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Lexical surprisal as a general predictor of reading time", |
| "authors": [], |
| "year": null, |
| "venue": "Proceedings of the 13th conference of the european chapter of the association for computational linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "398--408", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lexical surprisal as a general predictor of reading time. In Proceedings of the 13th conference of the european chapter of the association for computa- tional linguistics (pp. 398-408).", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "R: A language and environment for statistical computing", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "R Core Team", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R Core Team. (2013). R: A language and environment for statistical computing [Computer software man- ual].", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "A mathematical theory of communication", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [ |
| "E" |
| ], |
| "last": "Shannon", |
| "suffix": "" |
| } |
| ], |
| "year": 1948, |
| "venue": "Bell System Technical Journal", |
| "volume": "27", |
| "issue": "3", |
| "pages": "379--423", |
| "other_ids": { |
| "DOI": [ |
| "10.1002/j.1538-7305.1948.tb01338.x" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shannon, C. E. (1948). A mathematical the- ory of communication. Bell System Tech- nical Journal, 27(3), 379-423. Retrieved from http://dx.doi.org/10.1002/ j.1538-7305.1948.tb01338.x doi: 10.1002/j.1538-7305.1948.tb01338.x", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Cloze but no cigar: The complex relationship between cloze, corpus, and subjective probabilities in language processing", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the cognitive science society", |
| "volume": "33", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Smith, N., & Levy, R. (2011). Cloze but no cigar: The complex relationship between cloze, corpus, and subjective probabilities in language process- ing. In Proceedings of the cognitive science society (Vol. 33).", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "The effect of word predictability on reading time is logarithmic", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Cognition", |
| "volume": "128", |
| "issue": "3", |
| "pages": "302--319", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Smith, N., & Levy, R. (2013). The effect of word pre- dictability on reading time is logarithmic. Cogni- tion, 128(3), 302-319.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Srilm at sixteen: Update and outlook", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Stolcke", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Zheng", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Abrash", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of ieee automatic speech recognition and understanding workshop", |
| "volume": "5", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stolcke, A., Zheng, J., Wang, W., & Abrash, V. (2011). Srilm at sixteen: Update and outlook. In Proceed- ings of ieee automatic speech recognition and un- derstanding workshop (Vol. 5).", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Cloze procedure\": a new tool for measuring readability", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [ |
| "L" |
| ], |
| "last": "Taylor", |
| "suffix": "" |
| } |
| ], |
| "year": 1953, |
| "venue": "Journalism quarterly", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taylor, W. L. (1953). \" Cloze procedure\": a new tool for measuring readability. Journalism quarterly.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Stable and efficient multiple smoothing parameter estimation for generalized additive models", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [ |
| "N" |
| ], |
| "last": "Wood", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Journal of the American Statistical Association", |
| "volume": "99", |
| "issue": "467", |
| "pages": "673--686", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wood, S. N. (2004). Stable and efficient multiple smoothing parameter estimation for generalized ad- ditive models. Journal of the American Statistical Association, 99(467), 673-686.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Generalized additive models: An introduction with R", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [ |
| "N" |
| ], |
| "last": "Wood", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wood, S. N. (2017). Generalized additive models: An introduction with R (2nd ed.). Chapman and Hall/CRC.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "text": "Improvements in log likelihood for linear models, charted against decreases in perplexity. Distance from the central trend line is indicative of larger departures in log likelihood as a function of perplexity. The blue line represents a linear best fit, with a coefficient of \u22121.66 and R 2 = 0.94 quality language models cannot be trusted to accurately estimate the size of the effect of surprisal on reading times.", |
| "num": null, |
| "uris": null |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "text": "Changes in the current word's coefficient for linear models, charted against increases in perplexity. Distances from the central trend line are indicative of larger departures of the current word coefficient from the expected trend. Regardless of perplexity, the coefficient is stable. The blue line represents a linear best fit, with a coefficient of \u22122.79 and R 2 = 0.007.", |
| "num": null, |
| "uris": null |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "text": "Regression plot of coefficients on the previous word. The blue line represents a linear best fit, with a coefficient of 0.001 and R 2 = 0.03.", |
| "num": null, |
| "uris": null |
| }, |
| "TABREF0": { |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "text": "The", |
| "content": "<table><tr><td>Language Model</td><td>Perplexity (All Tokens)</td><td>Perplexity (Excluding OOV)</td></tr><tr><td>Interpolated-Optimal</td><td>73.39</td><td>73.41</td></tr><tr><td>Interpolated-Balanced</td><td>76.39</td><td>76.36</td></tr><tr><td>LSTM</td><td>113.27</td><td>113.59</td></tr><tr><td>5-gram</td><td>168.98</td><td>161.43</td></tr><tr><td>4-gram</td><td>172.24</td><td>164.56</td></tr><tr><td>3-gram</td><td>191.13</td><td>182.65</td></tr><tr><td>2-gram</td><td>290.88</td><td>278.36</td></tr></table>" |
| }, |
| "TABREF1": { |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "text": "Perplexity of language models generated either as a LSTM, n-grams, or an interpolation of both the LSTM model as well as the 5-gram model. Perplexities were calculated for the entire Dundee corpus (60, 916 tokens) as well as for only the tokens in the 1b corpus (60, 741 tokens).", |
| "content": "<table/>" |
| }, |
| "TABREF5": { |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "text": "Log likelihood and F statistics for GAMMs with nonlinear smoothers on all covariates", |
| "content": "<table/>" |
| } |
| } |
| } |
| } |