| { |
| "paper_id": "2021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T12:17:12.877686Z" |
| }, |
| "title": "Team ReadMe at CMCL 2021 Shared Task: Predicting Human Reading Patterns by Traditional Oculomotor Control Models and Machine Learning", |
| "authors": [ |
| { |
| "first": "Ali\u015fan", |
| "middle": [], |
| "last": "Balkoca", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Middle East Technical University", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "A", |
| "middle": [ |
| "Can" |
| ], |
| "last": "Algan", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Middle East Technical University", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Cengiz", |
| "middle": [], |
| "last": "Acart\u00fcrk", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Middle East Technical University", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "\u00c7agr\u0131", |
| "middle": [], |
| "last": "\u00c7\u00f6ltekin", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of T\u00fcbingen", |
| "location": {} |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This system description paper describes our participation in CMCL 2021 shared task on predicting human reading patterns. Our focus in this study is making use of well-known, traditional oculomotor control models and machine learning systems. We present experiments with a traditional oculomotor control model (the EZ Reader) and two machine learning models (a linear regression model and a recurrent network model), as well as combining the two different models. In all experiments we test effects of features well-known in the literature for predicting reading patterns, such as frequency, word length and word predictability. Our experiments support the earlier findings that such features are useful when combined. Furthermore, we show that although machine learning models perform better in comparison to traditional models, combination of both gives a consistent improvement for predicting multiple eye tracking variables during reading.", |
| "pdf_parse": { |
| "paper_id": "2021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This system description paper describes our participation in CMCL 2021 shared task on predicting human reading patterns. Our focus in this study is making use of well-known, traditional oculomotor control models and machine learning systems. We present experiments with a traditional oculomotor control model (the EZ Reader) and two machine learning models (a linear regression model and a recurrent network model), as well as combining the two different models. In all experiments we test effects of features well-known in the literature for predicting reading patterns, such as frequency, word length and word predictability. Our experiments support the earlier findings that such features are useful when combined. Furthermore, we show that although machine learning models perform better in comparison to traditional models, combination of both gives a consistent improvement for predicting multiple eye tracking variables during reading.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Oculomotor control in reading has been a wellestablished domain in eye tracking research. From the perspective of perceptual and cognitive mechanisms that drive eye movement control, the characteristics of the visual stimuli is better controlled in reading research than visual scene stimuli. Several computational models have been developed for the past two decades, which aimed at modeling the relationship between a set of independent variables, such as word characteristics and dependent variables, such as fixation duration and location (Kliegl et al., 2006) .", |
| "cite_spans": [ |
| { |
| "start": 542, |
| "end": 563, |
| "text": "(Kliegl et al., 2006)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Based on the theoretical and experimental research in reading, the leading independent variables include the frequency of a word in daily use, the length of the word and its sentential predictability. The term sentential predictability (or word predictability) is used to define predictability score which is the probability of guessing a word from the sequence of previous words of the sentence (Kliegl et al., 2004) . The dependent variables include numerous metrics, including fixation duration metrics such as first fixation duration (FFD) and total gaze time on a word, as well as location and numerosity metrics such as the location of a fixation on a word and gaze-regressions.", |
| "cite_spans": [ |
| { |
| "start": 396, |
| "end": 417, |
| "text": "(Kliegl et al., 2004)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A major caveat of the computational models that have been developed since the past two decades is that they weakly address linguistic concepts beyond the level of the fixated word, with a few exceptions, such as the spillover effects related to the preview of a next word n+1 during the current fixation on word n (Engbert et al., 2005) . These models are also limited in their recognition of syntactic, semantic and discourse characteristics of the text due to their complexity, despite they are indispensable aspects of reading for understanding. Machine Learning (ML) models of oculomotor control address some of those limitations by presenting high predictive power. However, the holistic approach of the learning models has drawbacks in terms of the explainability of the model underpinnings. In this study, we present experiments with a traditional model of oculomotor control in reading, namely the EZ Reader (Reichle et al., 2003) , two ML models (a regression model and a recurrent network model), and their combination. We present an evaluation of the results by focusing on the model inputs that reveal relatively higher accuracy.", |
| "cite_spans": [ |
| { |
| "start": 314, |
| "end": 336, |
| "text": "(Engbert et al., 2005)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 916, |
| "end": 938, |
| "text": "(Reichle et al., 2003)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Accordingly, the aim of the present paper is to investigate the effectiveness of both types of models and their combinations on predicting human reading behavior as set up by the CMCL 2021 shared task (Hollenstein et al., 2021) . The task is defined as predicting five eye-tracking features, namely number of fixations (nFix), first fixation duration (FFD), total reading time (TRT), go-past time (GPT), and fixation proportion (fixProp). The eye-tracking data of the Zurich Congitive Language Processing Corpus (ZuCo 1.0 and ZuCo 2.0) were used (Hollenstein et al., 2018) , (Hollenstein et al., 2019) . Details of these variables and further information on the data set can be found in the task description paper (Hollenstein et al., 2021) .", |
| "cite_spans": [ |
| { |
| "start": 201, |
| "end": 227, |
| "text": "(Hollenstein et al., 2021)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 546, |
| "end": 572, |
| "text": "(Hollenstein et al., 2018)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 575, |
| "end": 601, |
| "text": "(Hollenstein et al., 2019)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 714, |
| "end": 740, |
| "text": "(Hollenstein et al., 2021)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We created our models and identified the input features following the findings in research on oculomotor control in reading. The previous studies have shown that word length, frequency and sentential predictability are well known parameters that influence eye movement patterns in reading (Rayner, 1998) . There exist further parameters that influence eye movement characteristics. For instance, the location of a word in the sentence has been proposed as a predictor on First Fixation Duration (Kliegl et al., 2006) . Therefore, we used those additional parameters to improve the accuracy of the learning models, as well as running a traditions oculomotor control model (viz., the EZ Reader) with its required parameter set. Below we present a description of the models that have been employed in the present study.", |
| "cite_spans": [ |
| { |
| "start": 289, |
| "end": 303, |
| "text": "(Rayner, 1998)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 495, |
| "end": 516, |
| "text": "(Kliegl et al., 2006)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methodology", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EZ Reader has been developed as a rule-based model of oculomotor control in reading. It predicts eye movement parameters, such as single fixation duration, first fixation duration and total reading time. The model efficiently addresses some of experimental research findings in the reading literature. For example, a saccade completion takes about 20-50 msec to complete, and saccade length is about 7-9 characters (Rayner, 2009) . The model consists of three main processing modules. The oculomotor system is responsible for generating and executing saccades by calculating the saccade length. The visual system controls the attention of the reader. Finally, the word identification system calculates the time for identifying a word, mainly based on the word length and the frequency of word in daily use. EZ Reader accepts four arguments as its input; frequency (count in million), word length (number of characters), sentential predictability of the word, and the word itself. The output features of the model are given in Table 1 .", |
| "cite_spans": [ |
| { |
| "start": 415, |
| "end": 429, |
| "text": "(Rayner, 2009)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1026, |
| "end": 1033, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The EZ Reader Model", |
| "sec_num": "2.1.1" |
| }, |
| { |
| "text": "Among those features, FFD and TT outputs of EZ Reader directly map to FFD and TRT (Total Reading Time) in the training data of the CMCL", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The EZ Reader Model", |
| "sec_num": "2.1.1" |
| }, |
| { |
| "text": "Feature Description EZ-SFD Single Fixation Duration EZ-FFD First Fixation Duration EZ-GD Gaze Duration EZ-TT Total Reading Time EZ-PrF Fixation Probability EZ-Pr1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The EZ Reader Model", |
| "sec_num": "2.1.1" |
| }, |
| { |
| "text": "Probability of making exactly one fixation EZ-Pr2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The EZ Reader Model", |
| "sec_num": "2.1.1" |
| }, |
| { |
| "text": "Probability of making two or more fixations EZ-PrS Probability of skipping Table 2 presents the Mean Absolute Error (MAE) values of the test set, when predicted by the EZ Reader model. In the design of the EZ Reader model, we assumed the simulation count as 2000 participants, which means that the model runs 2000 distinct simulations and the result scores consist of the average of the simulation results. 2000 participants is the optimum number for our case in terms of simulation time and the MAE it produces. Above 2000 participants MEA did not decrease significantly.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 75, |
| "end": 82, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The EZ Reader Model", |
| "sec_num": "2.1.1" |
| }, |
| { |
| "text": "A major challenge in designing the EZ Reader model is that the model is not able to produce the output values for some of the words, labeling them Infinity. Those are usually long words with relatively low frequency. In order to find an optimal value to fill in the Infinity slots, we calculated the mean absolute error between TRT of the training data and the TT values of EZ Reader model results, as an operational assumption. The calculation returned 284 ms. Figure 1 shows the MAE scores over given values between 0 to 500. This value is close to the average fixation duration for skilled readers which is about 200ms -250ms (Rayner, 1998) . Therefore, we preserved the assumption in model development pipeline. In the present study, besides calculating the mean absolute error values for the EZ Reader model, we employed the outputs of the EZ Reader model as inputs to the LSTM model. Below, we present the model for the Linear Baseline and the LSTM model.", |
| "cite_spans": [ |
| { |
| "start": 629, |
| "end": 643, |
| "text": "(Rayner, 1998)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 462, |
| "end": 470, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The EZ Reader Model", |
| "sec_num": "2.1.1" |
| }, |
| { |
| "text": "Our linear model is a least-squares regression model with L2 regularization (ridge regression). The input features to the regression model include the main word-level features, frequency, word length and predictability discussed above. Word frequencies are computed using a large news corpus of approximately 2.7 million articles. 1 The predictability features are obtained using the recurrent network described in Section 2.2. Besides these features, we also include some linguistic features including the POS tag, dependency relation, and signed distance from the head, as well as named entity tag. The POS and dependency information is obtained using version 1.2 of UDPipe using the pretrained models released by the authors (Straka and Strakov\u00e1, 2017; Straka and Strakov\u00e1, 2019) . We used Apache OpenNLP (Apache Software Foundation, 2014) for named entity recognition. The model input also included indicator features for beginning and end of sentence, and whether the word is combined with a punctuation mark or not (see Table 3 ). We also included the features from EZ-reader described in Section 2.1.1 as additional inputs to the regression model.", |
| "cite_spans": [ |
| { |
| "start": 728, |
| "end": 755, |
| "text": "(Straka and Strakov\u00e1, 2017;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 756, |
| "end": 782, |
| "text": "Straka and Strakov\u00e1, 2019)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1026, |
| "end": 1033, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Linear Baseline", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "The predictions were based on a symmetric window around the target word, where all the above features for the target word and \u00b1k words were concatenated. We selected the optimal window size as well as the regularization constant (alpha) 1 'All the news' data set, available from https://components.one/datasets/ all-the-news-2-news-articles-dataset/. Named entity category (person and company names, etc.) LB EZ Reader simulation outputs see Table 1 LB-LSTM for the ridge regression model via grid search. The grid search is used to determine a single same alpha and single window size for all target variables. We use the ridge regression implementation of the Python scikit-learn library (Pedregosa et al., 2011) .", |
| "cite_spans": [ |
| { |
| "start": 690, |
| "end": 714, |
| "text": "(Pedregosa et al., 2011)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 442, |
| "end": 449, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Linear Baseline", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "The LSTM model consists of an LSTM layer with 128 units followed by two dense layers and 5dimensional output layer. The input features of the model include word length in total number of characters, word predictability, frequency per million, the location of the word in the sentence, the presence of a punctuation before the word, the presence of the punctuation at the end, and the end of sentence, being the last word of the sentence or not. Finally, the input features included the outputs of the typical EZ Reader model (given in Table 1 ). The output features of the LSTM model the variables identified by the CMCL 2021 share task, namely nFix, FFD, GPT, TT, and fixProp.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 535, |
| "end": 543, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "LSTM Model", |
| "sec_num": "2.1.3" |
| }, |
| { |
| "text": "Sentential predictability of a word in a context is a well-established predictor of eye movement patterns in reading (Fern\u00e1ndez et al., 2014; Kliegl et al., 2004; Clifton et al., 2016) . We used two methods to generate the predictability values. First, we used the average human predictability scores from the Provo Corpus (Luke and Christianson, 2018), which is a public eye-tracking dataset collected from real participants. The Provo Corpus includes the cloze task results in which participants are given the starting word of the sentence and expected to guess the next word. The actual word is shown after the participant's guess and prediction for the next word is expected. This process continues for all of the words. Prediction value is generated for each word in corpus by simply calculating the ratio of the correct guesses to all guesses for the word. We selected 1.0 as the default prediction value for words which does not exist in the Provo Corpus. The mean absolute error for TRT between EZ Reader output and CMCL train data was at minimum when default prediction value is 1.0. Second, we developed a separate LSTM model that produced sentential predictability values. For this, we trained the model by Wikipedia. 2 Since the primary goal of the model was to predict eye movement patterns per word, we built a word-level language model. The model consisted of two LSTM layers with 1200 hidden units. It was trained with a learning rate of 0.0001, and a dropout value was set to 0.2, with the Adam optimizer. After the training, we obtained the predictability scores for each word based on their sentential context. These scores were then used as an additional feature in our final model besides the other features, such as word length and frequency.", |
| "cite_spans": [ |
| { |
| "start": 117, |
| "end": 141, |
| "text": "(Fern\u00e1ndez et al., 2014;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 142, |
| "end": 162, |
| "text": "Kliegl et al., 2004;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 163, |
| "end": 184, |
| "text": "Clifton et al., 2016)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Predictability Scores", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Provo Corpus predictability values are independent from the context of text used in the shared task. However using predictability values from the first method gave better results than the calculated predictability. Therefore we used Provo Corpus predictability values for the results in the following sections.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Predictability Scores", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "We participated in the CMCL 2021 shared task with two submissions, one with the linear model described in Section 2.1.2, and the other with the LSTM model (Section 2.1.3). Table 4 presents the scores of our system in the competition, in comparison to mean baseline and the best system. Our systems perform substantially better than the baseline, and the difference between the scores of the participating teams are comparatively small. Among our models, the linear model performed slightly better, obtaining 10th place in the compe- 2 We use the sentence segmented corpus from https://www.kaggle.com/mikeortman/ wikipedia-sentences. tition. However, experimenting with the LSTM model gave us more information about the basic features of eye movements in reading and their effects on fixation durations. For the remainder of this section, we will present further experiments with the LSTM model, demonstrating the effects of various features discussed above.", |
| "cite_spans": [ |
| { |
| "start": 533, |
| "end": 534, |
| "text": "2", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 172, |
| "end": 179, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "To demonstrate the effectiveness of the features described above, we trained a number of models that employed a set of input variables in isolation, as well as the models trained by their combination.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Further Experiments", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In particular, we trained a model on frequency, then predictability, and then word length. Then we trained models by their combinations as input features. Each model produced a MAE (mean absolute error) value. We then calculated the average of the MAE values for each model output (nFix, FFD, GPT, TRT, and FixProp) and their Standard Deviation (SD). Finally, we calculated how far each model was away from the average MAE in terms of the SDs. Table 5 presents MAE scores for each setting. The figures in the Appendix A show the distance of the models from the center of the circle, where the center represents the best MAE score and the circle represents the distance covered by one SD (Standard Deviation) from the best accuracy (i.e. the center). The models that received the combination of frequency, predictability, word length and E-Z SFD (i.e., E-Z Reader's single fixation duration prediction) as the input returned the best MAE values for four of five dependent variables. As an example, consider the MAE values for the models developed for predicting the nFix (the number of fixations on a word). Figure 2 shows that the majority of the models that are based on features in isolation have relatively lower predictability compared to the models that take a combination of the features as the inputs. In particular, the predictability model (i.e., the model that is solely based on the predictability values as the input feature) has a mean MAE value 1.75 times the SD (Standard Deviation). Similarly, the word-length model has approximately 3 times the SD from the best score, and the EZ-SFD model (i.e., the model that is solely based on the single fixation duration predictions of the EZ Reader model) has a mean MAE value far away from the mean by 2.5 times the SD value.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 444, |
| "end": 451, |
| "text": "Table 5", |
| "ref_id": "TABREF7" |
| }, |
| { |
| "start": 1107, |
| "end": 1115, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Further Experiments", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In this paper, we analyzed a linear baseline model and an LSTM model that employed the outputs of a traditional model as its inputs. We built models with input features in isolation, and their combination. The evaluation of the mean absolute errors (MAE) supported a major finding in reading research: The oculomotor processes in reading are influenced by multiple factors. Temporal and spatial aspects of eye movement control are determined by linguistic factors as well as low-level nonlinguistic factors (Rayner, 1998; Kliegl and Engbert, 2013) . The models that employ their combinations return higher accuracy. Our findings also indicate that besides the frequently used features in the literature, the EZ-SFD (single frequency duration outputs of the EZ Reader model) may contribute to the accuracy of the learning based models. Nevertheless, given the high variability of the machine learning model outputs a systematic investigation is necessary that address several operational assumptions in the present study. In particular, future research should improve statistical analysis for comparing the model outputs. It should also address the influence of the location of a word in a sentence, besides its interaction with the duration measures. Last but not the least, future research on developing ML models of oculomotor control in reading should focus on the relationship between the aspects of the ML model design and basic findings in reading research. The GCMW (Gaze Contingent Moving Window) paradigm and the boundary paradigm (Rayner, 2014) are some examples of those findings that could be used for oculomotor control modeling. ", |
| "cite_spans": [ |
| { |
| "start": 507, |
| "end": 521, |
| "text": "(Rayner, 1998;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 522, |
| "end": 547, |
| "text": "Kliegl and Engbert, 2013)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 1540, |
| "end": 1554, |
| "text": "(Rayner, 2014)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "4" |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Apache Software Foundation. 2014. openNLP Natural Language Processing Library", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Apache Software Foundation. 2014. openNLP Nat- ural Language Processing Library. http:// opennlp.apache.org/.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Eye movements in reading and information processing: Keith Rayner's 40 year legacy", |
| "authors": [ |
| { |
| "first": "Charles", |
| "middle": [], |
| "last": "Clifton", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernanda", |
| "middle": [], |
| "last": "Ferreira", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [ |
| "M" |
| ], |
| "last": "Henderson", |
| "suffix": "" |
| }, |
| { |
| "first": "Albrecht", |
| "middle": [ |
| "W" |
| ], |
| "last": "Inhoff", |
| "suffix": "" |
| }, |
| { |
| "first": "Simon", |
| "middle": [ |
| "P" |
| ], |
| "last": "Liversedge", |
| "suffix": "" |
| }, |
| { |
| "first": "Erik", |
| "middle": [ |
| "D" |
| ], |
| "last": "Reichle", |
| "suffix": "" |
| }, |
| { |
| "first": "Elizabeth", |
| "middle": [ |
| "R" |
| ], |
| "last": "Schotter", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Journal of Memory and Language", |
| "volume": "86", |
| "issue": "", |
| "pages": "1--19", |
| "other_ids": { |
| "DOI": [ |
| "10.1016/J.JML.2015.07.004" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Charles Clifton, Fernanda Ferreira, John M. Hen- derson, Albrecht W. Inhoff, Simon P. Liversedge, Erik D. Reichle, and Elizabeth R. Schotter. 2016. Eye movements in reading and information process- ing: Keith Rayner's 40 year legacy. Journal of Mem- ory and Language, 86:1-19.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "SWIFT: a dynamical model of saccade generation during reading", |
| "authors": [ |
| { |
| "first": "Ralf", |
| "middle": [], |
| "last": "Engbert", |
| "suffix": "" |
| }, |
| { |
| "first": "Antje", |
| "middle": [], |
| "last": "Nuthmann", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Eike", |
| "suffix": "" |
| }, |
| { |
| "first": "Reinhold", |
| "middle": [], |
| "last": "Richter", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Kliegl", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Psychological review", |
| "volume": "112", |
| "issue": "4", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ralf Engbert, Antje Nuthmann, Eike M Richter, and Reinhold Kliegl. 2005. SWIFT: a dynamical model of saccade generation during reading. Psychological review, 112(4):777.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Eye movements during reading proverbs and regular sentences: The incoming word predictability effect. Language", |
| "authors": [ |
| { |
| "first": "Gerardo", |
| "middle": [], |
| "last": "Fern\u00e1ndez", |
| "suffix": "" |
| }, |
| { |
| "first": "Diego", |
| "middle": [ |
| "E" |
| ], |
| "last": "Shalom", |
| "suffix": "" |
| }, |
| { |
| "first": "Reinhold", |
| "middle": [], |
| "last": "Kliegl", |
| "suffix": "" |
| }, |
| { |
| "first": "Mariano", |
| "middle": [], |
| "last": "Sigman", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Cognition and Neuroscience", |
| "volume": "29", |
| "issue": "3", |
| "pages": "260--273", |
| "other_ids": { |
| "DOI": [ |
| "10.1080/01690965.2012.760745" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gerardo Fern\u00e1ndez, Diego E. Shalom, Reinhold Kliegl, and Mariano Sigman. 2014. Eye movements during reading proverbs and regular sentences: The incom- ing word predictability effect. Language, Cognition and Neuroscience, 29(3):260-273.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Cmcl 2021 shared task on eye-tracking prediction", |
| "authors": [ |
| { |
| "first": "Nora", |
| "middle": [], |
| "last": "Hollenstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Emmanuele", |
| "middle": [], |
| "last": "Chersoni", |
| "suffix": "" |
| }, |
| { |
| "first": "Cassandra", |
| "middle": [], |
| "last": "Jacobs", |
| "suffix": "" |
| }, |
| { |
| "first": "Yohei", |
| "middle": [], |
| "last": "Oseki", |
| "suffix": "" |
| }, |
| { |
| "first": "Laurent", |
| "middle": [], |
| "last": "Pr\u00e9vot", |
| "suffix": "" |
| }, |
| { |
| "first": "Enrico", |
| "middle": [], |
| "last": "Santus", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nora Hollenstein, Emmanuele Chersoni, Cassandra Ja- cobs, Yohei Oseki, Laurent Pr\u00e9vot, and Enrico San- tus. 2021. Cmcl 2021 shared task on eye-tracking prediction. In Proceedings of the Workshop on Cog- nitive Modeling and Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "ZuCo, a simultaneous EEG and eye-tracking resource for natural sentence reading. Scientific data", |
| "authors": [ |
| { |
| "first": "Nora", |
| "middle": [], |
| "last": "Hollenstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Rotsztejn", |
| "suffix": "" |
| }, |
| { |
| "first": "Marius", |
| "middle": [], |
| "last": "Troendle", |
| "suffix": "" |
| }, |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Pedroni", |
| "suffix": "" |
| }, |
| { |
| "first": "Ce", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicolas", |
| "middle": [], |
| "last": "Langer", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "5", |
| "issue": "", |
| "pages": "1--13", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nora Hollenstein, Jonathan Rotsztejn, Marius Troen- dle, Andreas Pedroni, Ce Zhang, and Nicolas Langer. 2018. ZuCo, a simultaneous EEG and eye-tracking resource for natural sentence reading. Scientific data, 5(1):1-13.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Zuco 2.0: A dataset of physiological recordings during natural reading and annotation", |
| "authors": [ |
| { |
| "first": "Nora", |
| "middle": [], |
| "last": "Hollenstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Marius", |
| "middle": [], |
| "last": "Troendle", |
| "suffix": "" |
| }, |
| { |
| "first": "Ce", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicolas", |
| "middle": [], |
| "last": "Langer", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1912.00903" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nora Hollenstein, Marius Troendle, Ce Zhang, and Nicolas Langer. 2019. Zuco 2.0: A dataset of physi- ological recordings during natural reading and anno- tation. arXiv preprint arXiv:1912.00903.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Evaluating a computational model of eye-movement control in reading", |
| "authors": [ |
| { |
| "first": "Reinhold", |
| "middle": [], |
| "last": "Kliegl", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralf", |
| "middle": [], |
| "last": "Engbert", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Models, simulations, and the reduction of complexity", |
| "volume": "", |
| "issue": "", |
| "pages": "153--178", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Reinhold Kliegl and Ralf Engbert. 2013. Evaluating a computational model of eye-movement control in reading. In Models, simulations, and the reduction of complexity, pages 153-178. De Gruyter.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Length, frequency, and predictability effects of words on eye movements in reading", |
| "authors": [ |
| { |
| "first": "Reinhold", |
| "middle": [], |
| "last": "Kliegl", |
| "suffix": "" |
| }, |
| { |
| "first": "Ellen", |
| "middle": [], |
| "last": "Grabner", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Rolfs", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralf", |
| "middle": [], |
| "last": "Engbert", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "European journal of cognitive psychology", |
| "volume": "16", |
| "issue": "1-2", |
| "pages": "262--284", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Reinhold Kliegl, Ellen Grabner, Martin Rolfs, and Ralf Engbert. 2004. Length, frequency, and predictability effects of words on eye movements in reading. Euro- pean journal of cognitive psychology, 16(1-2):262- 284.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Tracking the mind during reading: The influence of past, present, and future words on fixation durations", |
| "authors": [ |
| { |
| "first": "Reinhold", |
| "middle": [], |
| "last": "Kliegl", |
| "suffix": "" |
| }, |
| { |
| "first": "Antje", |
| "middle": [], |
| "last": "Nuthmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralf", |
| "middle": [], |
| "last": "Engbert", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Journal of experimental psychology: General", |
| "volume": "135", |
| "issue": "1", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Reinhold Kliegl, Antje Nuthmann, and Ralf Engbert. 2006. Tracking the mind during reading: The in- fluence of past, present, and future words on fixa- tion durations. Journal of experimental psychology: General, 135(1):12.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "The provo corpus: A large eye-tracking corpus with predictability norms", |
| "authors": [ |
| { |
| "first": "Steven", |
| "middle": [ |
| "G" |
| ], |
| "last": "Luke", |
| "suffix": "" |
| }, |
| { |
| "first": "Kiel", |
| "middle": [], |
| "last": "Christianson", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Behavior research methods", |
| "volume": "50", |
| "issue": "2", |
| "pages": "826--833", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Steven G Luke and Kiel Christianson. 2018. The provo corpus: A large eye-tracking corpus with predictabil- ity norms. Behavior research methods, 50(2):826- 833.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Scikit-learn: Machine learning in Python", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Pedregosa", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Varoquaux", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Gramfort", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Michel", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Thirion", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Grisel", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Blondel", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Prettenhofer", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Weiss", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Dubourg", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Vanderplas", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Passos", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Cournapeau", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Brucher", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Perrot", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Duchesnay", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "12", |
| "issue": "", |
| "pages": "2825--2830", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duch- esnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Eye movements in reading and information processing: 20 years of research", |
| "authors": [ |
| { |
| "first": "Keith", |
| "middle": [], |
| "last": "Rayner", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Psychological bulletin", |
| "volume": "124", |
| "issue": "3", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Keith Rayner. 1998. Eye movements in reading and information processing: 20 years of research. Psy- chological bulletin, 124(3):372.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Eye Movements in Reading: Models and Data", |
| "authors": [ |
| { |
| "first": "Keith", |
| "middle": [], |
| "last": "Rayner", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Journal of Eye Movement Research", |
| "volume": "2", |
| "issue": "5", |
| "pages": "1--10", |
| "other_ids": { |
| "DOI": [ |
| "10.16910/jemr.2.5.2" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Keith Rayner. 2009. Eye Movements in Reading: Mod- els and Data. Journal of Eye Movement Research, 2(5):1-10.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "The gaze-contingent moving window in reading: Development and review", |
| "authors": [ |
| { |
| "first": "Keith", |
| "middle": [], |
| "last": "Rayner", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Visual Cognition", |
| "volume": "22", |
| "issue": "3-4", |
| "pages": "242--258", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Keith Rayner. 2014. The gaze-contingent moving win- dow in reading: Development and review. Visual Cognition, 22(3-4):242-258.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "The EZ reader model of eye-movement control in reading: Comparisons to other models. Behavioral and brain sciences", |
| "authors": [ |
| { |
| "first": "Erik", |
| "middle": [ |
| "D" |
| ], |
| "last": "Reichle", |
| "suffix": "" |
| }, |
| { |
| "first": "Keith", |
| "middle": [], |
| "last": "Rayner", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Pollatsek", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "26", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Erik D Reichle, Keith Rayner, and Alexander Pollatsek. 2003. The EZ reader model of eye-movement con- trol in reading: Comparisons to other models. Be- havioral and brain sciences, 26(4):445.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Tokenizing, pos tagging, lemmatizing and parsing UD 2.0 with UD-Pipe", |
| "authors": [ |
| { |
| "first": "Milan", |
| "middle": [], |
| "last": "Straka", |
| "suffix": "" |
| }, |
| { |
| "first": "Jana", |
| "middle": [], |
| "last": "Strakov\u00e1", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies", |
| "volume": "", |
| "issue": "", |
| "pages": "88--99", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Milan Straka and Jana Strakov\u00e1. 2017. Tokenizing, pos tagging, lemmatizing and parsing UD 2.0 with UD- Pipe. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Univer- sal Dependencies, pages 88-99, Vancouver, Canada. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Universal dependencies 2.5 models for UDPipe", |
| "authors": [ |
| { |
| "first": "Milan", |
| "middle": [], |
| "last": "Straka", |
| "suffix": "" |
| }, |
| { |
| "first": "Jana", |
| "middle": [], |
| "last": "Strakov\u00e1", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Milan Straka and Jana Strakov\u00e1. 2019. Universal de- pendencies 2.5 models for UDPipe (2019-12-06).", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "LINDAT/CLARIAH-CZ digital library at the Insti- tute of Formal and Applied Linguistics (\u00daFAL), Fac- ulty of Mathematics and Physics, Charles Univer- sity.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "Mean Absolute Error scores over given values for Infinity slot" |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "MAE scores for fixPropFigure 6: MAE scores for TRT" |
| }, |
| "TABREF0": { |
| "type_str": "table", |
| "num": null, |
| "text": "EZ Reader output features.", |
| "html": null, |
| "content": "<table><tr><td colspan=\"2\">EZ Reader Training Data</td><td>MAE</td></tr><tr><td>TT</td><td>Total Reading Time</td><td>3.25</td></tr><tr><td>FFD</td><td>First Fixation Duration</td><td>9.14</td></tr></table>" |
| }, |
| "TABREF1": { |
| "type_str": "table", |
| "num": null, |
| "text": "", |
| "html": null, |
| "content": "<table/>" |
| }, |
| "TABREF3": { |
| "type_str": "table", |
| "num": null, |
| "text": "Input features used in Linear Baseline and LSTM model.", |
| "html": null, |
| "content": "<table/>" |
| }, |
| "TABREF5": { |
| "type_str": "table", |
| "num": null, |
| "text": "Official scores (MAE) of our models in comparison to mean baseline and the first team (LAST) in the competition.", |
| "html": null, |
| "content": "<table/>" |
| }, |
| "TABREF7": { |
| "type_str": "table", |
| "num": null, |
| "text": "MAE for with different feature combinations.", |
| "html": null, |
| "content": "<table/>" |
| } |
| } |
| } |
| } |