{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:16:59.520800Z"
},
"title": "PIHKers at CMCL 2021 Shared Task: Cosine Similarity and Surprisal to Predict Human Reading Patterns",
"authors": [
{
"first": "Lavinia",
"middle": [],
"last": "Salicchi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Hong Kong Polytechnic University",
"location": {}
},
"email": "lavinia.salicchi@connect.polyu.hk"
},
{
"first": "Alessandro",
"middle": [],
"last": "Lenci",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e0 di Pisa",
"location": {}
},
"email": "alessandro.lenci@unipi.it"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Eye-tracking psycholinguistic studies have revealed that context-word semantic coherence and predictability influence language processing. In this paper we show our approach to predict eye-tracking features from the ZuCo dataset for the shared task of the Cognitive Modeling and Computational Linguistics (CMCL2021) workshop. Using both cosine similarity and surprisal within a regression model, we significantly improved the baseline Mean Absolute Error computed among five eye-tracking features.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Eye-tracking psycholinguistic studies have revealed that context-word semantic coherence and predictability influence language processing. In this paper we show our approach to predict eye-tracking features from the ZuCo dataset for the shared task of the Cognitive Modeling and Computational Linguistics (CMCL2021) workshop. Using both cosine similarity and surprisal within a regression model, we significantly improved the baseline Mean Absolute Error computed among five eye-tracking features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The shared task proposed by the organizers of the Cognitive Modeling and Computational Linguistics workshop (Hollenstein et al., 2021) requires participant to create systems capable of predicting eye-tracking data from the ZuCo dataset (Hollenstein et al., 2018) . Creating systems to efficiently predict biometrical data may be useful to make prediction about linguistic materials for which we have few or none experimental data, and to make new hypothesis about the internal dynamics of cognitive processes.",
"cite_spans": [
{
"start": 108,
"end": 134,
"text": "(Hollenstein et al., 2021)",
"ref_id": "BIBREF7"
},
{
"start": 236,
"end": 262,
"text": "(Hollenstein et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The approach we propose relies mainly on two factors that have been proved to influence language comprehension: i.) the semantic coherence of a word with the previous ones (Ehrlich and Rayner, 1981) and ii.) its predictability from previous context (Kliegl et al., 2004) . We model the first factor with the cosine similarity (Mitchell et al., 2010; Pynte et al., 2008) between the distributional vectors, representing the context and the target word, produced by different Distributional Semantic Models (DSM) (Lenci, 2018) . We compared 10 state-of-the-art word embedding models, and two different approaches to compute the context vector. We model the predictability of a word within the context with the word-by-word surprisal computed with 3 of the above mentioned models (Hale, 2001; Levy, 2008) . Finally, cosine similarity and surprisal are combined in different regression models to predict eye tracking data.",
"cite_spans": [
{
"start": 172,
"end": 198,
"text": "(Ehrlich and Rayner, 1981)",
"ref_id": "BIBREF4"
},
{
"start": 249,
"end": 270,
"text": "(Kliegl et al., 2004)",
"ref_id": "BIBREF10"
},
{
"start": 326,
"end": 349,
"text": "(Mitchell et al., 2010;",
"ref_id": "BIBREF16"
},
{
"start": 350,
"end": 369,
"text": "Pynte et al., 2008)",
"ref_id": "BIBREF19"
},
{
"start": 511,
"end": 524,
"text": "(Lenci, 2018)",
"ref_id": "BIBREF11"
},
{
"start": 777,
"end": 789,
"text": "(Hale, 2001;",
"ref_id": "BIBREF6"
},
{
"start": 790,
"end": 801,
"text": "Levy, 2008)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Different word embedding models (GloVe, Word2Vec, WordNet2Vec, FastText, ELMo, BERT) have been evaluated in the framework proposed by Hollenstein et al. (2019) . The evaluation is based on the model capability to reflect semantic representations in the human mind, using cognitive data in different datasets for eye-tracking, EEG, and fMRI. Word embedding models are used to train neural networks on a regression task. The results of their analyses show that BERT, ELMo, and FastText have the best prediction performances.",
"cite_spans": [
{
"start": 134,
"end": 159,
"text": "Hollenstein et al. (2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "Regression models with different combinations of cosine similarity and surprisal, to predict (and further study the cognitive dynamics beneath) eye movements have been created by Frank (2017), who claims that, since word embeddings are based on co-occurrences, semantic distance may actually represent word predictability, rather than semantic relatedness, and that previous findings showing correlations between reading times and semantic distance were actually due to a confound between these two concepts. In his work, he uses linear regression models testing different surprisal measures, and excluding it. The results show that when surprisal is factored out, the effects of semantic similarity on reading times disappear, proving thus the existence of an interplay between the two elements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "3 Experimental Setting",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "The shared task materials come from ZuCo (Hollenstein et al., 2018) , that includes EEG and eyetracking data, collected on 12 English speakers reading natural texts. The data collection has been done in three different settings: two normal reading tasks and one task-specific reading session. The original dataset comprises 1, 107 sentences, and for the shared task 800 sentences (15, 736 words) have been used for the training data, while the test set included about 200 sentences (3, 554 words). Since the shared task focuses on eye-tracking features, only this latter data were available. The training dataset structure includes sentence number, wordwithin-sentence number, word, number of fixations (nFix), first fixation duration (FFD), total reading time (TRT), go-past time (GPT), fixation proportion (fixProp). The first three elements were part of the test set too.",
"cite_spans": [
{
"start": 41,
"end": 67,
"text": "(Hollenstein et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "Our approach includes a preliminary step of feature selection. For this purpose we also used GECO (Cop et al., 2017) and Provo (Luke and Christianson, 2018), two eye-tracking corpora containing long, complete, and coherent texts. GECO is a monolingual and bilingual (English and Dutch) corpus composed of the entire Agatha Christie's novel The Mysterious Affair at Styles. GECO contains eye-tracking data of 33 subjects (19 of them bilingual, 14 English monolingual) reading the full novel text, presented paragraph-by-paragraph on a screen. GECO is composed of 54, 364 tokens. Provo contains 55 short English texts about various topics, for a total of 2, 689 tokens, and a vocabulary of 1, 197 words. These texts were read by 85 subjects and their eye-tracking measures were collected in an available on-line dataset. Similarly to ZuCo, GECO and Provo data are recorded during naturalistic reading on everyday life materials. For every word in GECO and Provo, we extracted its mean total reading time, mean first fixation duration, and mean number of fixations, by averaging over the subjects. Table 1 shows the embeddings types used in our experiments, consisting of 6 non-contextualized DSMs and 4 contextualized DSMs. The former include predict models (SGNS and FastText) (Mikolov et al., 2013; Levy and Goldberg, 2014; Bojanowski et al., 2017) and count models (SVD and GloVe) (Bullinaria and Levy, 2012; Pennington et al., 2014) . Four DSMs are window-based and two are syntax-based (synt). Embeddings have 300 dimensions and were trained on the same corpus of about 3.9 billion tokens, which is a concatenation of ukWaC and a 2018 dump of Wikipedia.",
"cite_spans": [
{
"start": 98,
"end": 116,
"text": "(Cop et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 1276,
"end": 1298,
"text": "(Mikolov et al., 2013;",
"ref_id": "BIBREF15"
},
{
"start": 1299,
"end": 1323,
"text": "Levy and Goldberg, 2014;",
"ref_id": "BIBREF12"
},
{
"start": 1324,
"end": 1348,
"text": "Bojanowski et al., 2017)",
"ref_id": "BIBREF0"
},
{
"start": 1398,
"end": 1409,
"text": "Levy, 2012;",
"ref_id": "BIBREF1"
},
{
"start": 1410,
"end": 1434,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 1095,
"end": 1102,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "Pre-trained contextualized embeddings include the 512-dimensional vectors produced by the three layers of the ELMo bidirectional LSTM architecture (Peters et al., 2018) , the 1, 024-dimensional vectors in the 24 layers of BERT-Large Transformers (BERT-Large, Cased) (Devlin et al., 2019) , the 1, 600-dimensional vectors of GPT2-xl (Radford et al.) , and the 200-dimensional vectors produced by the Neural Complexity model (van Schijndel and Linzen, 2018).",
"cite_spans": [
{
"start": 147,
"end": 168,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 266,
"end": 287,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 332,
"end": 348,
"text": "(Radford et al.)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Embeddings",
"sec_num": "3.2"
},
{
"text": "To predict eye tracking data we tested different regression models and several features combinations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.3"
},
{
"text": "Feature Selection. To select the features to be used, for each word embedding model and language model we carried out a preliminary investigation computing Spearman's correlation between eye tracking features, and respectively surprisal and cosine similarity: The features with the highest correlation with biometrical data have been selected for being used in the regression model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.3"
},
{
"text": "For each target word w in GECO, Provo and ZuCo, we measure the cosine similarity between the embedding of w and the embedding of the context c composed of the previous words in the same sentence. We then compute the Spearman correlation between the cosine and the eye-tracking data for w. We test two different ways of computing the context embedding: Additive model (for every embedding type): The context vector is the sum of all its word embeddings. Because of the bidirectional nature of BERT, the input to this model needed a special pre-processing. In order to prevent that the vectors representing words within the context were computed using the target word itself, we passed to BERT a list of sub-sentences, each of which were composed of context words only. So given the sentence The dog chases the cat: Starting from the second sub-sentence, the cosine similarity is computed between the last word vector and the sum of words vectors belonging to the previous sub-sentence (list element). Therefore, to compute the cosine similarity between cat and the previous context, we select cat from S[4] and T he + dog + chases + the from S[3]. CLS: The context vector is the embedding produced by BERT for the special token [CLS] . As for the additive model, BERT was fed with subsentences, and for each target word the CLScontext-vector was the one computed at the previous list element. So, looking at the previous example, for cat as target word, we will use the CLS vector representing all the S[3] elements. Given the positive effect of semantic coherence on language processing, we expect that the eyetracking data for w have a negative correlation with its cosine similarity with c: The higher the cosine, the lower the reading time of w measured by eyetracking.",
"cite_spans": [
{
"start": 1227,
"end": 1232,
"text": "[CLS]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.3"
},
{
"text": "S[0] = [\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.3"
},
{
"text": "We then used BERT, GPT2-xl and Neural Complexity to compute word-by-word surprisal. As for the cosine similarity, for BERT the input sentences were organized in sub-sentences, and the last token, the target word, was replaced with the special tag [MASK] . Finally, we compute the Spearman correlation between the surprisal of w, and the eye-tracking data for the target word. Differently from the cosine, we expect the surprisal to be positively correlated with the word reading time: The less predictable a word, the slower its processing.",
"cite_spans": [
{
"start": 247,
"end": 253,
"text": "[MASK]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.3"
},
{
"text": "The comparison has been done between 60 possible features: 6 values of cosine similarity between non-contextualized vectors, 51 values of cosine similarity between contextualized vectors (48 from 24 layers of BERT in two different ways to compute the context vector, and 3 from ELMo, GPT2-xl and Neural Complexity), 3 values of surprisal from BERT, GPT2-xl, Neural Complexity. Based on the correlation values, we selected one cosine similarity feature and one surprisal feature, that have been combined with two variables that are wellknown in the cognitive neuroscience literature for influencing eye movements: word length and word frequency, the last one computed on Wikipedia 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.3"
},
{
"text": "Regression Model Selection. Taking into account the Spearman's correlations, we selected one word embedding model for cosine similarity and one Language Model for surprisal. Then, different kind of regression models from Scikit-learn have been compared. More precisely, PLS Regression, Multi-layer Perceptron Regressor, Random Forest Regressor, Linear Regression, Ridge Regression, Bayesian ridge regression, Epsilon-Support Vector Regression, Linear regression with combined L1 and L2 priors as Regularizer, Gradient Boosting Regressor. The metric used to evaluate different models is the Mean Absolute Error on ZuCo's eye tracking features prediction. Once the model and the features have been selected, the comparison between 3 different regression settings has been done: i) surprisal only; ii) cosine similarity only; iii) surprisal + cosine similarity. For the regression model selection, we used 2/3 of the ZuCo training set to train the model, and 1/3 for validation purposes. Once we found the best (i.e. lower MAE among eye tracking data) combination of features and regression model, the prediction on test data has been done.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3.3"
},
{
"text": "Spearman's correlations between eye-tracking features and cosine similarity showed that the best performances are reached by the vectors produced by BERT layer 22 with the CLS context (mean correlation over the eye-tracking features on the three datasets: \u22120.62), while the best correlations between eye-tracking data and surprisal are reached by GPT2-xl (mean correlation over the eye-tracking features on the three datasets: 0.40). These results led us to select as features for the regression model the cosine similarity between vectors computed by BERT 22 CLS and the surprisal computed by GPT2-xl. We also tested the cosine similarity between vectors computed by GPT2-xl, to have a comparison with a regression model whose features are produced by the same model. While performing regression model selection comparing 9 models from Scikit-learn, we also tried different combinations of features. Table 2 shows the best 3 combinations of features and models, compared with the baseline created taking into account word frequency and word length only. The lowest MAEs for each eye-tracking feature were reached by a Gradient Boosting Regressor (GBR) using both the cosine similarity between vectors produced by BERT and the surprisal computed by GPT2-xl. The average MAE using the GBR model with BERT cosine and GPT2-xl surprisal was 4.22 (mean improvement over the baseline = 0.54), with one feature, fixProp, producing a MAE value significantly higher than the other eye-tracking features. Since fixProp is \"the proportion of participants that fixated the current word\" (i.e., the probability of the word being fixated), we hypothesized that the combination of phenomena influencing the likelihood of fixating a word could be captured by the other 4 eye-tracking features, making them in turn good predictors of fixProp.",
"cite_spans": [],
"ref_spans": [
{
"start": 856,
"end": 863,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "Therefore, we tested again the 9 regression models with Scikit-learn, this time using nFix, FFD, TRT, GPT, word lenght and word frequency as features, in every possible permutation (one per time, pairs of features, etc.). A lower MAE on fixProp on training data has been obtained using a Random Forest method with nFix, TRT, and GPT, reaching a MAE of 3.15.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "The improvements of the final model over the baseline suggest that the information conveyed by the cosine similarity and the surprisal contributes in modeling the cognitive processing beneath reading. Our results are consistent with Pynte et al. (2008) and Mitchell et al. (2010) findings about the relation between cosine similarity and eye movements data, as well as with Hale (2001) and Levy (2008) , who found surprisal to be useful in predicting reading times. Anyway, our model performance shows that taking into account both the computational measures benefits the modeling. Even if Frank (2017) rises an interesting issue about the interplay between the information included in word embeddings and the one provided by the suprisal computed by language models, our results keep us from fully agree with his observations: since the joined model performed better that the ones taking into account only cosine similarity or only surprisal, it is obvious that the two measures convey exclusive and useful information, even if it is more than plausible that they share some kind of information to some extent.",
"cite_spans": [
{
"start": 233,
"end": 252,
"text": "Pynte et al. (2008)",
"ref_id": "BIBREF19"
},
{
"start": 257,
"end": 279,
"text": "Mitchell et al. (2010)",
"ref_id": "BIBREF16"
},
{
"start": 374,
"end": 385,
"text": "Hale (2001)",
"ref_id": "BIBREF6"
},
{
"start": 390,
"end": 401,
"text": "Levy (2008)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "In summary, we used a two-step approach: i.) the final model to predict nFix, FFD, GPT, and TRT in test data was a Gradient Boosting Regressor having as features the cosine similarity between the CLS vector (BERT) and the target word embedding, GPT2-xl surprisal, word length and word frequency; ii.) the predicted values of nFix, GPT, and TRT were used in a Random Forest to predict fixProp.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "The shared task final results over the test data, revealed that our model had an average MAE of 4.3877 over all eye tracking features (the baseline was 7.3699, while the best model reached a MAE of 3.8134).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "In this paper we described the system we proposed in the CMCL2021 \"Shared Task: Predicting human reading patterns\". We were required to create a model capable of predicting number of fixations, first fixation duration, total reading time, go-past time, and fixation proportion of each word in the ZuCo dataset. We proposed a regression model using word length and word frequency, combined with two elements that are proved to influence reading processing: the semantic coherence and the predictability of a word within the context. To compute these last two regression features we used the cosine similarity between the vector representing the context and the word embedding of the target word, and the surprisal computed by Language Models, respectively. We selected the models to produce the vectors and to compute the surprisal calculating the Spearman correlation between the cosine similarity and the eye tracking data, and between the surprisal and the same data. We then used the best cosine similarity and surprisal within a regression model, selected among 9 possible models. Our results outperformed the baseline, with a average MAE among eye tracking features just 0.5743 higher than the best model in the competition. Our model may be improved exploring new types of regressors and word embeddings, and including new textual features such as sentence length and information regarding words immediately preceding the target ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Using https://github.com/IlyaSemenov/wikipedia-wordfrequency",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Enriching Word Vectors with Subword Information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching Word Vectors with Subword Information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Extracting Semantic Representations from Word Co-Occurrence Statistics",
"authors": [
{
"first": "John",
"middle": [
"A"
],
"last": "Bullinaria",
"suffix": ""
},
{
"first": "Joseph",
"middle": [
"P"
],
"last": "Levy",
"suffix": ""
}
],
"year": 2012,
"venue": "Stop-Lists, Stemming, and SVD. Behavior Research Methods",
"volume": "44",
"issue": "",
"pages": "890--907",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John A Bullinaria and Joseph P Levy. 2012. Ex- tracting Semantic Representations from Word Co- Occurrence Statistics: Stop-Lists, Stemming, and SVD. Behavior Research Methods, 44(3):890-907.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Presenting GECO: An Eye-Tracking Corpus of Monolingual and Bilingual Sentence Reading",
"authors": [
{
"first": "Uschi",
"middle": [],
"last": "Cop",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Dirix",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Drieghe",
"suffix": ""
},
{
"first": "Wouter",
"middle": [],
"last": "Duyck",
"suffix": ""
}
],
"year": 2017,
"venue": "Behavior Reseach Methods",
"volume": "49",
"issue": "2",
"pages": "602--615",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Uschi Cop, Nicolas Dirix, Denis Drieghe, and Wouter Duyck. 2017. Presenting GECO: An Eye-Tracking Corpus of Monolingual and Bilingual Sentence Reading. Behavior Reseach Methods, 49(2):602- 615.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of NAACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Contextual Effects on Word Perception and Eye Movements During Reading",
"authors": [
{
"first": "Susan",
"middle": [
"E"
],
"last": "Ehrlich",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Rayner",
"suffix": ""
}
],
"year": 1981,
"venue": "Journal of Verbal Learning and Verbal Behavior",
"volume": "20",
"issue": "",
"pages": "641--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Susan E. Ehrlich and Keith Rayner. 1981. Contex- tual Effects on Word Perception and Eye Movements During Reading. Journal of Verbal Learning and Verbal Behavior, 20:641-65.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Word Embedding Distance Does not Predict Word Reading Time",
"authors": [
{
"first": "Stefan",
"middle": [
"L"
],
"last": "Frank",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of CogSci",
"volume": "",
"issue": "",
"pages": "385--390",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan L Frank. 2017. Word Embedding Distance Does not Predict Word Reading Time. In Proceedings of CogSci, pages 385-390.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Probabilistic Earley Parser as a Psycholinguistic Model",
"authors": [
{
"first": "John",
"middle": [],
"last": "Hale",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Hale. 2001. A Probabilistic Earley Parser as a Psycholinguistic Model. In Proceedings of NAACL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Cmcl 2021 shared task on eye-tracking prediction",
"authors": [
{
"first": "Nora",
"middle": [],
"last": "Hollenstein",
"suffix": ""
},
{
"first": "Emmanuele",
"middle": [],
"last": "Chersoni",
"suffix": ""
},
{
"first": "Cassandra",
"middle": [],
"last": "Jacobs",
"suffix": ""
},
{
"first": "Yohei",
"middle": [],
"last": "Oseki",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Pr\u00e9vot",
"suffix": ""
},
{
"first": "Enrico",
"middle": [],
"last": "Santus",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nora Hollenstein, Emmanuele Chersoni, Cassandra Ja- cobs, Yohei Oseki, Laurent Pr\u00e9vot, and Enrico San- tus. 2021. Cmcl 2021 shared task on eye-tracking prediction. In Proceedings of the Workshop on Cog- nitive Modeling and Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "CogniVal: A Framework for Cognitive Word Embedding Evaluation",
"authors": [
{
"first": "Nora",
"middle": [],
"last": "Hollenstein",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "De La Torre",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Langer",
"suffix": ""
},
{
"first": "Ce",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of CONLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nora Hollenstein, Antonio de la Torre, Nicolas Langer, and Ce Zhang. 2019. CogniVal: A Framework for Cognitive Word Embedding Evaluation. In Proceed- ings of CONLL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Zuco, a simultaneous eeg and eye-tracking resource for natural sentence reading. Scientific Data",
"authors": [
{
"first": "Nora",
"middle": [],
"last": "Hollenstein",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Rotsztejn",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Troendle",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Pedroni",
"suffix": ""
},
{
"first": "Ce",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Langer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nora Hollenstein, Jonathan Rotsztejn, Marius Troen- dle, Andreas Pedroni, Ce Zhang, and Nicolas Langer. 2018. Zuco, a simultaneous eeg and eye-tracking re- source for natural sentence reading. Scientific Data, 5.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Length, frequency, and predictability effects of words on eye movements in reading",
"authors": [
{
"first": "Reinhold",
"middle": [],
"last": "Kliegl",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Grabner",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Rolfs",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Engbert",
"suffix": ""
}
],
"year": 2004,
"venue": "European Journal of Cognitive Psychology -EUR J COGN PSYCHOL",
"volume": "16",
"issue": "",
"pages": "262--284",
"other_ids": {
"DOI": [
"10.1080/09541440340000213"
]
},
"num": null,
"urls": [],
"raw_text": "Reinhold Kliegl, Ellen Grabner, Martin Rolfs, and Ralf Engbert. 2004. Length, frequency, and predictabil- ity effects of words on eye movements in reading. European Journal of Cognitive Psychology -EUR J COGN PSYCHOL, 16:262-284.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Distributional Models of Word Meaning",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Lenci",
"suffix": ""
}
],
"year": 2018,
"venue": "Annual Review of Linguistics",
"volume": "4",
"issue": "",
"pages": "151--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Lenci. 2018. Distributional Models of Word Meaning. Annual Review of Linguistics, 4:151-171.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Dependency-Based Word Embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014. Dependency- Based Word Embeddings. In Proceedings of ACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Expectation-based Syntactic Comprehension",
"authors": [
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2008,
"venue": "Cognition",
"volume": "106",
"issue": "3",
"pages": "1126--1177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roger Levy. 2008. Expectation-based Syntactic Com- prehension. Cognition, 106(3):1126-1177.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The Provo Corpus: A Large Eye-tracking Corpus with Predictability Norms",
"authors": [
{
"first": "Steven",
"middle": [
"G"
],
"last": "Luke",
"suffix": ""
},
{
"first": "Kiel",
"middle": [],
"last": "Christianson",
"suffix": ""
}
],
"year": 2018,
"venue": "Behavior Research Methods",
"volume": "50",
"issue": "2",
"pages": "826--833",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven G Luke and Kiel Christianson. 2018. The Provo Corpus: A Large Eye-tracking Corpus with Predictability Norms. Behavior Research Methods, 50(2):826-833.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Efficient Estimation of Word Representations in Vector Space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Syntactic and Semantic Factors in Processing Difficulty: An Integrated Measure",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "Vera",
"middle": [],
"last": "Demberg",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeff Mitchell, Mirella Lapata, Vera Demberg, and Frank Keller. 2010. Syntactic and Semantic Factors in Processing Difficulty: An Integrated Measure. In Proceedings of ACL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Glove: Global Vectors for Word Representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global Vectors for Word Representation. In Proceedings of EMNLP.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Deep Contextualized Word Representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Rep- resentations. In Proceedings of NAACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Online Contextual Influences During Reading Normal Text: A Multiple-Regression Analysis. Vision research",
"authors": [
{
"first": "Joel",
"middle": [],
"last": "Pynte",
"suffix": ""
},
{
"first": "Boris",
"middle": [],
"last": "New",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Kennedy",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "48",
"issue": "",
"pages": "2172--2183",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joel Pynte, Boris New, and Alan Kennedy. 2008. On- line Contextual Influences During Reading Normal Text: A Multiple-Regression Analysis. Vision re- search, 48(21):2172-2183.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Language Models are Unsupervised Multitask Learners",
"authors": [
{
"first": "A",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Radford, Jeffrey Wu, R. Child, David Luan, Dario Amodei, and Ilya Sutskever. Language Models are Unsupervised Multitask Learners. In Open-AI Blog.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A Neural Model of Adaptation in Reading",
"authors": [
{
"first": "Marten",
"middle": [],
"last": "Van Schijndel",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marten van Schijndel and Tal Linzen. 2018. A Neural Model of Adaptation in Reading. In Proceedings of EMNLP.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "The\"] S[1] = [\"The dog\"] S[2] = [\"The dog chases\"] S[3] = [\"The dog chases the\"] S[4] = [\"The dog chases the cat\"]",
"uris": null,
"num": null
},
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td>GPT2-xl</td><td>Pretrained GPT2-xl embeddings on WebText</td></tr><tr><td>Neural Complexity</td><td>Pretrained Neural Complexity embeddings on Wikipedia</td></tr></table>",
"num": null,
"text": "ModelHyperparameters Non-contextualized DSMs SVD.w2count DSM with 345K window-selected context words, window of width 2, reduced with SVD SVD.synt count DSM with 345K syntactically typed context words reduced with SVD GloVe count DSM with context window of width 2, reduced with log-bilinear regression SGNS.w2Skip-gram with negative sampling, context window of width 2, 15 negative examples SGNS.synt Skip-gram with negative sampling, syntactically-typed context words, 15 negative examples FastText Skip-gram with subword information, context window of width 2, 15 negative examples Contextualized DSMs ELMo Pretrained ELMo embeddings on the 1 Billion Word Benchmark BERT Pretrained BERT-Large embeddings on the concatenation of the Books corpus and Wikipedia",
"html": null
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "List of the embedding models used for the study, together with their hyperparameter settings.",
"html": null
},
"TABREF3": {
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Best three MAEs for each eye-tracking feature + baseline.",
"html": null
}
}
}
}