{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:17:07.972898Z"
},
"title": "MTL782_IITD at CMCL 2021 Shared Task: Prediction of Eye-Tracking Features Using BERT Embeddings and Linguistic Features",
"authors": [
{
"first": "Shivani",
"middle": [],
"last": "Choudhary",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Delhi Hauz Khas",
"location": {
"settlement": "Delhi-110016",
"country": "India"
}
},
"email": "shivani@sire.iitd.ac.in"
},
{
"first": "Kushagri",
"middle": [],
"last": "Tandon",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Delhi Hauz Khas",
"location": {
"settlement": "Delhi-110016",
"country": "India"
}
},
"email": ""
},
{
"first": "Raksha",
"middle": [],
"last": "Agarwal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Delhi Hauz Khas",
"location": {
"settlement": "Delhi-110016",
"country": "India"
}
},
"email": ""
},
{
"first": "Niladri",
"middle": [],
"last": "Chatterjee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology Delhi Hauz Khas",
"location": {
"settlement": "Delhi-110016",
"country": "India"
}
},
"email": "niladri@maths.iitd.ac.in"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Reading and comprehension are quintessentially cognitive tasks. Eye movement acts as a surrogate to understand which part of a sentence is critical to the process of comprehension. The aim of the shared task is to predict five eye-tracking features for a given word of the input sentence. We experimented with several models based on LGBM (Light Gradient Boosting Machine) Regression, ANN (Artificial Neural Network) and CNN (Convolutional Neural Network), using BERT embeddings and some combination of linguistic features. Our submission using CNN achieved an average MAE of 4.0639 and ranked 7th in the shared task. The average MAE was further lowered to 3.994 in post task evaluation.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Reading and comprehension are quintessentially cognitive tasks. Eye movement acts as a surrogate to understand which part of a sentence is critical to the process of comprehension. The aim of the shared task is to predict five eye-tracking features for a given word of the input sentence. We experimented with several models based on LGBM (Light Gradient Boosting Machine) Regression, ANN (Artificial Neural Network) and CNN (Convolutional Neural Network), using BERT embeddings and some combination of linguistic features. Our submission using CNN achieved an average MAE of 4.0639 and ranked 7th in the shared task. The average MAE was further lowered to 3.994 in post task evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Eye tracking data gauged during the process of natural and comprehensive reading can be an outset to understand which part of the sentence demands more attention. The main objective of the present experiment is to understand the factors responsible for determining how we perceive and process languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The CMCL-2021 shared task (Hollenstein et al., 2021) focuses on predicting the eye-tracking metrics for a word. The goal of the task is to train a predictive model for five eye-tracking feature values namely, nFix (Number of fixations), FFD (First fixation duration), TRT (Total reading time), GPT (Go past time), and fixProp (fixation proportion) for a given word of a sentence (Hollenstein et al., 2018; Inhoff et al., 2005) . Here, nFix is the total number of fixations on the current word, FFD is the duration of the first fixation on the prevailing word, TRT is the sum of all fixation durations on the current word including regressions, GPT is the sum of all fixations prior to progressing to the right of the current word, including regressions to previous words that originated from the current word and fixProp is the proportion of the participants who fixated on the current word. With respect to eye-tracking data, regression refers to the backward movement of the eye required to reprocess the information in the text (Eskenazi and Folk, 2017) .",
"cite_spans": [
{
"start": 26,
"end": 52,
"text": "(Hollenstein et al., 2021)",
"ref_id": null
},
{
"start": 379,
"end": 405,
"text": "(Hollenstein et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 406,
"end": 426,
"text": "Inhoff et al., 2005)",
"ref_id": "BIBREF8"
},
{
"start": 1031,
"end": 1056,
"text": "(Eskenazi and Folk, 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work we have experimented with two broad categories of models: regessor based and neural networks based. Among the regressor based models, we tried with Catboost, XGboost, Light Gradient Boosting Machine (LGBM) among others. Among the Neural Network based models we have used both ANN and CNN. LGBM gave the best results among the regressor based models. CNN produced lowest MAE between CNN and ANN. In this paper we discuss the best models of each type and their corresponding parameters in detail.",
"cite_spans": [
{
"start": 289,
"end": 301,
"text": "ANN and CNN.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is divided into the following sections: Section 2 describes some details of the dataset used for the experiments. In Section 3 we discuss the data preparation approaches for feature extraction. Model details are presented in Section 4, and Section 5 presents analysis of the results. Section 6 concludes the paper. The code for the proposed system is available at https: //github.com/shivaniiitd/Eye_tracking",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The present task uses the eye-tracking data of the Zurich Cognitive Language Processing Corpus (ZuCo 1.0 and ZuCo 2.0) (Hollenstein et al., 2018 (Hollenstein et al., , 2020 . The dataset is divided into two subsets Train, and Test. The data statistics are presented in Table 1 . The data was arranged according to the sentence_id and word_id. The Train data set contained the values of nFix, GPT, FFD, TRT and fixProp for each word of the input sentences. We used the first 100 sentences from the Train data for validation purposes. It is important to identify the features that provide essential visual and cognitive cues about each word which in turn govern the corresponding various eye-tracking metrics for the word. In the present work we have used BERT embeddings along with linguistic features (Agarwal et al., 2020) to train the predictive models. Mean Absolute Error (MAE) was used for measuring the performance of the proposed systems for the shared task.",
"cite_spans": [
{
"start": 119,
"end": 144,
"text": "(Hollenstein et al., 2018",
"ref_id": "BIBREF5"
},
{
"start": 145,
"end": 172,
"text": "(Hollenstein et al., , 2020",
"ref_id": "BIBREF6"
},
{
"start": 801,
"end": 823,
"text": "(Agarwal et al., 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 269,
"end": 276,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dataset Description",
"sec_num": "2"
},
{
"text": "Before feature extraction, the following preprocessing steps were performed:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Description",
"sec_num": "2"
},
{
"text": "\u2022 The <EOS> tag and extra white spaces were stripped from the end of the words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Description",
"sec_num": "2"
},
{
"text": "\u2022 Sentences were created by sequentially joining the words having the same sentence_id.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Description",
"sec_num": "2"
},
{
"text": "\u2022 Additionally, for CNN and ANN models punctuations were removed from the input word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Description",
"sec_num": "2"
},
{
"text": "Initially the essential token-level attributes were extracted as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.1"
},
{
"text": "1. Syllables: The number of syllables in a token determines its pronunciation. The sentences were tokenized using the spaCy (Honnibal et al., 2020) , and the syllables 1 package was used to calculate the number of syllables in each token.",
"cite_spans": [
{
"start": 124,
"end": 147,
"text": "(Honnibal et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.1"
},
{
"text": "2. BERT Embeddings: The Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019) embeddings are contextualized word representations. We have considered the average of the embeddings from the last four hidden layers. The py-torch_pretrained _bert 2 uncased embeddings have been used to extract this feature for each token.",
"cite_spans": [
{
"start": 87,
"end": 108,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.1"
},
{
"text": "The above-mentioned features are extracted token-wise but in the training set some input words (which includes both singleton tokens and hyphenated phrases) contained more than one token, e.g. 'seventh-grade'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.1"
},
{
"text": "The final attributes that were used for the LGBM models according to each input word are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.1"
},
{
"text": "\u2022 BERT Embeddings: BERT embeddings for the input word is calculated by averaging the embeddings over all the tokens that make up the word, extracted using the BertTokenizer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.1"
},
{
"text": "\u2022 Syllables: For extracting the syllables for each input word , we sum the number of syllables over all the tokens in that word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.1"
},
{
"text": "\u2022 Word_id: This feature was supplied in the dataset. It indicates the position of each word or phrase in the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.1"
},
{
"text": "\u2022 Word_length: The total number of characters present in each input word or phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.1"
},
{
"text": "Some additional features, such as POS tag, detailed tag, NER tag, dependency label and a Boolean value to indicate whether a token is present in the list of standard English stopwords or not, were also considered. However, these features have not been incorporated in the final models as these features failed to improve the models' performances. To get the values of these features for the input words, the properties of the last token in the input word are used, unless it is a punctuation. In that case the properties of the token before the punctuation are used. To account for the above, two additional features were considered: (a) a binary feature (HasHyphen) to indicate whether the phrase contains a hyphen or not;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.1"
},
{
"text": "(b) the number of punctuation (NumPunct) in the phrase;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.1"
},
{
"text": "For illustration, for the input phrase 'Brandenburg-Kulmbach,' the feature HasHyphen is 1 and NumPunct is 2, and for the other features mentioned above, the token 'Kulmbach' was used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.1"
},
{
"text": "In this section we present the details of the three predictive machine learning regression models namely, LGBM, ANN and CNN. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Models",
"sec_num": "4"
},
{
"text": "LGBM is a Gradient Boosting Decision Tree (GBDT) algorithm which uses two novel techniques: Gradient-based One-Side Sampling (GOSS) and Exclusive Feature Bundling (EFB) to deal with a large number of data instances and large number of features respectively (Ke et al., 2017) . GOSS keeps all the data instances in the GBDT with large gradients and performs random sampling on the instances with small gradients. The sparsity of feature space in high dimensional data provides a possibility to design a nearly lossless approach to reduce the number of features. Many features in a sparse feature are mutually exclusive. These exclusive features are bundled into a single feature (called an exclusive feature bundle).",
"cite_spans": [
{
"start": 257,
"end": 274,
"text": "(Ke et al., 2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LGBM Model",
"sec_num": "4.1"
},
{
"text": "Five LGBM Regressor models from the Light-GBM python package 3 were trained and tuned on varied feature spaces. These models were trained with BERT Embeddings which is present in all models as a feature, along with different combinations of linguistic features, namely, Word_id, Word_length, and Syllables.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LGBM Model",
"sec_num": "4.1"
},
{
"text": "In the context of the given problem, the following hyperparameters were tuned,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LGBM Model",
"sec_num": "4.1"
},
{
"text": "\u2022 lambda_l1 (\u03bb 1 ): It is the L1 regularization parameter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LGBM Model",
"sec_num": "4.1"
},
{
"text": "\u2022 lambda_l2 (\u03bb 2 ): It is the L2 regularization parameter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LGBM Model",
"sec_num": "4.1"
},
{
"text": "\u2022 num_leaves (NL): This is the main parameter to control the complexity of the tree model, and governs the leaf-wise growth of the tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LGBM Model",
"sec_num": "4.1"
},
{
"text": "3 https://github.com/microsoft/LightGBM The hyperparameters, namely \u03bb 1 , \u03bb 2 , and NL, the overall model MAE (Avg_MAE) calculated as average of the MAEs corresponding to each eye-tracking metric, and the individual MAE corresponding to each eye tracking metric evaluated on the test sets are described in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 306,
"end": 313,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "LGBM Model",
"sec_num": "4.1"
},
{
"text": "We have applied a seven layer deep ANN for the shared task. First hidden layer has 1024 neurons, followed by 4 hidden layers of sizes 512, 256, 64 and 16 respectively. The output layer is of size 1. For each of the five eye-tracking features, we have trained separate Neural Networks. The ANN is implemented using Keras with tensorflow backend (Chollet et al., 2015) . Adam optimizer (Kingma and Ba, 2017) is used to minimize the loss function (MAE). Rectified linear unit (ReLU) activation is applied on the dense layers. Hyperparameter tuning detail is presented in Section 5. The learning rate is set to decay at a rate of e \u22120.1 after 15 epochs. Dropout layers with dropout rate of 0.2 was placed after the first three hidden layers.",
"cite_spans": [
{
"start": 344,
"end": 366,
"text": "(Chollet et al., 2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Artificial Neural Network",
"sec_num": "4.2"
},
{
"text": "The proposed CNN model has been implemented with the following configuration. In order to capture the contextual information from the sentence, we have used a context window of size K. We split the whole sentence around that word with a sliding window of length K. We named two matrices as left and right context matrix, formed with preceding and succeeding K-1 words, respectively. If the number of words available for the sliding window is less than K then K-r rows are padded with zero, at the start for the left context matrix, and at the end for the right context matrix. We have conducted experiments for values of K in the set {1, 2, 5, 10, 11, 12}. The best results were obtained for K=10. The left and right context matrices are fed into two different branches of convolutional layers. The left branch has two convolutions with filter sizes 3 \u00d7 3 with ReLU, and 5 \u00d7 5 without ReLU in two separate branches. For further processing outputs from both the branches are concatenated. In the right branch, two convolution layer with 3 \u00d7 3 filter with ReLU are stacked.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional Neural Network",
"sec_num": "4.3"
},
{
"text": "Batch Normalization and ReLU activation are applied on the output of convolutional layers, followed by a pooling layer. The outputs of both the branches are fed into two separate convolutional layers with filter size 64 and kernel size 3 \u00d7 3, followed by two max pooling / average pooling layers with kernel size 2 \u00d7 2. Average pooling has generated the best results. The outputs of the two branches are flattened to obtain two tensors. The resulting tensors are averaged, and this acts as the input to seven fully connected layers with sizes 2048, 1024, 512, 64, 32, 16 and 1, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional Neural Network",
"sec_num": "4.3"
},
{
"text": "The padding used in the convolutional layer is 'same' which keeps the input and output dimension equal. For each of the five eye-tracking features, we have trained separate Neural Networks. The model was trained with loss function MAE, batch size of 32 and Adam optimizer. ReLU activation function is used for the fully connected layers except the output layer. The learning rate is set to decay at a rate of e \u22120.1 after 15 epochs. The network has a dropout rate of 0.2 on the CNN layers and between the fully connected layers of sizes 2048, 1024, 512, and 64. Hyperparameter tuning details are described below. Table 4 and Table 5 , respectively. DR in ANN was limited to 0.4 since higher value will leave very few connections. The maximum number of trials was set to 20. Pooling method variation was controlled manually. CNN models with Average pooling and NF 64 produced the lowest MAE. Additional experiments were conducted on CNN with feature set word_id, word_length and BERT was analysed for fine dropout rate of 0.42, 0.44 and 0.46 and higher batch size of 256. Learning rate was reduced by 0.2 using Keras callback API ReduceLROnPlateau. EarlyStopping was used to stop the training process if the validation loss stops decreasing. ",
"cite_spans": [],
"ref_spans": [
{
"start": 613,
"end": 632,
"text": "Table 4 and Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Convolutional Neural Network",
"sec_num": "4.3"
},
{
"text": "The comparison among top four performing models, ranked according to MAE, is presented in Table 6. CNN models with the feature space Word_id + Length + BERT , as described in Section 3.1 performed the best with MAE 3.99. It has been observed that Word_id, Length and the BERT embeddings are all present in the feature space of the best performing models, hence these features play an important role in the determination of the eye-tracking metrics. Although, addition of Syllables to the feature space of the LGBM Model did not decrease the MAE corresponding to nFix and FFD. In case of CNN, inclusion of Syllables decreased the MAE corresponding to fixProp. The best result with feature set POS+word_len+word_id+BERT was generated by CNN with an MAE of 4.07. Removal of POS tags as a feature lead to improvement in FFD and TRT however, the overall performance decreased.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5"
},
{
"text": "The LGBM Models give the best results corresponding to nFix, FFD and fixProp among the top 4 best performing models, While CNN based model performed the best on Avg_MAE and GPT. As we observe in Table 2 , in most of the cases, the removal of Word_id and Length led to a decline in the systems' performance. It is also observed that the complex structure of Neural Networks fail to model some of the features in comparison with LGBM model. These experiments also indicate that the feature space for individual eye-tracking features may be curated separately to achieve a better accuracy.",
"cite_spans": [],
"ref_spans": [
{
"start": 195,
"end": 202,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5"
},
{
"text": "The aim of the present work is to develop a predictive model for five eye-tracking features. Experiments were conducted using LGBM, ANN and CNN models trained on a feature space consisting of pre-trained BERT embeddings and linguistic features namely, number of syllables, POS tag, Word length and Word_id. The discussed CNN Models achieved the best performance with respect to the test data. Experiments for studying the impor-tance of individual features indicate that POS tag has the lowest impact on the overall MAE, with respect to the CNN Models and that the addition of Syllables to the feature space in LGBM models does not improve the overall performance of the system. It is further observed that individual linguistic features lead to a varied effect on different eye-tracking metrics. Separate tuning of hyperparameters and feature space corresponding to the LGBM and Neural Network based model, for each eye-tracking metric, can improve the overall system performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Even though CNN architecture is more complex, but with the same set of features the LGBM regressor gave almost same results. Currently, we did not perform a rigorous hyperparameter tuning which may be taken up in future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://github.com/prosegrinder/python-syllables 2 https://github.com/huggingface/transformers",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://hyperopt.github.io/hyperopt/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Shivani Choudhary acknowledges the support of DST INSPIRE, Department of Science and Technology, Government of India.Raksha Agarwal acknowledges Council of Scientific and Industrial Research (CSIR), India for supporting the research under Grant no: SPM-06/086(0267)/2018-EMR-I. The authors thank Google Colab for the free GPU based instance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "LangResearchLab_NC at FinCausal 2020, task 1: A knowledge induced neural net for causality detection",
"authors": [
{
"first": "Raksha",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Ishaan",
"middle": [],
"last": "Verma",
"suffix": ""
},
{
"first": "Niladri",
"middle": [],
"last": "Chatterjee",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation",
"volume": "",
"issue": "",
"pages": "33--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raksha Agarwal, Ishaan Verma, and Niladri Chatterjee. 2020. LangResearchLab_NC at FinCausal 2020, task 1: A knowledge induced neural net for causality detection. In Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation, pages 33-39, Barcelona, Spain (Online). COLING.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Regressions during reading: The cost depends on the cause",
"authors": [
{
"first": "A",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Jocelyn",
"middle": [
"R"
],
"last": "Eskenazi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Folk",
"suffix": ""
}
],
"year": 2017,
"venue": "Psychonomic bulletin & review",
"volume": "24",
"issue": "4",
"pages": "1211--1216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael A Eskenazi and Jocelyn R Folk. 2017. Regres- sions during reading: The cost depends on the cause. Psychonomic bulletin & review, 24(4):1211-1216.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Cmcl 2021 shared task on eye-tracking prediction",
"authors": [
{
"first": "Nora",
"middle": [],
"last": "Hollenstein",
"suffix": ""
},
{
"first": "Emmanuele",
"middle": [],
"last": "Chersoni",
"suffix": ""
},
{
"first": "Cassandra",
"middle": [],
"last": "Jacobs",
"suffix": ""
},
{
"first": "Yohei",
"middle": [],
"last": "Oseki",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Pr\u00e9vot",
"suffix": ""
},
{
"first": "Enrico",
"middle": [],
"last": "Santus",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nora Hollenstein, Emmanuele Chersoni, Cassandra Ja- cobs, Yohei Oseki, and Enrico Pr\u00e9vot, Laurent San- tus. 2021. Cmcl 2021 shared task on eye-tracking prediction. In Proceedings of the Workshop on Cog- nitive Modeling and Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Data descriptor: ZuCo, a simultaneous EEG and eye-tracking resource for natural sentence reading",
"authors": [
{
"first": "Nora",
"middle": [],
"last": "Hollenstein",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Rotsztejn",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Troendle",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Pedroni",
"suffix": ""
},
{
"first": "Ce",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Langer",
"suffix": ""
}
],
"year": 2018,
"venue": "Sci. Data",
"volume": "5",
"issue": "1",
"pages": "1--13",
"other_ids": {
"DOI": [
"10.1038/sdata.2018.291"
]
},
"num": null,
"urls": [],
"raw_text": "Nora Hollenstein, Jonathan Rotsztejn, Marius Troen- dle, Andreas Pedroni, Ce Zhang, and Nicolas Langer. 2018. Data descriptor: ZuCo, a simultaneous EEG and eye-tracking resource for natural sentence read- ing. Sci. Data, 5(1):1-13.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "ZuCo 2.0: A dataset of physiological recordings during natural reading and annotation",
"authors": [
{
"first": "Nora",
"middle": [],
"last": "Hollenstein",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Troendle",
"suffix": ""
},
{
"first": "Ce",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Langer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "138--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nora Hollenstein, Marius Troendle, Ce Zhang, and Nicolas Langer. 2020. ZuCo 2.0: A dataset of phys- iological recordings during natural reading and an- notation. In Proceedings of the 12th Language Re- sources and Evaluation Conference, pages 138-146, Marseille, France. European Language Resources Association.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "spaCy: Industrial-strength Natural Language Processing in Python",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.5281/zenodo.1212303"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Honnibal, Ines Montani, Sofie Van Lan- deghem, and Adriane Boyd. 2020. spaCy: Industrial-strength Natural Language Processing in Python.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Time course of linguistic information extraction from consecutive words during eye fixations in reading",
"authors": [
{
"first": "Albrecht",
"middle": [
"W"
],
"last": "Inhoff",
"suffix": ""
},
{
"first": "Brianna",
"middle": [
"M"
],
"last": "Eiter",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Radach",
"suffix": ""
}
],
"year": 2005,
"venue": "J. Exp. Psychol. Hum. Percept. Perform",
"volume": "31",
"issue": "5",
"pages": "979--995",
"other_ids": {
"DOI": [
"10.1037/0096-1523.31.5.979"
]
},
"num": null,
"urls": [],
"raw_text": "Albrecht W. Inhoff, Brianna M. Eiter, and Ralph Radach. 2005. Time course of linguistic informa- tion extraction from consecutive words during eye fixations in reading. J. Exp. Psychol. Hum. Percept. Perform., 31(5):979-995.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Lightgbm: A highly efficient gradient boosting decision tree",
"authors": [
{
"first": "Guolin",
"middle": [],
"last": "Ke",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Finley",
"suffix": ""
},
{
"first": "Taifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Weidong",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Qiwei",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17",
"volume": "",
"issue": "",
"pages": "3149--3157",
"other_ids": {
"DOI": [
"10.5555/3294996.3295074"
]
},
"num": null,
"urls": [],
"raw_text": "Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. 2017. Lightgbm: A highly efficient gradient boost- ing decision tree. In Proceedings of the 31st Interna- tional Conference on Neural Information Processing Systems, NIPS'17, page 3149-3157, Red Hook, NY, USA. Curran Associates Inc.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2017. Adam: A method for stochastic optimization.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"html": null,
"num": null,
"content": "<table><tr><td>: Data statistics</td></tr><tr><td>3 Data Pre-processing and Feature</td></tr><tr><td>Selection</td></tr></table>",
"type_str": "table",
"text": ""
},
"TABREF3": {
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table",
"text": ""
},
"TABREF5": {
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "CNN model's performance"
},
"TABREF7": {
"html": null,
"num": null,
"content": "<table><tr><td>Parameters</td><td>Range</td></tr><tr><td>BS</td><td>[16, 32, 64]</td></tr><tr><td>LR</td><td>[1e-3, 1e-4, 1e-5]</td></tr><tr><td>DR</td><td>[0, 0.1, 0.2, 0.3, 0.4]</td></tr></table>",
"type_str": "table",
"text": "CNN Hyperparameter details"
},
"TABREF8": {
"html": null,
"num": null,
"content": "<table><tr><td>: ANN Hyperparameter details</td></tr><tr><td>4.4 Hyperparameter Tuning</td></tr><tr><td>Hyperparameter tuning for CNN was performed</td></tr><tr><td>on Learning Rate (LR), Batch Size (BS), Dropout</td></tr><tr><td>Rate (DR) and Number of Filters (NF) while ANN</td></tr><tr><td>hyperparamter tunning was performed on learning</td></tr><tr><td>rate, batch size, dropout rate. Hyperopt 4 library</td></tr><tr><td>was used for grid search. For CNN and ANN the</td></tr><tr><td>range of values for grid search parameters are pre-</td></tr><tr><td>sented in</td></tr></table>",
"type_str": "table",
"text": ""
},
"TABREF10": {
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Analysis of the best performing models"
}
}
}
}