{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:10:42.310344Z"
},
"title": "Towards best practices for leveraging human language processing signals for natural language processing",
"authors": [
{
"first": "Nora",
"middle": [],
"last": "Hollenstein",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ETH Zurich",
"location": {}
},
"email": "noraho@ethz.ch"
},
{
"first": "Maria",
"middle": [],
"last": "Barrett",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Copenhagen",
"location": {}
},
"email": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Beinborn",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Amsterdam",
"location": {}
},
"email": "l.m.beinborn@uva.nl"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "NLP models are imperfect and lack intricate capabilities that humans access automatically when processing speech or reading a text. Human language processing data can be leveraged to increase the performance of models and to pursue explanatory research for a better understanding of the differences between human and machine language processing. We review recent studies leveraging different types of cognitive processing signals, namely eye-tracking, M/EEG and fMRI data recorded during language understanding. We discuss the role of cognitive data for machine learning-based NLP methods and identify fundamental challenges for processing pipelines. Finally, we propose practical strategies for using these types of cognitive signals to enhance NLP models.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "NLP models are imperfect and lack intricate capabilities that humans access automatically when processing speech or reading a text. Human language processing data can be leveraged to increase the performance of models and to pursue explanatory research for a better understanding of the differences between human and machine language processing. We review recent studies leveraging different types of cognitive processing signals, namely eye-tracking, M/EEG and fMRI data recorded during language understanding. We discuss the role of cognitive data for machine learning-based NLP methods and identify fundamental challenges for processing pipelines. Finally, we propose practical strategies for using these types of cognitive signals to enhance NLP models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Machine learning methods for natural language processing (NLP) are imperfect and still lack the intricate capabilities that humans access automatically when processing speech or reading a text. For instance, humans are able to resolve coreferences and to perform natural language inference, while machine learning methods are not nearly as good (Wang et al., 2019) . Human language processing data can be recorded and used to increase the performance of NLP models and to pursue explanatory research in understanding which \"human-like\" skills our models are still missing. Linking brain activity and machine learning can increase our understanding of the contents of brain representations, and consequently in how to use these representations to understand, improve and evaluate machine learning methods for NLP. Our aim in this paper is to find common patterns and approaches that have been implemented successfully when leveraging human language processing signals for NLP. The main objective is to guide researchers when navigating the challenges that are unavoidable when working with cognitive data sources. In recent years, an increasing number of studies using human language processing for improving and evaluating NLP models have emerged. However, consistent practices in pre-processing, feature extraction, and using the human data in the models have not yet been established. Physiological and neuroimaging data is inherently noisy and may also be subject to idiosyncrasy, which makes it more difficult to effectively apply machine learning algorithms. For example, in eye-tracking, an extended fixation duration indicates more complex cognitive processing, but it is not obvious which process is occurring. 
Brain imaging signals help to better locate cognitive processes in the brain, but it is difficult to disentangle the signal pertinent to the task of interest from the noise related to other cognitive processes which are irrelevant for language processing (e.g., motor control, vision, etc.) . In this paper, we review recent NLP studies leveraging different types of human language processing signals, namely eye-tracking, electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI) recorded during language understanding. We discuss the role of cognitive data for machine learning-based NLP methods and identify fundamental challenges for processing pipelines. Based on this discussion, we propose practical strategies for using these types of cognitive signals to augment NLP models. Finally, we explore the ethical considerations of working with human data in NLP.",
"cite_spans": [
{
"start": 345,
"end": 364,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF114"
},
{
"start": 1963,
"end": 2009,
"text": "processing (e.g., motor control, vision, etc.)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In this section, we introduce eye-tracking, EEG, MEG and fMRI as recording techniques of cognitive signals. We describe the technical details and methodological challenges for each technique and discuss how the signals have been used to improve NLP models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cognitive signals",
"sec_num": "2."
},
{
"text": "Eye-tracking signals are recorded with a device that tracks the eye movements in a non-intrusive way, most commonly using infra-red light and a camera. Depending on the sampling rate of the recording device, it provides very finegrained temporal records of one or both eyes. When a skilled reader reads, the eyes move rapidly from one word to the next, sequentially fixating through the text. Some words are not fixated at all due to an intricate interplay of preview and predictability effects, and some words are fixated several times due to factors such as syntactic reanalysis. The fact that some words are fixated several times makes it possible to study several stages of linguistic cognitive processing. Early gaze measures capture lexical access and early syntactic processing and are based on the first time a word is fixated. Late measures reflect the late syntactic (re-)processing and general disambiguation. These features occur in words that are fixated more than once. Around 10-15% of the fixations are regressions, where the eye focus jumps back to re-read a part of the text. Each fixation lasts on average around 200 ms, but the variation is large and the duration of each fixation has shown to be reliably linked to many word attributes: syntactic, semantic, and discourse-related. The fixation duration can thus be taken as a proxy for cognitive processing. It is out of the scope of this paper to dig into experimental findings, but Rayner (1998) provides an extensive survey. This psycholinguistic line of research has established a range of eye movement features enabling the study of both early and late cognitive textual processing.",
"cite_spans": [
{
"start": 1455,
"end": 1468,
"text": "Rayner (1998)",
"ref_id": "BIBREF96"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Eye-tracking",
"sec_num": "2.1."
},
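The early and late gaze measures described above can be made concrete with a short sketch. This is an illustrative example, not code from the paper; the fixation input format, field names, and regression heuristic (any jump back to an earlier word index) are assumptions.

```python
# Illustrative sketch: deriving common word-level gaze features from a raw
# fixation sequence. Early measure: first fixation duration; late measures:
# total reading time and number of fixations; plus the overall regression rate.
from collections import defaultdict

def gaze_features(fixations):
    """fixations: list of (word_index, duration_ms) in chronological order.
    Returns (first_fixation, total_time, n_fixations, regression_rate)."""
    first_fix = {}                   # early measure: first fixation duration
    total_time = defaultdict(int)    # late measure: total reading time
    n_fix = defaultdict(int)         # late measure: number of fixations
    regressions = 0
    max_seen = -1
    for idx, dur in fixations:
        if idx not in first_fix:
            first_fix[idx] = dur
        total_time[idx] += dur
        n_fix[idx] += 1
        if idx < max_seen:           # jump back to earlier text = regression
            regressions += 1
        max_seen = max(max_seen, idx)
    reg_rate = regressions / len(fixations) if fixations else 0.0
    return first_fix, dict(total_time), dict(n_fix), reg_rate

# Hypothetical scanpath: word 0 is re-fixated after word 1 (one regression).
ff, tt, nf, rr = gaze_features([(0, 210), (1, 180), (0, 150), (2, 220)])
```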
{
"text": "Eye-tracking signals in NLP Eye movement data has successfully been leveraged to improve a wide range of NLP tasks on several text levels, from part-of-speech tagging (Barrett et al., 2016a) to text summarization (Xu et al., 2009) . Table 1 shows an overview of the earliest references for each NLP task. In NLP, the eye tracking signal can be incorporated into models by using the scanpath which denotes the entire fixation trajectory over a text span. Scanpaths can reveal syntactic re-analysis, text difficulty, and other comprehension problems.",
"cite_spans": [
{
"start": 167,
"end": 190,
"text": "(Barrett et al., 2016a)",
"ref_id": "BIBREF8"
},
{
"start": 213,
"end": 230,
"text": "(Xu et al., 2009)",
"ref_id": "BIBREF120"
}
],
"ref_spans": [
{
"start": 233,
"end": 240,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Eye-tracking",
"sec_num": "2.1."
},
{
"text": "Larger-scale computational approaches include Klerke et al. (2018) , Von der Malsburg and Vasishth (2011), Wallot et al. (2015) . Furthermore, Mishra et al. (2017a) learned the gaze representation in a convolutional neural network directly from the scanpath instead of manually selecting features. This might be a promising approach to increase the amount of gaze data available for training and avoid feature engineering.",
"cite_spans": [
{
"start": 46,
"end": 66,
"text": "Klerke et al. (2018)",
"ref_id": "BIBREF67"
},
{
"start": 107,
"end": 127,
"text": "Wallot et al. (2015)",
"ref_id": "BIBREF113"
},
{
"start": 143,
"end": 164,
"text": "Mishra et al. (2017a)",
"ref_id": "BIBREF86"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Eye-tracking",
"sec_num": "2.1."
},
{
"text": "While low-cost eye-trackers and webcam-based software (e.g., Papoutsaki et al. (2016) ) have recently entered the market, performance evaluations have shown that low cost models have a much higher data loss (Funke et al., 2016) . Dalmaijer (2014) and Gibaldi et al. (2017) find accuracy and precision acceptable but they mention the low sampling rate as a constraint for research. Reading research using eye movements are dependent on high sampling rate and good -not just acceptable -accuracy and precision. While lower precision can be compensated for with larger font sizes and using only the central part of the screen, it does not seem like the current low-cost models are recommendable for reading research due to these factors. Especially when building a large corpus it is worth considering that any validity or reliability loss such as systematic bias (for example, degrading in precision and accuracy towards the periphery of the screen), as well as unsystematic bias (low data quality due to low sampling rate or large data loss), will propagate to all works using this resource.",
"cite_spans": [
{
"start": 61,
"end": 85,
"text": "Papoutsaki et al. (2016)",
"ref_id": "BIBREF91"
},
{
"start": 207,
"end": 227,
"text": "(Funke et al., 2016)",
"ref_id": "BIBREF40"
},
{
"start": 230,
"end": 246,
"text": "Dalmaijer (2014)",
"ref_id": "BIBREF26"
},
{
"start": 251,
"end": 272,
"text": "Gibaldi et al. (2017)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Challenges in recording eye tracking signals",
"sec_num": null
},
{
"text": "The electrical activity of neurons in the brain produces currents spreading through the head. These currents also reach the scalp surface, and the resulting voltage fluctuations on the scalp can be recorded as the electroencephalogram (EEG). The neuronal currents inside the head produce magnetic fields which can be measured above the scalp surface as the magnetoencephalogram (MEG). EEG signals reflect electrical brain activity with millisecond-accurate temporal resolution, but poor spatial resolution. Magnetic fields are less distorted than electric fields by the skull and scalp, which results in a better spatial resolution for MEG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EEG & MEG",
"sec_num": "2.2."
},
{
"text": "EEG signals have achieved fairly good results for classifying mental tasks (e.g., Zhang et al. (2018) ) or text difficulty (Chen et al., 2012) . Moreover, Parthasarathy and Busso (2017) presented a multi-task learning architecture for classifying emotions from auditory EEG stimulus. Additionally, Murphy and Poesio (2010) detect semantic categories (i.e. types of nouns, binary classification) from simultaneous EEG and MEG recordings, and found MEG to be more informative for this specific task. However, there is not much work in higher-level semantic or syntactic NLP tasks with larger number of classes due to the low signal-to-noise ratio. Hollenstein et al. (2019a) achieved only modest improvements when using EEG data for sentiment analysis, relation extraction and named entity recognition. For a review on the use of EEG signals for different classification tasks, including an overview of the ML methods, the artifact pre-processing strategies, and the input features, see Craik et al. (2019) . Further, there has been some work in understanding the parallels between machine and EEG language processing signals. For instance, Hale et al. (2018) showed that neural grammar models are able to learn some of the language processing effects that are manifested in EEG. Moreover, Wehbe et al. (2014b) were the first to align word-by-word MEG activity with embeddings from a recurrent neural language model. Schwartz et al. (2019) use MEG and fMRI to fine-tune a BERT language model (Devlin et al., 2019) and showed that the relationship between language and brain activity learned by BERT during this fine-tuning, transfers across multiple participants and performs well on downstream NLP tasks. In a similar fashion, Toneva and Wehbe (2019) compare and interpret word and sequence embeddings from various recent language models on word-by-word MEG and fMRI recordings.",
"cite_spans": [
{
"start": 82,
"end": 101,
"text": "Zhang et al. (2018)",
"ref_id": "BIBREF122"
},
{
"start": 123,
"end": 142,
"text": "(Chen et al., 2012)",
"ref_id": "BIBREF22"
},
{
"start": 646,
"end": 672,
"text": "Hollenstein et al. (2019a)",
"ref_id": "BIBREF57"
},
{
"start": 985,
"end": 1004,
"text": "Craik et al. (2019)",
"ref_id": "BIBREF25"
},
{
"start": 1139,
"end": 1157,
"text": "Hale et al. (2018)",
"ref_id": "BIBREF47"
},
{
"start": 1288,
"end": 1308,
"text": "Wehbe et al. (2014b)",
"ref_id": "BIBREF116"
},
{
"start": 1415,
"end": 1437,
"text": "Schwartz et al. (2019)",
"ref_id": "BIBREF100"
},
{
"start": 1490,
"end": 1511,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "EEG & MEG signals in NLP",
"sec_num": null
},
{
"text": "MEG and EEG data contain a large ratio of noise as well as signals from other non-language-related processes, but syntactic and semantic text processing is also known to contribute to the signal. Since EEG merely records signals on the brain surface, it is difficult to draw conclusions about which brain regions are more or less helpful for NLP models. MEG allows to localize the magnetic fields to their sources within the brain with good spatial resolution. The main challenge lies in cleaning the M/EEG recordings and extracting only the signals containing language processing information. First, artifacts from motor and ocular activities have to be removed. Recently, these tedious manual inspection and cleaning steps have been automatized (e.g., Pedroni et al. (2019) ), and efforts to unfold the electrophysiological responses from overlapping, continuous stimuli are being introduced (Ehinger and Dimigen, 2019) . Neuroscientists have studied in detail how to filter the M/EEG data based on certain effects occurring during language understanding, and the activity occurring in certain frequency bands. Two popular ways to analyze the EEG signal are power spectrum analysis and event-related potentials (ERPs). In power spectrum analyses, the average power of a signal in a specific frequency range is computed. The EEG signal is decomposed into functionally distinct frequency bands. These frequency ranges, which are fixed ranges of wave frequencies and amplitudes over a time scale, are known to correlate with certain cognitive functions. Theta activity (4-8 Hz) reflects cognitive control and working memory (Williams et al., 2019) ; alpha activity (8-12 Hz) has been related to attentiveness (Klimesch, 2012) ; beta frequencies (12-30 Hz) affect decisions regarding relevance, for instance, in term relevance tasks for information retrieval (Eugster et al., 2014) : and gamma-band activity (30-100 Hz) has been used to detect emotions (Li and Lu, 2009) . 
Hypotheses about the role of the various M/EEG frequency bands in language processing and more general cognitive function are a first step, but more work is needed to establish stronger hypotheses linking language to specific frequencies (Alday, 2019) . Second, ERPs are measured brain responses that are the direct result of a specific sensory, cognitive, or motor event. For instance, the N400 component, which peaks \u223c400ms after the onset of the stimulus, is part of the normal brain response to words and other meaningful stimuli (Kutas and Federmeier, 2000) . Brouwer et al. (2017) presented a neuro-computational model based on recurrent neural networks that successfully simulates the N400 and P600 amplitude in language comprehension. To the best of our knowledge, it has not yet been studied how useful ERP features are for improving natural language understanding tasks.",
"cite_spans": [
{
"start": 754,
"end": 775,
"text": "Pedroni et al. (2019)",
"ref_id": "BIBREF93"
},
{
"start": 894,
"end": 921,
"text": "(Ehinger and Dimigen, 2019)",
"ref_id": "BIBREF33"
},
{
"start": 1623,
"end": 1646,
"text": "(Williams et al., 2019)",
"ref_id": "BIBREF118"
},
{
"start": 1708,
"end": 1724,
"text": "(Klimesch, 2012)",
"ref_id": "BIBREF68"
},
{
"start": 1857,
"end": 1879,
"text": "(Eugster et al., 2014)",
"ref_id": "BIBREF34"
},
{
"start": 1951,
"end": 1968,
"text": "(Li and Lu, 2009)",
"ref_id": "BIBREF74"
},
{
"start": 2209,
"end": 2222,
"text": "(Alday, 2019)",
"ref_id": "BIBREF5"
},
{
"start": 2507,
"end": 2535,
"text": "(Kutas and Federmeier, 2000)",
"ref_id": "BIBREF71"
},
{
"start": 2538,
"end": 2559,
"text": "Brouwer et al. (2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Challenges in processing EEG & MEG signals",
"sec_num": null
},
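The power spectrum analysis described above (average power per frequency band) can be sketched as follows. This is a minimal illustration, not the cited authors' pipeline; it assumes a single pre-cleaned EEG channel sampled at a fixed rate, and the band boundaries follow the ranges given in the text.

```python
# Hedged sketch: average band power per EEG frequency band, computed from a
# simple FFT periodogram. Band limits (Hz) follow the ranges in the text.
import numpy as np

BANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (12, 30), "gamma": (30, 100)}

def band_power(signal, fs):
    """signal: 1-D array of one EEG channel; fs: sampling rate in Hz.
    Returns the mean periodogram power within each frequency band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# Sanity check on synthetic data: a pure 10 Hz oscillation should place
# almost all power in the alpha band.
fs = 250                              # assumed sampling rate
t = np.arange(0, 2, 1.0 / fs)
sig = np.sin(2 * np.pi * 10 * t)
powers = band_power(sig, fs)
```

In practice one would use an averaged estimator such as Welch's method over windowed segments rather than a single raw periodogram, but the band-masking logic is the same.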
{
"text": "FMRI is a neuroimaging technique that measures brain activity by the changes in the oxygen level of the blood. This technique relies on the fact that cerebral blood flow and neuronal activation are coupled: When a brain area is in use, blood flow to that area increases. FMRI produces 3D scans of the brain with high spatial resolution of the signal. For statistical analyses, the brain scan is fragmented into voxels which are cubes of constant size. The signal is interpreted as an activation value for every voxel. The number of voxels varies depending on the precision of the scanner and the size and shape of the participant's brain. The voxel location can be identified with 3dimensional coordinates, but the signal is commonly processed as a flattened vector which ignores the spatial relationships between the voxels. This rather naive modeling assumption simplifies the signal, but might lead to cognitively and biologically implausible findings. Most publicly available fMRI datasets have already undergone common statistical filters. These pre-processing steps correct for motion of the participant's head, account for different timing of the scan slices and adjust linear trends in the signal (Wikibooks, 2020) . In addition, the scans of the individual brains (which vary in size and shape) need to be aligned with a standardized template to group voxels into brain regions and allow for comparisons across subjects. Researchers using datasets that have been collected and published by another lab should be aware of the effect of these probabilistic corrections. They are necessary to further analyze the signal, but might also systematically add noise to the data and lead to misinterpretations.",
"cite_spans": [
{
"start": 1205,
"end": 1222,
"text": "(Wikibooks, 2020)",
"ref_id": "BIBREF117"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "FMRI",
"sec_num": "2.3."
},
{
"text": "In their pioneering work, Mitchell et al. (2008) measure the brain signal of nine human participants who are instructed to think about a concept. They average the signal for each of the 60 concepts over multiple trials. Their analysis results indicate that it is possible to distinguish between the correct and a random scan by computationally modeling the relations between concepts. Their dataset has become an evaluation benchmark to compare the cognitive plausibility of different word representation models (Fyshe et al., 2014; S\u00f8gaard, 2016; Abnar et al., 2018; Anderson et al., 2017; Bulat et al., 2017) . The presentation of individual concepts has the advantage that the signal can be directly linked to the experimental stimulus, but the experimental setup is very artificial compared to authentic language processing scenarios. Recently, fMRI datasets involving more naturalistic language stimuli such as sentences (Pereira et al., 2018) and even full stories (Wehbe et al., 2014a; Brennan, 2016; Huth et al., 2016; Dehghani et al., 2017) have been recorded and facilitate contextualized modeling of language processing. 1 Besides using fMRI signals to better understand and evaluate the structure of computational models of language, the signal has also been used to directly improve the performance on NLP tasks. Bingel et al. (2016) 2019showed that when the language model BERT (Devlin et al., 2019) is fine-tuned to align with brain recordings, it performs better at syntactic tasks such as subject-verb agreements. These result indicate a transfer of knowledge from human language processing to NLP tasks. So far, the reported improvements are very small and have not yet been verified on other datasets.",
"cite_spans": [
{
"start": 26,
"end": 48,
"text": "Mitchell et al. (2008)",
"ref_id": "BIBREF88"
},
{
"start": 512,
"end": 532,
"text": "(Fyshe et al., 2014;",
"ref_id": "BIBREF42"
},
{
"start": 533,
"end": 547,
"text": "S\u00f8gaard, 2016;",
"ref_id": "BIBREF103"
},
{
"start": 548,
"end": 567,
"text": "Abnar et al., 2018;",
"ref_id": "BIBREF0"
},
{
"start": 568,
"end": 590,
"text": "Anderson et al., 2017;",
"ref_id": "BIBREF6"
},
{
"start": 591,
"end": 610,
"text": "Bulat et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 926,
"end": 948,
"text": "(Pereira et al., 2018)",
"ref_id": "BIBREF95"
},
{
"start": 971,
"end": 992,
"text": "(Wehbe et al., 2014a;",
"ref_id": "BIBREF115"
},
{
"start": 993,
"end": 1007,
"text": "Brennan, 2016;",
"ref_id": "BIBREF17"
},
{
"start": 1008,
"end": 1026,
"text": "Huth et al., 2016;",
"ref_id": "BIBREF62"
},
{
"start": 1027,
"end": 1049,
"text": "Dehghani et al., 2017)",
"ref_id": "BIBREF27"
},
{
"start": 1326,
"end": 1346,
"text": "Bingel et al. (2016)",
"ref_id": "BIBREF14"
},
{
"start": 1392,
"end": 1413,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "FMRI signals in NLP",
"sec_num": null
},
{
"text": "As it takes several seconds to complete a full scan of the brain, the measured brain response cannot provide high temporal resolution. In addition, the hemodynamic response to a stimulus can only be measured with a delay of several seconds (Miezin et al., 2000) and it decays slowly. As a consequence, it is not possible to directly align fMRI responses with single words when they are presented as continuous stimuli. The delay can be modeled using hemodynamic response functions or more complex modeling techniques, but they do not work equally well in all areas of the brain (Shain et al., 2019) . It has not yet been investigated conclusively whether the fMRI signal is temporally fine-grained enough to detect syntax processing signals in the human brain. Gauthier and Levy (2019) showed experiments where only local grammatical dependencies can be decoded. However, Brennan et al. (2016) showed that for the right features (i.e. a count of tree nodes in a probabilistic context-free grammar model) fMRI is fast enough. More recent NLP studies avoid word-level alignment of fMRI data and analyse longer sequences of words instead (Schwartz et al., 2019; Abnar et al., 2019) . The large number of voxels in the fMRI representation leads to a very high-dimensional signal, but the number of stimuli is usually very small for machine learning standards. In order to fit a model, the dimensionality of the signal needs to be reduced because analysis methods such as correlation or similarity metrics often lead to unintuitive results when applied in high-dimensional spaces (Aggarwal et al., 2001) . From a processing perspective, data-driven dimensionality reduction methods on the training set are most attractive because they can work on the raw signal and do not rely on theory-driven assumptions (Kriegeskorte et al., 2006) . Examples are classification metrics such as explained variance which capture how much information a voxel contributes to a specific task (as in LaConte et al. 
(2003) and Michel et al. (2011) ). Another option is dimensionality reduction methods such as principal component analysis which reduce the dimensions while retaining most of the variance between responses (Gauthier and Levy, 2019) . Unfortunately, existing fMRI datasets for language processing are not yet large enough to enable direct representation learning, for example using autoencoders (Huang et al., 2017; Rowtula et al., 2018) . Instead, the signal is often restricted to voxels that fall within a pre-selected set of regions. These regions are commonly selected in a theory-driven manner based on neurolinguistic studies (Brennan et al., 2016; Wehbe et al., 2014a) . Fedorenko et al. (2010) proposed a method to select regions of interest functionally, i.e. pooling of data from corresponding functional regions across subjects. For instance, Abnar et al. (2019) only include the voxels from the top k regions that are most similar across different subjects given the same stimuli. Due to the technical requirements, fMRI studies mostly use only a small set of stimuli, which makes it hard to evaluate the effect size and the generalizability of the results (Hamilton and Huth, 2018) . Minnema and Herbelot (2019) perform experiments with additional data, which also led to the conclusion that there is simply not enough training data available yet to learn a precise mapping. Furthermore, experimental results are commonly not validated on additional datasets to ensure a more robust evaluation (Beinborn et al., 2019) .",
"cite_spans": [
{
"start": 240,
"end": 261,
"text": "(Miezin et al., 2000)",
"ref_id": "BIBREF83"
},
{
"start": 578,
"end": 598,
"text": "(Shain et al., 2019)",
"ref_id": "BIBREF101"
},
{
"start": 761,
"end": 785,
"text": "Gauthier and Levy (2019)",
"ref_id": "BIBREF44"
},
{
"start": 872,
"end": 893,
"text": "Brennan et al. (2016)",
"ref_id": "BIBREF16"
},
{
"start": 1135,
"end": 1158,
"text": "(Schwartz et al., 2019;",
"ref_id": "BIBREF100"
},
{
"start": 1159,
"end": 1178,
"text": "Abnar et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 1575,
"end": 1598,
"text": "(Aggarwal et al., 2001)",
"ref_id": "BIBREF2"
},
{
"start": 1802,
"end": 1829,
"text": "(Kriegeskorte et al., 2006)",
"ref_id": "BIBREF69"
},
{
"start": 1976,
"end": 1997,
"text": "LaConte et al. (2003)",
"ref_id": "BIBREF72"
},
{
"start": 2002,
"end": 2022,
"text": "Michel et al. (2011)",
"ref_id": "BIBREF82"
},
{
"start": 2198,
"end": 2223,
"text": "(Gauthier and Levy, 2019)",
"ref_id": "BIBREF44"
},
{
"start": 2386,
"end": 2406,
"text": "(Huang et al., 2017;",
"ref_id": "BIBREF61"
},
{
"start": 2407,
"end": 2428,
"text": "Rowtula et al., 2018)",
"ref_id": "BIBREF99"
},
{
"start": 2623,
"end": 2645,
"text": "(Brennan et al., 2016;",
"ref_id": "BIBREF17"
},
{
"start": 2646,
"end": 2666,
"text": "Wehbe et al., 2014a)",
"ref_id": "BIBREF115"
},
{
"start": 2669,
"end": 2692,
"text": "Fedorenko et al. (2010)",
"ref_id": "BIBREF37"
},
{
"start": 2846,
"end": 2865,
"text": "Abnar et al. (2019)",
"ref_id": "BIBREF1"
},
{
"start": 3160,
"end": 3185,
"text": "(Hamilton and Huth, 2018)",
"ref_id": "BIBREF48"
},
{
"start": 3188,
"end": 3215,
"text": "Minnema and Herbelot (2019)",
"ref_id": "BIBREF84"
},
{
"start": 3499,
"end": 3522,
"text": "(Beinborn et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Challenges in processing fMRI signals",
"sec_num": null
},
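The data-driven dimensionality reduction discussed above can be illustrated with a minimal principal component analysis over flattened voxel vectors. This is a hedged sketch, not the cited methods: the pure-NumPy SVD implementation, the matrix shapes, and the choice of k are all illustrative assumptions.

```python
# Illustrative sketch: PCA via SVD on an (n_stimuli, n_voxels) activation
# matrix, keeping the k components that retain most of the variance.
import numpy as np

def pca_reduce(scans, k):
    """scans: (n_stimuli, n_voxels) matrix of flattened voxel activations.
    Returns the (n_stimuli, k) projected scores and the fraction of
    total variance retained by the first k components."""
    centered = scans - scans.mean(axis=0)
    # Economy SVD: rows of vt are principal directions; squared singular
    # values are proportional to the variance each component explains.
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[:k].T
    explained = (s[:k] ** 2).sum() / (s ** 2).sum()
    return scores, explained

# Hypothetical dataset size: 60 stimuli (as in Mitchell et al.'s concept
# set), 5000 voxels; values are random here, purely for shape checking.
rng = np.random.default_rng(0)
scans = rng.standard_normal((60, 5000))
low_dim, var = pca_reduce(scans, k=10)
```

Note that with only tens of stimuli the sample covariance is rank-deficient, which is exactly the small-sample problem the section describes.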
{
"text": "When we want to use cognitive signals to improve our computational models, we are facing multiple modeling decisions. In this section, we discuss the advantages and disadvantages of each recording modality of cognitive signal, the aspects to consider when choosing a dataset, as well as which features can be extracted from the cognitive data, and finally, how they can be included in machine learning models and how these should be evaluated. The decision of which type of signal to work with and which dataset to use depend strongly on the type of research questions that we would like to address. In this section, we provide some guidelines on how to approach these decisions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General challenges",
"sec_num": "3."
},
{
"text": "An important aspect to take into account when choosing a type of cognitive signal is the linguistic level on which the signals are required: from word level, over phrase and sentence level to discourse level. Due to the low temporal resolution and the hemodynamic lag of fMRI, it is more appropriate to use eye-tracking or EEG of MEG data to extract word-level signals in continuous stimuli. Moreover, if using multiple datasets from the same recording modality, it is crucial to ensure proper pre-processing has been conducted on the datasets, or to apply the same pre-processing steps to all datasets. Eye-tracking, as an indirect metric of cognitive load during the different stages of reading processing, has numerous advantages. It is an accessible method to record millisecondaccurate eye movements and has successfully been leveraged to improve a wide range of NLP tasks on different text processing levels (see Table 1 for an overview). While the improvements on precision and recall are modest, they are consistent across tasks. The impressive body of psycholinguistic research, a range of established metrics, and the intuitive linking from features to words speak in favour of using eye-tracking for NLP. EEG is another recording technique with very high temporal resolution (i.e. resulting in multiple samples per second). However, as the electrodes measure electrical activity at the surface of the brain -through the bone -it is difficult to know exactly in which brain region the signal originated. EEG signals have been used frequently for classification in brain-computer-interfaces (e.g., classifying text difficulty for speech recognition (Chen et al., 2012) ), but have rarely been used to improve NLP tasks (Hollenstein et al., 2019a) . Moreover, there are still many open questions regarding which EEG features are most appropriate, and not much EEG data from naturalistic reading is yet openly available. 
MEG, however, yields better temporal and spatial resolution, which makes it very suitable for NLP. Unfortunately, not many MEG datasets from naturalistic studies are currently available. Finally, the fMRI signal exhibits opposite characteristics. Due to the precise 3D scans, the spatial resolution is very high; but, since it takes a few seconds to produce a scan over the full brain, the temporal resolution is very low. Recently, fMRI data has become popular in NLP to evaluate neural language models (e.g., Schwartz et al. (2019) ) and to improve word representations (Toneva and Wehbe, 2019) . It is useful to leverage fMRI signals if the localization of cognitive processes plays an important role and to investigate theories about specialized processing areas. Unfortunately, fMRI scans are less accessible and more expensive. Evidently, human language processing recordings are very noisy. Therefore, if possible, it is advisable to work with multiple datasets of the same modality, or to work with multiple modalities to achieve more robust results. It is insightful to run experiments on multiple cognitive datasets of the same modality. This ensures that the NLP models are not merely picking up on the noise in the cognitive data, but actually learning from language processing specific signals. For instance, Hollenstein and Zhang (2019) combine gaze features from three corpora, and Mensch et al. (2017) learn a shared representation across many fMRI datasets. Working with data from multiple modalities is also recommendable. For instance, Schwartz et al. (2019) used both MEG and fMRI data to inform language representations, and were able to show how using both modalities simultaneously improves their predictions. Furthermore, Hollenstein et al. (2019b) presented a framework for cognitive word embedding evaluation, where embeddings are evaluated by predicting eye-tracking, EEG and fMRI signals from 15 different datasets. 
Their results show clear correlations between these three modalities. Barrett et al. (2018b) combined eye-tracking features with prosodic features, keystroke logs from different corpora, and pre-trained word embeddings for part-of-speech induction and chunking. Several methods were used to project the features into a shared feature space, and canonical correlation analysis yielded the best results (Faruqui and Dyer, 2014) . Some studies provide data from multiple modalities recorded at different times on different subjects, but on the same stimulus. For example, the UCL corpus (Frank et al., 2013) contains self-paced reading times and eye-tracking data, and was later extended with EEG data (Frank et al., 2015) . Similarly, self-paced reading times and fMRI were recorded for the Natural Stories Corpus (Futrell et al., 2018; Shain et al., 2019) ; EEG and fMRI were recorded for the Alice corpus (Brennan et al., 2016; Hale et al., 2018) . For some sources, data from co-registration studies is available, which means that two modalities were recorded simultaneously during the same experiment. This has become more popular, since all three modalities are complementary in terms of temporal and spatial resolution as well as the directness with which they measure neural activity (Mulert, 2013) . Recent reports attest to the feasibility of co-registration studies for studying the neurobiology of natural reading (see Kandylaki and Bornkessel-Schlesewsky (2019) for a review). For example, eye-tracking and EEG have been recorded concurrently during reading (Dimigen et al., 2011; Henderson et al., 2013; Hollenstein et al., 2018; Hollenstein et al., 2019c) , as have eye-tracking and fMRI (Henderson et al., 2015; Henderson et al., 2016) . Using data from co-registration studies in NLP allows for comparisons on the same language stimuli, the same population, and the same language understanding task, where only the recording method differs. 
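The earlier advice on working with multiple datasets of the same modality could be sketched, for eye-tracking, by normalizing a gaze feature within each corpus before merging. This is a minimal, hypothetical illustration: the corpus names, words and feature values are all invented, and z-scoring is just one plausible way to remove recording- and population-specific scale differences.

```python
import numpy as np

# Hypothetical sketch: combine a gaze feature (e.g. total fixation duration)
# from two eye-tracking corpora. Values are z-scored within each corpus and
# then averaged per word. All names and numbers are invented.
corpora = {
    'corpus_a': {'the': 110.0, 'cat': 250.0, 'sat': 180.0},
    'corpus_b': {'the': 95.0, 'cat': 230.0},
}

def zscore(feats):
    vals = np.array(list(feats.values()))
    mu, sigma = vals.mean(), vals.std()
    return {w: (v - mu) / sigma for w, v in feats.items()}

normalized = [zscore(feats) for feats in corpora.values()]
words = set().union(*(d.keys() for d in normalized))
combined = {w: float(np.mean([d[w] for d in normalized if w in d]))
            for w in words}
print(sorted(combined))  # -> ['cat', 'sat', 'the']
```

The per-corpus normalization step is what allows corpora recorded with different hardware, sampling rates or reader populations to be merged on a common scale.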
Finally, the recording modalities of cognitive signals presented in this paper are complementary to each other: the information provided by each modality adds to the full picture. Hence, whether one leverages co-registration studies or simply data from multiple sources and multiple modalities, it is highly advisable to evaluate all experiments that aim to improve NLP models on more than one dataset and/or modality.",
"cite_spans": [
{
"start": 1658,
"end": 1677,
"text": "(Chen et al., 2012)",
"ref_id": "BIBREF22"
},
{
"start": 1728,
"end": 1755,
"text": "(Hollenstein et al., 2019a)",
"ref_id": "BIBREF57"
},
{
"start": 2438,
"end": 2460,
"text": "Schwartz et al. (2019)",
"ref_id": "BIBREF100"
},
{
"start": 2499,
"end": 2523,
"text": "(Toneva and Wehbe, 2019)",
"ref_id": "BIBREF109"
},
{
"start": 3932,
"end": 3954,
"text": "Barrett et al. (2018b)",
"ref_id": "BIBREF11"
},
{
"start": 4262,
"end": 4286,
"text": "(Faruqui and Dyer, 2014)",
"ref_id": "BIBREF36"
},
{
"start": 4445,
"end": 4465,
"text": "(Frank et al., 2013)",
"ref_id": "BIBREF38"
},
{
"start": 4560,
"end": 4580,
"text": "(Frank et al., 2015)",
"ref_id": "BIBREF39"
},
{
"start": 4673,
"end": 4695,
"text": "(Futrell et al., 2018;",
"ref_id": "BIBREF41"
},
{
"start": 4696,
"end": 4715,
"text": "Shain et al., 2019)",
"ref_id": "BIBREF101"
},
{
"start": 4766,
"end": 4788,
"text": "(Brennan et al., 2016;",
"ref_id": "BIBREF16"
},
{
"start": 4789,
"end": 4807,
"text": "Hale et al., 2018)",
"ref_id": "BIBREF47"
},
{
"start": 5143,
"end": 5157,
"text": "(Mulert, 2013)",
"ref_id": "BIBREF89"
},
{
"start": 5413,
"end": 5435,
"text": "(Dimigen et al., 2011;",
"ref_id": "BIBREF30"
},
{
"start": 5436,
"end": 5459,
"text": "Henderson et al., 2013;",
"ref_id": "BIBREF50"
},
{
"start": 5460,
"end": 5485,
"text": "Hollenstein et al., 2018;",
"ref_id": "BIBREF56"
},
{
"start": 5486,
"end": 5512,
"text": "Hollenstein et al., 2019c)",
"ref_id": "BIBREF59"
},
{
"start": 5550,
"end": 5574,
"text": "(Henderson et al., 2015;",
"ref_id": "BIBREF51"
},
{
"start": 5575,
"end": 5598,
"text": "Henderson et al., 2016)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [
{
"start": 919,
"end": 926,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Choosing the type of cognitive signals",
"sec_num": "3.1."
},
{
"text": "Datasets of human language processing signals should be chosen based on the research question. It is important to decide whether controlled experiments with clearly distinguishable conditions are required, for instance, if infrequent linguistic phenomena are of interest, or if natural stimuli are favorable to analyze real-world language (Hamilton and Huth, 2018) . As a example for controlled settings, Mitchell et al. (2008) recorded fMRI data from a isolated word stimuli of 60 concrete nouns. In reading studies, serial presentation of words has often been applied, where one word is presented at the time on the screen (e.g., Wehbe et al. (2014a) , Frank et al. (2015) ). In an EEG dataset provided by Broderick et al. (2018) , the participants also read sentences presented word-by-word. Half of the sentences ended with a congruent word and the other half with an incongruent word, so that the difference in the N400 components could be analyzed. This manipulation facilitates the processing and isolation of the cognitive signals, but it does not reflect processes of natural reading, in which the reader has access to full sentences or texts. Due to the different scopes in experimental research and NLP, it is seldom possible to directly draw conclusions concerning features from these studies to NLP: Speaking in broad terms, psycholinguistic and neurolinguistic studies provide evidence of human cognitive processing of text or speech primarily through controlled experiments. The experiment as well as the textual stimulus are carefully designed in order to isolate a specific cognitive process. Datadriven NLP works towards enabling computers to understand and manipulate naturally-occurring human language through machine learning models based on huge corpora. The phenomena that NLP models aim to model are typically much broader and less well-defined than what is examined in psycholinguistic studies. 
Recently, it has become more common to implement naturalistic reading experiments (Hamilton and Huth, 2018) . Naturalistic reading denotes self-paced reading of naturally-occurring text without any specific task or reading constraints, such as limiting the preview of the following words. This allows subjects to read at their own speed and results in different reading times between subjects, which calls for more elaborate pre-processing. Naturalistic reading studies diverge from tightly controlled experimental designs and allow the participants to read continuous stimuli, i.e. full sentences or paragraphs spanning multiple lines on the screen. In addition to the more natural setting, a major advantage is the possibility of studying linguistic phenomena on different levels (e.g., phonemes, syllables, words, phrases, sentences, discourse), which unfold at different timescales in the same naturalistic stimulus, such as a story. Moreover, naturalistic experimental designs, which use language within the rich context of stories, audiobooks, and dialogues, produce results that are more easily generalizable to everyday language use (Kandylaki and Bornkessel-Schlesewsky, 2019) . Since the generalizability of results is one of the main objectives in experimental science, the potential importance of the increased ecological validity of naturalistic experimental paradigms is undeniable. An example of the use of continuous, naturalistic stimuli is the dataset by Hollenstein et al. (2018) . They recorded eye-tracking and EEG signals of participants silently reading full real-world sentences. In Broderick et al. (2018) and Shain et al. (2019) , subjects listened to full stories during EEG and fMRI recordings, respectively. In addition to the studies mentioned in this paper, a collection of openly available cognitive datasets useful for NLP in various languages can be found online. 2",
"cite_spans": [
{
"start": 339,
"end": 364,
"text": "(Hamilton and Huth, 2018)",
"ref_id": "BIBREF48"
},
{
"start": 405,
"end": 427,
"text": "Mitchell et al. (2008)",
"ref_id": "BIBREF88"
},
{
"start": 632,
"end": 652,
"text": "Wehbe et al. (2014a)",
"ref_id": "BIBREF115"
},
{
"start": 655,
"end": 674,
"text": "Frank et al. (2015)",
"ref_id": "BIBREF39"
},
{
"start": 708,
"end": 731,
"text": "Broderick et al. (2018)",
"ref_id": "BIBREF18"
},
{
"start": 2002,
"end": 2027,
"text": "(Hamilton and Huth, 2018)",
"ref_id": "BIBREF48"
},
{
"start": 3056,
"end": 3100,
"text": "(Kandylaki and Bornkessel-Schlesewsky, 2019)",
"ref_id": "BIBREF64"
},
{
"start": 3379,
"end": 3404,
"text": "Hollenstein et al. (2018)",
"ref_id": "BIBREF56"
},
{
"start": 3513,
"end": 3536,
"text": "Broderick et al. (2018)",
"ref_id": "BIBREF18"
},
{
"start": 3541,
"end": 3560,
"text": "Shain et al. (2019)",
"ref_id": "BIBREF101"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting a dataset",
"sec_num": "3.2."
},
{
"text": "The majority of research in NLP, as well as most of the available cognitive data sources is in English. However, it is well known that language processing between native and foreign language speakers differs in the active brain regions (Perani et al., 1996) . Moreover, second language learners exhibit different reading patterns than native speakers (Dussias, 2010). Eye-tracking and fMRI studies on bilingualism suggest that, although the same general structures are active for both languages, differences within these general structures are present across languages and across levels of processing (Marian et al., 2003; Dehghani et al., 2017) . In an effort to promote eye-tracking research of bilingual reading, Cop et al. (2017) provide an English-Dutch eye-tracking corpus tailored to analyze the bilingual reading process. Further, there are even differences in the processing of dialects and standard variations, e.g., Lundquist and Vangsnes (2018) for Norwegian dialects and Stocker and Hartmann (2019) for variations of German. Hence, it is not only important to take language-specific aspects into account in the NLP methods, but it is crucial to account for these differences in human language processing. It remains an open questions how many of the referenced studies in this paper would generalize to other languages.",
"cite_spans": [
{
"start": 236,
"end": 257,
"text": "(Perani et al., 1996)",
"ref_id": "BIBREF94"
},
{
"start": 601,
"end": 622,
"text": "(Marian et al., 2003;",
"ref_id": "BIBREF79"
},
{
"start": 623,
"end": 645,
"text": "Dehghani et al., 2017)",
"ref_id": "BIBREF27"
},
{
"start": 716,
"end": 733,
"text": "Cop et al. (2017)",
"ref_id": "BIBREF24"
},
{
"start": 927,
"end": 956,
"text": "Lundquist and Vangsnes (2018)",
"ref_id": "BIBREF78"
},
{
"start": 984,
"end": 1011,
"text": "Stocker and Hartmann (2019)",
"ref_id": "BIBREF104"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual neurolinguistics",
"sec_num": null
},
{
"text": "This section covers different approaches to find the most meaningful features from human language processing recordings. NLP studies that leverage human gaze signals from reading mostly use a broad range of established features, encompassing both early and late measures of cognitive processing. These features are then used in machine learning systems to learn patterns. Barrett et al. (2016a) use 22 features for part-of-speech induction, Hollenstein and Zhang (2019) use 17 features for named entity recognition, and Strzyz et al. (2019) use 12 features for dependency parsing. Studies that systematically test different combinations of features, generally reveal that using a broad range of established features, such as first, mean and total fixation duration, yield the largest improvements (Barrett et al., 2016a; Yaneva et al., 2018; Hollenstein and Zhang, 2019; Rohanian et al., 2017) . Most studies combine linguistic features with gaze features (e.g., Rohanian et al. (2017) and Yaneva et al. (2018) ). Further, Barrett et al. (2016a) use word frequency and word length features in combination with eye-tracking features, because the two properties explain much of the variance in fixation duration (Just and Carpenter, 1980; Levy, 2008) . Results by Demberg and Keller (2008) and Lopopolo et al. (2019) showed a relation between regression features and the syntactic structure of sentences: About 40% of regressions land on target words engaged in dependency relations. Moreover, many other properties such as transitional probabilities or age of acquisition could also be used. In Hollenstein and Zhang (2019) and Barrett et al. (2018b) , gaze features are combined with pre-trained word embeddings to improve performance. All these works, however, rely on rather heavy feature engineering. Contrariwise, these features can also be predicted from text: Hahn and Keller (2016) presented an unsupervised neural model of human reading by predicting the fixations within sentences. 
Similarly, Matthies and S\u00f8gaard (2013) predict skipping probabilities across multiple readers. Moreover, Singh et al. (2016) introduced a method in which eye movements are learned, alleviating the need to annotate the task data with eye movements. A similar approach is also used by Long et al. (2019) . Analogously, fMRI signals have been predicted from language model representations, e.g., by Rodrigues et al. (2018) and Abnar et al. (2018) . In general, feature engineering for M/EEG and fMRI data is more a matter of dimensionality reduction. For instance, most studies leveraging M/EEG data for NLP average the signals over all electrodes or sensors (e.g., Wehbe et al. (2014b) ). Moreover, methods such as principal component analysis are often used to reduce the dimensions of both M/EEG and fMRI data. In the case of fMRI data, we mention several strategies for voxel selection in Section 2.3. that reduce the number of dimensions. For M/EEG signals, it is also possible to work with frequency-band features or ERPs based on neurolinguistic findings (see Section 2.2.). However, these features have not yet been explored in detail for improving NLP tasks.",
"cite_spans": [
{
"start": 372,
"end": 394,
"text": "Barrett et al. (2016a)",
"ref_id": "BIBREF8"
},
{
"start": 441,
"end": 469,
"text": "Hollenstein and Zhang (2019)",
"ref_id": "BIBREF55"
},
{
"start": 520,
"end": 540,
"text": "Strzyz et al. (2019)",
"ref_id": "BIBREF105"
},
{
"start": 797,
"end": 820,
"text": "(Barrett et al., 2016a;",
"ref_id": "BIBREF8"
},
{
"start": 821,
"end": 841,
"text": "Yaneva et al., 2018;",
"ref_id": "BIBREF121"
},
{
"start": 842,
"end": 870,
"text": "Hollenstein and Zhang, 2019;",
"ref_id": "BIBREF55"
},
{
"start": 871,
"end": 893,
"text": "Rohanian et al., 2017)",
"ref_id": "BIBREF98"
},
{
"start": 963,
"end": 985,
"text": "Rohanian et al. (2017)",
"ref_id": "BIBREF98"
},
{
"start": 990,
"end": 1010,
"text": "Yaneva et al. (2018)",
"ref_id": "BIBREF121"
},
{
"start": 1023,
"end": 1045,
"text": "Barrett et al. (2016a)",
"ref_id": "BIBREF8"
},
{
"start": 1210,
"end": 1236,
"text": "(Just and Carpenter, 1980;",
"ref_id": "BIBREF63"
},
{
"start": 1237,
"end": 1248,
"text": "Levy, 2008)",
"ref_id": "BIBREF73"
},
{
"start": 1262,
"end": 1287,
"text": "Demberg and Keller (2008)",
"ref_id": "BIBREF28"
},
{
"start": 1292,
"end": 1314,
"text": "Lopopolo et al. (2019)",
"ref_id": "BIBREF77"
},
{
"start": 1627,
"end": 1649,
"text": "Barrett et al. (2018b)",
"ref_id": "BIBREF11"
},
{
"start": 2002,
"end": 2029,
"text": "Matthies and S\u00f8gaard (2013)",
"ref_id": "BIBREF80"
},
{
"start": 2096,
"end": 2115,
"text": "Singh et al. (2016)",
"ref_id": "BIBREF102"
},
{
"start": 2285,
"end": 2303,
"text": "Long et al. (2019)",
"ref_id": "BIBREF76"
},
{
"start": 2394,
"end": 2417,
"text": "Rodrigues et al. (2018)",
"ref_id": "BIBREF97"
},
{
"start": 2422,
"end": 2441,
"text": "Abnar et al. (2018)",
"ref_id": "BIBREF0"
},
{
"start": 2661,
"end": 2681,
"text": "Wehbe et al. (2014b)",
"ref_id": "BIBREF116"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting features",
"sec_num": "3.3."
},
{
"text": "Controlled psycholinguistic studies include multiple subjects to obtain significant differences considering the effect sizes of interest (Vasishth et al., 2018) . In many NLP studies that use eye movements as word representations, eye movement metrics are averaged over several readers arguing for more stability and less noise, but most studies are limited by number of words and readers in the provided corpora (Rohanian et al., 2017; Yaneva et al., 2018; Mishra et al., 2017b; Hollenstein et al., 2019a) . But how many subjects are required to obtain a robust average signal for NLP? Gaze annotation can never be a gold annotation, irrespective of the number of readers. It is intrinsically noisy and there is no uniquely correct reading pattern. Skilled readers will exhibit a more idiosyncratic reading behaviour under similar conditions. Language learners or readers with reading impairments will exhibit a noisier signal, that is difficult to use in NLP (Bingel et al., 2018) . Takmaz et al. (2019) compared aggregated gaze features and sequential features for generating image captions. Hollenstein et al. (2019a) used eye movement and EEG features to improve named entity recognition, relation classification and sentiment classification. They showed that averaging over ten skilled native readers is able to diminish the noise and variability between subjects, to the extent where the average worked almost as good as the best individual reader, for both gaze and EEG models. While subject variability is even larger in fMRI signals, averaging over participants can help to avoid overfitting (Bingel et al., 2016) . Moreover, Schwartz et al. (2019) showed how a language model fine-tuned with fMRI brain activity data transfers across multiple participants.",
"cite_spans": [
{
"start": 137,
"end": 160,
"text": "(Vasishth et al., 2018)",
"ref_id": "BIBREF110"
},
{
"start": 413,
"end": 436,
"text": "(Rohanian et al., 2017;",
"ref_id": "BIBREF98"
},
{
"start": 437,
"end": 457,
"text": "Yaneva et al., 2018;",
"ref_id": "BIBREF121"
},
{
"start": 458,
"end": 479,
"text": "Mishra et al., 2017b;",
"ref_id": "BIBREF87"
},
{
"start": 480,
"end": 506,
"text": "Hollenstein et al., 2019a)",
"ref_id": "BIBREF57"
},
{
"start": 961,
"end": 982,
"text": "(Bingel et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 985,
"end": 1005,
"text": "Takmaz et al. (2019)",
"ref_id": "BIBREF107"
},
{
"start": 1095,
"end": 1121,
"text": "Hollenstein et al. (2019a)",
"ref_id": "BIBREF57"
},
{
"start": 1602,
"end": 1623,
"text": "(Bingel et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 1636,
"end": 1658,
"text": "Schwartz et al. (2019)",
"ref_id": "BIBREF100"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Aggregating features",
"sec_num": null
},
{
"text": "In some studies, averages of gaze features over word types have been used to alleviate the need of having gaze data at test time, and even achieved better results than tokenlevel features (Barrett et al., 2016a; Hollenstein and Zhang, 2019) . Klerke and Plank (2019) analyzed this in detail for PoS tagging and found that content words are especially sensitive to type-level gaze features. For recordings of continuous stimuli, the EEG samples have to be mapped to the points in time where a word (or phrase) was heard or read. Hauk and Pulverm\u00fcller (2004) presented evidence that lexical access from written word stimuli is an early process that follows stimulus presentation by less than 200 ms. Between 200-500ms, the word's semantic properties are processed (Wehbe et al., 2014b) . Moreover, Dimigen et al. (2011) studied the linguistic effects of eye movements and EEG signal co-registration in natural reading and showed that they accurately represent lexical processing. This suggest that, in the case of reading, the brain processes words when they are fixated for the first time, so that by mapping the EEG samples to the corresponding reading times it is possible to extract wordlevel EEG features. In combination with the eye-tracking, the high sampling rate of EEG allows us to get a definable signal for each token. In case of listening, the EEG signals can simply be mapped to the timestamps of the utterances. Analogous to the type aggregation approach described for eye-tracking signals, token-level EEG and fMRI features can be aggregated on word type level (Hollenstein et al., 2019a; Bingel et al., 2016) . This eliminates the need of recorded data at test time, however the results are more promising for eye-tracking data than for brain activity. In the case of fMRI, however, extracting token-level or type-level signals from continuous stimuli is less recommendable. A few studies have extracted token-level features from scans of a few seconds of duration. 
Bingel et al. (2016) computed individual word features for PoS induction by accounting for the hemodynamic delay using a Gaussian sliding window. Hollenstein et al. (2019b) also account for this delay when extracting word-level features, and then average the word features over multiple trials from different contexts. It is difficult to quantify how much of the information about single-word processing is captured in these signals. In fMRI studies, models are most often trained separately for each subject due to the large individual differences. It is, however, also possible to learn a shared representation between subjects (Vodrahalli et al., 2018) . Additionally, the signal can be averaged if multiple trials are available per stimulus, as in Mitchell et al. (2008) .",
"cite_spans": [
{
"start": 188,
"end": 211,
"text": "(Barrett et al., 2016a;",
"ref_id": "BIBREF8"
},
{
"start": 212,
"end": 240,
"text": "Hollenstein and Zhang, 2019)",
"ref_id": "BIBREF55"
},
{
"start": 243,
"end": 266,
"text": "Klerke and Plank (2019)",
"ref_id": "BIBREF65"
},
{
"start": 528,
"end": 556,
"text": "Hauk and Pulverm\u00fcller (2004)",
"ref_id": "BIBREF49"
},
{
"start": 762,
"end": 783,
"text": "(Wehbe et al., 2014b)",
"ref_id": "BIBREF116"
},
{
"start": 796,
"end": 817,
"text": "Dimigen et al. (2011)",
"ref_id": "BIBREF30"
},
{
"start": 1575,
"end": 1602,
"text": "(Hollenstein et al., 2019a;",
"ref_id": "BIBREF57"
},
{
"start": 1603,
"end": 1623,
"text": "Bingel et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 1981,
"end": 2001,
"text": "Bingel et al. (2016)",
"ref_id": "BIBREF14"
},
{
"start": 2635,
"end": 2660,
"text": "(Vodrahalli et al., 2018)",
"ref_id": "BIBREF111"
},
{
"start": 2756,
"end": 2778,
"text": "Mitchell et al. (2008)",
"ref_id": "BIBREF88"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word-level signals",
"sec_num": null
},
{
"text": "This section describes the most common machine learning methods for leveraging human cognitive processing for NLP. In most applications of systems using human data, it is sub-optimal to require real-time human features at test time. For eye-tracking, there are several studies working towards not requiring recordings during inference. We start by outlining those methods and move to other cognitive signals thereafter. When using human language processing data recorded from continuous stimuli, it is intuitive to implement sequence labelling or sequence classification approaches. For instance, Strzyz et al. (2019) argue in favor of using bidirectional LSTMs for predicting eye-movement information. Many of other studies have leveraged similar neural architectures, for example, Klerke et al. (2016) and Hollenstein and Zhang (2019) . A basic approach is to include cognitive features as multidimensional vectors to represent each word, possibly along with other word-based features. For instace, Rohanian et al. (2017) , Barrett and S\u00f8gaard (2015) and Yaneva et al. (2018) implemented this approach for eye-tracking data. However, this requires gaze data at test time. Barrett et al. (2016a) and Barrett et al. (2016b) showed that wordtype averages of gaze features yielded better results for PoS induction than token-level features. In this case, gaze representations are used similarly to word embeddings, with which they can also be combined (Barrett et al., 2018b) . Klerke and Plank (2019) analyzed this in detail for PoS tagging and showed that word type variance was better than individual gaze representations and less aggregated gaze features. Additionally, Hollenstein and Zhang (2019) showed the same advantages of type-level aggregated features for improving named entity recognition on corpora with no available gaze features during training and testing. However, type aggregation on EEG data has not shown the same positive benefits (Hollenstein et al., 2019a) . 
Concatenating cognitive features has also been tested with brain activity data. Bingel et al. (2016) concatenate extracted fMRI vectors from multiple subjects with linguistic features. Moreover, Schwartz et al. (2019) include fMRI and MEG data to augment a language model by fine-tuning a model trained on textual input with brain activity signals. In addition, multi-task learning is a method of training a system that inherently does not need human data at test time. Multi-task learning studies typically use only one feature, but that is most likely due to constraints in the model architecture, i.e. an increasing number of parameters leads to longer training times. Hollenstein et al. (2019a) trained multi-task learning models to learn eye-tracking and EEG features at the same time as NLP tasks such as sentiment analysis and relation detection. Multi-task learning has also been successful in generalizing across subjects from EEG data, for applications such as brain-computer interfaces (Alamgir et al., 2010) . Leveraging eye-tracking data, Gonz\u00e1lez-Gardu\u00f1o and S\u00f8gaard (2017), Klerke et al. (2016) and Klerke and Plank (2019) employ a multi-task learning setup for text compression, readability prediction, and syntactic tagging, respectively, while also learning to predict a gaze feature as an auxiliary task. Lastly, another related option is to regularise the attention of a recurrent neural network with human data for sequence classification. Attention weights determine the relative importance of each word for the model, but require large amounts of data to be trained. Barrett et al. (2018a) used sentences from the main dataset to update the model parameters, while sentences from a smaller, non-overlapping eye-tracking corpus were used to train only the attention function. Regularising the attention function could also be done using other human measures such as EEG.",
"cite_spans": [
{
"start": 597,
"end": 617,
"text": "Strzyz et al. (2019)",
"ref_id": "BIBREF105"
},
{
"start": 783,
"end": 803,
"text": "Klerke et al. (2016)",
"ref_id": "BIBREF66"
},
{
"start": 824,
"end": 836,
"text": "Zhang (2019)",
"ref_id": "BIBREF55"
},
{
"start": 1001,
"end": 1023,
"text": "Rohanian et al. (2017)",
"ref_id": "BIBREF98"
},
{
"start": 1026,
"end": 1052,
"text": "Barrett and S\u00f8gaard (2015)",
"ref_id": "BIBREF7"
},
{
"start": 1057,
"end": 1077,
"text": "Yaneva et al. (2018)",
"ref_id": "BIBREF121"
},
{
"start": 1174,
"end": 1196,
"text": "Barrett et al. (2016a)",
"ref_id": "BIBREF8"
},
{
"start": 1201,
"end": 1223,
"text": "Barrett et al. (2016b)",
"ref_id": "BIBREF9"
},
{
"start": 1450,
"end": 1473,
"text": "(Barrett et al., 2018b)",
"ref_id": "BIBREF11"
},
{
"start": 1476,
"end": 1499,
"text": "Klerke and Plank (2019)",
"ref_id": "BIBREF65"
},
{
"start": 1672,
"end": 1700,
"text": "Hollenstein and Zhang (2019)",
"ref_id": "BIBREF55"
},
{
"start": 1952,
"end": 1979,
"text": "(Hollenstein et al., 2019a)",
"ref_id": "BIBREF57"
},
{
"start": 2178,
"end": 2200,
"text": "Schwartz et al. (2019)",
"ref_id": "BIBREF100"
},
{
"start": 2660,
"end": 2686,
"text": "Hollenstein et al. (2019a)",
"ref_id": "BIBREF57"
},
{
"start": 2987,
"end": 3009,
"text": "(Alamgir et al., 2010)",
"ref_id": null
},
{
"start": 3079,
"end": 3099,
"text": "Klerke et al. (2016)",
"ref_id": "BIBREF66"
},
{
"start": 3104,
"end": 3127,
"text": "Klerke and Plank (2019)",
"ref_id": "BIBREF65"
},
{
"start": 3579,
"end": 3601,
"text": "Barrett et al. (2018a)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Including the features in the models",
"sec_num": "3.4."
},
{
"text": "On one hand, natural language understanding models are mostly optimized for performance on specific tasks and typically do not transfer well to other tasks or even other datasets (Talman and Chatzikyriakidis, 2019) . On the other hand, cognitive signals are typically constrained to their experimental design and stimuli. These discrepancies may lead to limitations in the possible improvements when leveraging cognitive signals to enhance NLP models. Indeed, the improvements achieved with cognitive signals are often modest. Therefore, we want to highlight the importance of robust baselines and proper significance testing. Examples of strong baselines are, for instance, word frequency for eye-tracking signals to ensure that the cognitive features add more to the model than purely lexical aspects; or comparing EEG and fMRI feature vectors to random vectors to guarantee that the cognitive features contain more than added dimensions of noise. Additionally, after achieving better results than strong baseline models, one needs to ascertain that the improvements are not due to some artifacts in the cognitive data. Hence, it is vital to perform suitable significance tests, such as permutation tests (Dror et al., 2018) . Furthermore, Gauthier and Ivanova (2018) propose three highly sensible strategies for making language decoding studies from brain activity more interpretable: (1) committing to a specific mechanism and task, which would help to distinctly link brain activity features to specific NLP tasks,",
"cite_spans": [
{
"start": 179,
"end": 214,
"text": "(Talman and Chatzikyriakidis, 2019)",
"ref_id": "BIBREF108"
},
{
"start": 1207,
"end": 1226,
"text": "(Dror et al., 2018)",
"ref_id": "BIBREF31"
},
{
"start": 1242,
"end": 1269,
"text": "Gauthier and Ivanova (2018)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring improvements",
"sec_num": "3.5."
},
{
"text": "(2) dividing the input feature space into subsets that capture representations optimized for a particular task, and (3) explicitly measuring explained variance to evaluate the extent to which each model component explain the overall brain responses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measuring improvements",
"sec_num": "3.5."
},
{
"text": "To conclude this paper, we address some of the ethical considerations that arise when working with human language processing signals for NLP. As researchers in this area, we mostly make use of existing datasets that have been collected by psychology researchers. Nevertheless, the following ethical aspects should be taken into account. First, we want to highlight the necessity of considering the high-level consequences of our work. It becomes increasingly relevant to examine the implications of the interaction between humans and machines, between what can be recorded from a human brain and what can be extracted from those signals. What is the potential of the derived results? What is the objective of the final application? What is the impact on people and society? Suster et al. (2017) describe this aspect as the dual use of data: Applications leveraging cognitive cues for improving NLP (and many other machine learning applications) have the potential to be applied in both beneficial and harmful ways. Second, it is essential to remember the responsibility towards research subjects and towards protecting the individual (Suster et al., 2017) . All collected data comes from humans willing to share their brain activity for research. Hence, the participants as well as their data should be treated respectfully, even if as NLP practitioners we are leveraging provided data and not recording it ourselves. Although the data is anonymized after recording, we should refrain from drawing inferences from our models back to single participants. Finally, the origins of the data and any biases within them should be considered. Most psychological studies are based on Western, educated, industrialized, rich, and democratic research participants (so-called WEIRD, Henrich et al. (2010) ). By assuming that human nature is so universal that findings on this group would translate to all other demographics, this has led to a heavily biased collection of psychological data. 
The potential consequences of exclusion or demographic misrepresentation should not be ignored (Hovy and Spruit, 2016) . One step further, Caliskan et al. (2017) showed that text corpora contain recoverable and accurate imprints of our historic biases. These biases can be extracted from text, and are also reflected in eye movements and brain activity recordings (Wu et al., 2012; Herlitz and Lov\u00e9n, 2013; Fabi and Leuthold, 2018) . Thus, it is very important to remember that with extensive reuse of the same corpora these biases -participant sampling as well as experimental biases -are propagated to many experiments, and researchers should be careful in the interpretation of the results.",
"cite_spans": [
{
"start": 774,
"end": 794,
"text": "Suster et al. (2017)",
"ref_id": "BIBREF106"
},
{
"start": 1134,
"end": 1155,
"text": "(Suster et al., 2017)",
"ref_id": "BIBREF106"
},
{
"start": 1772,
"end": 1793,
"text": "Henrich et al. (2010)",
"ref_id": "BIBREF53"
},
{
"start": 2076,
"end": 2099,
"text": "(Hovy and Spruit, 2016)",
"ref_id": "BIBREF60"
},
{
"start": 2120,
"end": 2142,
"text": "Caliskan et al. (2017)",
"ref_id": "BIBREF21"
},
{
"start": 2345,
"end": 2362,
"text": "(Wu et al., 2012;",
"ref_id": "BIBREF119"
},
{
"start": 2363,
"end": 2387,
"text": "Herlitz and Lov\u00e9n, 2013;",
"ref_id": "BIBREF54"
},
{
"start": 2388,
"end": 2412,
"text": "Fabi and Leuthold, 2018)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical considerations",
"sec_num": "4."
},
{
"text": "Not all of these datasets are publicly available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/norahollenstein/cognitiveNLP-dataCollection",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Author L. Beinborn was funded by the Netherlands Organisation for Scientific Research, through a Gravitation Grant 024.001.006 to the Language in Interaction Consortium.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Experiential, distributional and dependencybased word embeddings have complementary roles in decoding brain activity",
"authors": [
{
"first": "S",
"middle": [],
"last": "Abnar",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mijnheer",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Zuidema",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2018)",
"volume": "",
"issue": "",
"pages": "57--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abnar, S., Ahmed, R., Mijnheer, M., and Zuidema, W. (2018). Experiential, distributional and dependency- based word embeddings have complementary roles in decoding brain activity. In Proceedings of the 8th Work- shop on Cognitive Modeling and Computational Linguis- tics (CMCL 2018), pages 57-66.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Blackbox meets blackbox: Representational similarity & stability analysis of neural language models and brains",
"authors": [
{
"first": "S",
"middle": [],
"last": "Abnar",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Beinborn",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Choenni",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Zuidema",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "191--203",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abnar, S., Beinborn, L., Choenni, R., and Zuidema, W. (2019). Blackbox meets blackbox: Representational similarity & stability analysis of neural language mod- els and brains. In Proceedings of the 2019 ACL Work- shop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 191-203.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "On the surprising behavior of distance metrics in high dimensional space",
"authors": [
{
"first": "C",
"middle": [
"C"
],
"last": "Aggarwal",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Hinneburg",
"suffix": ""
},
{
"first": "D",
"middle": [
"A"
],
"last": "Keim",
"suffix": ""
}
],
"year": 2001,
"venue": "Database Theory -ICDT 2001",
"volume": "",
"issue": "",
"pages": "420--434",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aggarwal, C. C., Hinneburg, A., and Keim, D. A. (2001). On the surprising behavior of distance metrics in high di- mensional space. In Jan Van den Bussche et al., editors, Database Theory -ICDT 2001, pages 420-434, Berlin, Heidelberg. Springer Berlin Heidelberg.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Multitask learning for brain-computer interfaces",
"authors": [],
"year": null,
"venue": "Proceedings of the thirteenth international conference on artificial intelligence and statistics",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Multitask learning for brain-computer interfaces. In Pro- ceedings of the thirteenth international conference on ar- tificial intelligence and statistics, pages 17-24.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "M/EEG analysis of naturalistic stories: a review from speech to language processing. Language",
"authors": [
{
"first": "P",
"middle": [
"M"
],
"last": "Alday",
"suffix": ""
}
],
"year": 2019,
"venue": "Cognition and Neuroscience",
"volume": "34",
"issue": "4",
"pages": "457--473",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alday, P. M. (2019). M/EEG analysis of naturalistic sto- ries: a review from speech to language processing. Lan- guage, Cognition and Neuroscience, 34(4):457-473.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Visually grounded and textual semantic models differentially decode brain activity associated with concrete and abstract nouns",
"authors": [
{
"first": "A",
"middle": [
"J"
],
"last": "Anderson",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "17--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anderson, A. J., Kiela, D., Clark, S., and Poesio, M. (2017). Visually grounded and textual semantic models differentially decode brain activity associated with con- crete and abstract nouns. Transactions of the Association for Computational Linguistics, 5:17-30.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Reading behavior predicts syntactic categories",
"authors": [
{
"first": "M",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the nineteenth conference on computational natural language learning",
"volume": "",
"issue": "",
"pages": "345--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barrett, M. and S\u00f8gaard, A. (2015). Reading behavior pre- dicts syntactic categories. In Proceedings of the nine- teenth conference on computational natural language learning, pages 345-249.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Weakly supervised part-of-speech tagging using eyetracking data",
"authors": [
{
"first": "M",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bingel",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Keller",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "579--584",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barrett, M., Bingel, J., Keller, F., and S\u00f8gaard, A. (2016a). Weakly supervised part-of-speech tagging using eye- tracking data. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics, volume 2, pages 579-584.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Crosslingual transfer of correlations between parts of speech and gaze features",
"authors": [
{
"first": "M",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Keller",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "1330--1339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barrett, M., Keller, F., and S\u00f8gaard, A. (2016b). Cross- lingual transfer of correlations between parts of speech and gaze features. In Proceedings of the 26th Interna- tional Conference on Computational Linguistics: Tech- nical Papers, pages 1330-1339.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Sequence classification with human attention",
"authors": [
{
"first": "M",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bingel",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Hollenstein",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Rei",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "302--312",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barrett, M., Bingel, J., Hollenstein, N., Rei, M., and S\u00f8gaard, A. (2018a). Sequence classification with hu- man attention. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 302- 312.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Unsupervised induction of linguistic categories with records of reading, speaking, and writing",
"authors": [
{
"first": "M",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "A",
"middle": [
"V"
],
"last": "Gonz\u00e1lez-Gardu\u00f1o",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Frermann",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2028--2038",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barrett, M., Gonz\u00e1lez-Gardu\u00f1o, A. V., Frermann, L., and S\u00f8gaard, A. (2018b). Unsupervised induction of lin- guistic categories with records of reading, speaking, and writing. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2028-2038.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Robust evaluation of language-brain encoding experiments",
"authors": [
{
"first": "L",
"middle": [],
"last": "Beinborn",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Abnar",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Choenni",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.02547"
]
},
"num": null,
"urls": [],
"raw_text": "Beinborn, L., Abnar, S., and Choenni, R. (2019). Ro- bust evaluation of language-brain encoding experiments. arXiv preprint arXiv:1904.02547.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Predicting native language from gaze",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Berzak",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Nakamura",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Flynn",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Katz",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "541--551",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Berzak, Y., Nakamura, C., Flynn, S., and Katz, B. (2017). Predicting native language from gaze. In Proceedings of the 55th Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 541- 551.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Extracting token-level signals of syntactic processing from fMRIwith an application to PoS induction",
"authors": [
{
"first": "J",
"middle": [],
"last": "Bingel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "747--755",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bingel, J., Barrett, M., and S\u00f8gaard, A. (2016). Extracting token-level signals of syntactic processing from fMRI - with an application to PoS induction. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), volume 1, pages 747-755. ACL.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Predicting misreadings from gaze in children with reading difficulties",
"authors": [
{
"first": "J",
"middle": [],
"last": "Bingel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Klerke",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the thirteenth workshop on innovative use of NLP for building educational applications",
"volume": "",
"issue": "",
"pages": "24--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bingel, J., Barrett, M., and Klerke, S. (2018). Predicting misreadings from gaze in children with reading difficul- ties. In Proceedings of the thirteenth workshop on inno- vative use of NLP for building educational applications, pages 24-34.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Abstract linguistic structure correlates with temporal activity during naturalistic comprehension",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Brennan",
"suffix": ""
},
{
"first": "E",
"middle": [
"P"
],
"last": "Stabler",
"suffix": ""
},
{
"first": "S",
"middle": [
"E"
],
"last": "Van Wagenen",
"suffix": ""
},
{
"first": "W.-M",
"middle": [],
"last": "Luh",
"suffix": ""
},
{
"first": "J",
"middle": [
"T"
],
"last": "Hale",
"suffix": ""
}
],
"year": 2016,
"venue": "Brain and Language",
"volume": "157",
"issue": "",
"pages": "81--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brennan, J. R., Stabler, E. P., Van Wagenen, S. E., Luh, W.-M., and Hale, J. T. (2016). Abstract linguistic struc- ture correlates with temporal activity during naturalistic comprehension. Brain and Language, 157:81-94.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Naturalistic sentence comprehension in the brain",
"authors": [
{
"first": "J",
"middle": [],
"last": "Brennan",
"suffix": ""
}
],
"year": 2016,
"venue": "Language and Linguistics Compass",
"volume": "10",
"issue": "7",
"pages": "299--313",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brennan, J. (2016). Naturalistic sentence comprehen- sion in the brain. Language and Linguistics Compass, 10(7):299-313.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Electrophysiological correlates of semantic dissimilarity reflect the comprehension of natural, narrative speech",
"authors": [
{
"first": "M",
"middle": [
"P"
],
"last": "Broderick",
"suffix": ""
},
{
"first": "A",
"middle": [
"J"
],
"last": "Anderson",
"suffix": ""
},
{
"first": "G",
"middle": [
"M"
],
"last": "Di Liberto",
"suffix": ""
},
{
"first": "M",
"middle": [
"J"
],
"last": "Crosse",
"suffix": ""
},
{
"first": "E",
"middle": [
"C"
],
"last": "Lalor",
"suffix": ""
}
],
"year": 2018,
"venue": "Current Biology",
"volume": "28",
"issue": "5",
"pages": "803--809",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Broderick, M. P., Anderson, A. J., Di Liberto, G. M., Crosse, M. J., and Lalor, E. C. (2018). Electrophysi- ological correlates of semantic dissimilarity reflect the comprehension of natural, narrative speech. Current Bi- ology, 28(5):803-809.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A neurocomputational model of the n400 and the p600 in language processing",
"authors": [
{
"first": "H",
"middle": [],
"last": "Brouwer",
"suffix": ""
},
{
"first": "M",
"middle": [
"W"
],
"last": "Crocker",
"suffix": ""
},
{
"first": "N",
"middle": [
"J"
],
"last": "Venhuizen",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Hoeks",
"suffix": ""
}
],
"year": 2017,
"venue": "Cognitive science",
"volume": "41",
"issue": "",
"pages": "1318--1352",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brouwer, H., Crocker, M. W., Venhuizen, N. J., and Hoeks, J. C. (2017). A neurocomputational model of the n400 and the p600 in language processing. Cognitive science, 41:1318-1352.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Speaking, seeing, understanding: Correlating semantic models with conceptual representation in the brain",
"authors": [
{
"first": "L",
"middle": [],
"last": "Bulat",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shutova",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1081--1091",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bulat, L., Clark, S., and Shutova, E. (2017). Speaking, see- ing, understanding: Correlating semantic models with conceptual representation in the brain. In Proceedings of the 2017 Conference on Empirical Methods in Natu- ral Language Processing, pages 1081-1091. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Semantics derived automatically from language corpora contain human-like biases",
"authors": [
{
"first": "A",
"middle": [],
"last": "Caliskan",
"suffix": ""
},
{
"first": "J",
"middle": [
"J"
],
"last": "Bryson",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2017,
"venue": "Science",
"volume": "356",
"issue": "6334",
"pages": "183--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caliskan, A., Bryson, J. J., and Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183- 186.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Towards using EEG to improve ASR accuracy",
"authors": [
{
"first": "Y.-N",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "K.-M",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mostow",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "382--385",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, Y.-N., Chang, K.-M., and Mostow, J. (2012). To- wards using EEG to improve ASR accuracy. In Pro- ceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguis- tics: Human Language Technologies, pages 382-385. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Leveraging annotators' gaze behaviour for coreference resolution",
"authors": [
{
"first": "J",
"middle": [],
"last": "Cheri",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 7th Workshop on Cognitive Aspects of Computational Language Learning",
"volume": "",
"issue": "",
"pages": "22--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cheri, J., Mishra, A., and Bhattacharyya, P. (2016). Lever- aging annotators' gaze behaviour for coreference resolu- tion. In Proceedings of the 7th Workshop on Cognitive Aspects of Computational Language Learning, pages 22-26.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Presenting GECO: An eyetracking corpus of monolingual and bilingual sentence reading",
"authors": [
{
"first": "U",
"middle": [],
"last": "Cop",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Dirix",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Drieghe",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Duyck",
"suffix": ""
}
],
"year": 2017,
"venue": "Behavior Research Methods",
"volume": "49",
"issue": "2",
"pages": "602--615",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cop, U., Dirix, N., Drieghe, D., and Duyck, W. (2017). Presenting GECO: An eyetracking corpus of monolin- gual and bilingual sentence reading. Behavior Research Methods, 49(2):602-615.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Deep learning for electroencephalogram (EEG) classification tasks: a review",
"authors": [
{
"first": "A",
"middle": [],
"last": "Craik",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "J",
"middle": [
"L"
],
"last": "Contreras-Vidal",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of neural engineering",
"volume": "16",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Craik, A., He, Y., and Contreras-Vidal, J. L. (2019). Deep learning for electroencephalogram (EEG) classifi- cation tasks: a review. Journal of neural engineering, 16(3):031001.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Is the low-cost EyeTribe eye tracker any good for research?",
"authors": [
{
"first": "E",
"middle": [],
"last": "Dalmaijer",
"suffix": ""
}
],
"year": 2014,
"venue": "PeerJ PrePrints",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dalmaijer, E. (2014). Is the low-cost EyeTribe eye tracker any good for research? Technical report, PeerJ PrePrints.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Decoding the neural representation of story meanings across languages",
"authors": [
{
"first": "M",
"middle": [],
"last": "Dehghani",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Boghrati",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Man",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hoover",
"suffix": ""
},
{
"first": "S",
"middle": [
"I"
],
"last": "Gimbel",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Zevin",
"suffix": ""
},
{
"first": "M",
"middle": [
"H"
],
"last": "Immordino-Yang",
"suffix": ""
},
{
"first": "A",
"middle": [
"S"
],
"last": "Gordon",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Damasio",
"suffix": ""
}
],
"year": 2017,
"venue": "Human brain mapping",
"volume": "38",
"issue": "12",
"pages": "6096--6106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dehghani, M., Boghrati, R., Man, K., Hoover, J., Gimbel, S. I., Vaswani, A., Zevin, J. D., Immordino-Yang, M. H., Gordon, A. S., Damasio, A., et al. (2017). Decoding the neural representation of story meanings across lan- guages. Human brain mapping, 38(12):6096-6106.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Data from eye-tracking corpora as evidence for theories of syntactic processing complexity",
"authors": [
{
"first": "V",
"middle": [],
"last": "Demberg",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2008,
"venue": "Cognition",
"volume": "109",
"issue": "2",
"pages": "193--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Demberg, V. and Keller, F. (2008). Data from eye-tracking corpora as evidence for theories of syntactic processing complexity. Cognition, 109(2):193-210.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "M.-W",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). BERT: Pre-training of deep bidirectional trans- formers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Pa- pers), pages 4171-4186.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Coregistration of eye movements and EEG in natural reading: analyses and review",
"authors": [
{
"first": "O",
"middle": [],
"last": "Dimigen",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Sommer",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Hohlfeld",
"suffix": ""
},
{
"first": "A",
"middle": [
"M"
],
"last": "Jacobs",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kliegl",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Experimental Psychology: General",
"volume": "140",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dimigen, O., Sommer, W., Hohlfeld, A., Jacobs, A. M., and Kliegl, R. (2011). Coregistration of eye movements and EEG in natural reading: analyses and review. Journal of Experimental Psychology: General, 140(4):552.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "The hitchhiker's guide to testing statistical significance in natural language processing",
"authors": [
{
"first": "R",
"middle": [],
"last": "Dror",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Baumer",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Shlomov",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Reichart",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1383--1392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dror, R., Baumer, G., Shlomov, S., and Reichart, R. (2018). The hitchhiker's guide to testing statistical sig- nificance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 1383-1392.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Uses of eye-tracking data in second language sentence processing research",
"authors": [
{
"first": "P",
"middle": [
"E"
],
"last": "Dussias",
"suffix": ""
}
],
"year": 2010,
"venue": "Annual Review of Applied Linguistics",
"volume": "30",
"issue": "",
"pages": "149--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dussias, P. E. (2010). Uses of eye-tracking data in second language sentence processing research. Annual Review of Applied Linguistics, 30:149-166.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Unfold: an integrated toolbox for overlap correction, non-linear modeling, and regression-based eeg analysis",
"authors": [
{
"first": "B",
"middle": [
"V"
],
"last": "Ehinger",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Dimigen",
"suffix": ""
}
],
"year": 2019,
"venue": "PeerJ",
"volume": "7",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ehinger, B. V. and Dimigen, O. (2019). Unfold: an inte- grated toolbox for overlap correction, non-linear model- ing, and regression-based eeg analysis. PeerJ, 7:e7838.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Predicting term-relevance from brain signals",
"authors": [
{
"first": "M",
"middle": [
"J"
],
"last": "Eugster",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ruotsalo",
"suffix": ""
},
{
"first": "M",
"middle": [
"M"
],
"last": "Spap\u00e9",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Kosunen",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Barral",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Ravaja",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Jacucci",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Kaski",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 37th international ACM SIGIR conference on Research & development in information retrieval",
"volume": "",
"issue": "",
"pages": "425--434",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugster, M. J., Ruotsalo, T., Spap\u00e9, M. M., Kosunen, I., Barral, O., Ravaja, N., Jacucci, G., and Kaski, S. (2014). Predicting term-relevance from brain signals. In Proceedings of the 37th international ACM SIGIR con- ference on Research & development in information re- trieval, pages 425-434. ACM.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Racial bias in empathy: Do we process dark-and fair-colored hands in pain differently? An EEG study",
"authors": [
{
"first": "S",
"middle": [],
"last": "Fabi",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Leuthold",
"suffix": ""
}
],
"year": 2018,
"venue": "Neuropsychologia",
"volume": "114",
"issue": "",
"pages": "143--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabi, S. and Leuthold, H. (2018). Racial bias in empathy: Do we process dark-and fair-colored hands in pain differ- ently? An EEG study. Neuropsychologia, 114:143-157.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Improving vector space word representations using multilingual correlation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "462--471",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Faruqui, M. and Dyer, C. (2014). Improving vector space word representations using multilingual correlation. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguis- tics, pages 462-471.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "New method for fMRI investigations of language: defining ROIs functionally in individual subjects",
"authors": [
{
"first": "E",
"middle": [],
"last": "Fedorenko",
"suffix": ""
},
{
"first": "P.-J",
"middle": [],
"last": "Hsieh",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Nieto-Casta\u00f1\u00f3n",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Whitfield-Gabrieli",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Kanwisher",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of neurophysiology",
"volume": "104",
"issue": "2",
"pages": "1177--1194",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fedorenko, E., Hsieh, P.-J., Nieto-Casta\u00f1\u00f3n, A., Whitfield- Gabrieli, S., and Kanwisher, N. (2010). New method for fMRI investigations of language: defining ROIs func- tionally in individual subjects. Journal of neurophysiol- ogy, 104(2):1177-1194.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Reading time data for evaluating broad-coverage models of English sentence processing",
"authors": [
{
"first": "S",
"middle": [
"L"
],
"last": "Frank",
"suffix": ""
},
{
"first": "I",
"middle": [
"F"
],
"last": "Monsalve",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Thompson",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Vigliocco",
"suffix": ""
}
],
"year": 2013,
"venue": "Behavior Research Methods",
"volume": "45",
"issue": "4",
"pages": "1182--1190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank, S. L., Monsalve, I. F., Thompson, R. L., and Vigliocco, G. (2013). Reading time data for evaluating broad-coverage models of English sentence processing. Behavior Research Methods, 45(4):1182-1190.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "The ERP response to the amount of information conveyed by words in sentences",
"authors": [
{
"first": "S",
"middle": [
"L"
],
"last": "Frank",
"suffix": ""
},
{
"first": "L",
"middle": [
"J"
],
"last": "Otten",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Galli",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Vigliocco",
"suffix": ""
}
],
"year": 2015,
"venue": "Brain and language",
"volume": "140",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank, S. L., Otten, L. J., Galli, G., and Vigliocco, G. (2015). The ERP response to the amount of information conveyed by words in sentences. Brain and language, 140:1-11.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Which eye tracker is right for your research? Performance evaluation of several cost variant eye trackers",
"authors": [
{
"first": "G",
"middle": [],
"last": "Funke",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Greenlee",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Carter",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Dukes",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Menke",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Human Factors and Ergonomics Society Annual Meeting",
"volume": "60",
"issue": "",
"pages": "1240--1244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Funke, G., Greenlee, E., Carter, M., Dukes, A., Brown, R., and Menke, L. (2016). Which eye tracker is right for your research? Performance evaluation of several cost variant eye trackers. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol- ume 60, pages 1240-1244. SAGE Publications Sage CA: Los Angeles, CA.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "The Natural Stories Corpus",
"authors": [
{
"first": "R",
"middle": [],
"last": "Futrell",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Gibson",
"suffix": ""
},
{
"first": "H",
"middle": [
"J"
],
"last": "Tily",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Blank",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Vishnevetsky",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Piantadosi",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Fedorenko",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Futrell, R., Gibson, E., Tily, H. J., Blank, I., Vishnevetsky, A., Piantadosi, S., and Fedorenko, E. (2018). The Natu- ral Stories Corpus. In Proceedings of the Eleventh Inter- national Conference on Language Resources and Evalu- ation (LREC-2018).",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Interpretable semantic vectors from a joint model of brain-and text-based meaning",
"authors": [
{
"first": "A",
"middle": [],
"last": "Fyshe",
"suffix": ""
},
{
"first": "P",
"middle": [
"P"
],
"last": "Talukdar",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "T",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the conference. Association for Computational Linguistics. Meeting",
"volume": "2014",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fyshe, A., Talukdar, P. P., Murphy, B., and Mitchell, T. M. (2014). Interpretable semantic vectors from a joint model of brain-and text-based meaning. In Proceedings of the conference. Association for Computational Lin- guistics. Meeting, volume 2014, page 489. NIH Public Access.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Does the brain represent words? An evaluation of brain decoding studies of language understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Gauthier",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ivanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1806.00591"
]
},
"num": null,
"urls": [],
"raw_text": "Gauthier, J. and Ivanova, A. (2018). Does the brain represent words? An evaluation of brain decod- ing studies of language understanding. arXiv preprint arXiv:1806.00591.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Linking artificial and human neural representations of language",
"authors": [
{
"first": "J",
"middle": [],
"last": "Gauthier",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "529--539",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gauthier, J. and Levy, R. (2019). Linking artificial and hu- man neural representations of language. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP- IJCNLP), pages 529-539.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Evaluation of the Tobii EyeX eye tracking controller and Matlab toolkit for research. Behavior research methods",
"authors": [
{
"first": "A",
"middle": [],
"last": "Gibaldi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Vanegas",
"suffix": ""
},
{
"first": "P",
"middle": [
"J"
],
"last": "Bex",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Maiello",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "49",
"issue": "",
"pages": "923--946",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gibaldi, A., Vanegas, M., Bex, P. J., and Maiello, G. (2017). Evaluation of the Tobii EyeX eye tracking con- troller and Matlab toolkit for research. Behavior re- search methods, 49(3):923-946.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Using gaze to predict text readability",
"authors": [
{
"first": "A",
"middle": [
"V"
],
"last": "Gonz\u00e1lez-Gardu\u00f1o",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "438--443",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gonz\u00e1lez-Gardu\u00f1o, A. V. and S\u00f8gaard, A. (2017). Using gaze to predict text readability. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 438-443.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Finding syntax in human encephalography with beam search",
"authors": [
{
"first": "J",
"middle": [],
"last": "Hale",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kuncoro",
"suffix": ""
},
{
"first": "J",
"middle": [
"R"
],
"last": "Brennan",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2727--2736",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hale, J., Dyer, C., Kuncoro, A., and Brennan, J. R. (2018). Finding syntax in human encephalography with beam search. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2727-2736.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "The revolution will not be controlled: Natural stimuli in speech neuroscience. Language",
"authors": [
{
"first": "L",
"middle": [
"S"
],
"last": "Hamilton",
"suffix": ""
},
{
"first": "A",
"middle": [
"G"
],
"last": "Huth",
"suffix": ""
}
],
"year": 2018,
"venue": "Cognition and Neuroscience",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hamilton, L. S. and Huth, A. G. (2018). The revolution will not be controlled: Natural stimuli in speech neuro- science. Language, Cognition and Neuroscience, pages 1-10.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Effects of word length and frequency on the human event-related potential",
"authors": [
{
"first": "O",
"middle": [],
"last": "Hauk",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pulverm\u00fcller",
"suffix": ""
}
],
"year": 2004,
"venue": "Clinical Neurophysiology",
"volume": "115",
"issue": "5",
"pages": "1090--1103",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hauk, O. and Pulverm\u00fcller, F. (2004). Effects of word length and frequency on the human event-related poten- tial. Clinical Neurophysiology, 115(5):1090-1103.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Co-registration of eye movements and event-related potentials in connected-text paragraph reading",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Henderson",
"suffix": ""
},
{
"first": "S",
"middle": [
"G"
],
"last": "Luke",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Schmidt",
"suffix": ""
},
{
"first": "J",
"middle": [
"E"
],
"last": "Richards",
"suffix": ""
}
],
"year": 2013,
"venue": "Frontiers in systems neuroscience",
"volume": "7",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henderson, J. M., Luke, S. G., Schmidt, J., and Richards, J. E. (2013). Co-registration of eye movements and event-related potentials in connected-text paragraph reading. Frontiers in systems neuroscience, 7:28.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Neural correlates of fixation duration in natural reading: Evidence from fixation-related fMRI. Neu-roImage",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Henderson",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "S",
"middle": [
"G"
],
"last": "Luke",
"suffix": ""
},
{
"first": "R",
"middle": [
"H"
],
"last": "Desai",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "119",
"issue": "",
"pages": "390--397",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henderson, J. M., Choi, W., Luke, S. G., and Desai, R. H. (2015). Neural correlates of fixation duration in natu- ral reading: Evidence from fixation-related fMRI. Neu- roImage, 119:390-397.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Language structure in the brain: A fixation-related fMRI study of syntactic surprisal in reading",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Henderson",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "M",
"middle": [
"W"
],
"last": "Lowder",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Ferreira",
"suffix": ""
}
],
"year": 2016,
"venue": "Neuroimage",
"volume": "132",
"issue": "",
"pages": "293--300",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henderson, J. M., Choi, W., Lowder, M. W., and Fer- reira, F. (2016). Language structure in the brain: A fixation-related fMRI study of syntactic surprisal in read- ing. Neuroimage, 132:293-300.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "The weirdest people in the world? Behavioral and brain sciences",
"authors": [
{
"first": "J",
"middle": [],
"last": "Henrich",
"suffix": ""
},
{
"first": "S",
"middle": [
"J"
],
"last": "Heine",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Norenzayan",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "33",
"issue": "",
"pages": "61--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henrich, J., Heine, S. J., and Norenzayan, A. (2010). The weirdest people in the world? Behavioral and brain sci- ences, 33(2-3):61-83.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Sex differences and the own-gender bias in face recognition: a meta-analytic review",
"authors": [
{
"first": "A",
"middle": [],
"last": "Herlitz",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lov\u00e9n",
"suffix": ""
}
],
"year": 2013,
"venue": "Visual Cognition",
"volume": "21",
"issue": "9",
"pages": "1306--1336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herlitz, A. and Lov\u00e9n, J. (2013). Sex differences and the own-gender bias in face recognition: a meta-analytic re- view. Visual Cognition, 21(9-10):1306-1336.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Entity recognition at first sight: Improving NER with eye movement information",
"authors": [
{
"first": "N",
"middle": [],
"last": "Hollenstein",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hollenstein, N. and Zhang, C. (2019). Entity recognition at first sight: Improving NER with eye movement informa- tion. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers).",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "ZuCo, a simultaneous EEG and eye-tracking resource for natural sentence reading",
"authors": [
{
"first": "N",
"middle": [],
"last": "Hollenstein",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Rotsztejn",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Troendle",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Pedroni",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Langer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hollenstein, N., Rotsztejn, J., Troendle, M., Pedroni, A., Zhang, C., and Langer, N. (2018). ZuCo, a simultane- ous EEG and eye-tracking resource for natural sentence reading. Scientific Data.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Advancing NLP with cognitive language processing signals",
"authors": [
{
"first": "N",
"middle": [],
"last": "Hollenstein",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Troendle",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Bigiolli",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Langer",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXivpreprintarXiv:1904.02682"
]
},
"num": null,
"urls": [],
"raw_text": "Hollenstein, N., Barrett, M., Troendle, M., Bigiolli, F., Langer, N., and Zhang, C. (2019a). Advancing NLP with cognitive language processing signals. In arXiv preprint arXiv:1904.02682.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "CogniVal: A framework for cognitive word embedding evaluation",
"authors": [
{
"first": "N",
"middle": [],
"last": "Hollenstein",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "De La Torre",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Langer",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 23nd Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hollenstein, N., de la Torre, A., Langer, N., and Zhang, C. (2019b). CogniVal: A framework for cognitive word embedding evaluation. In Proceedings of the 23nd Con- ference on Computational Natural Language Learning.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Zuco 2.0: A dataset of physiological recordings during natural reading and annotation",
"authors": [
{
"first": "N",
"middle": [],
"last": "Hollenstein",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Troendle",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Langer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.00903"
]
},
"num": null,
"urls": [],
"raw_text": "Hollenstein, N., Troendle, M., Zhang, C., and Langer, N. (2019c). Zuco 2.0: A dataset of physiological recordings during natural reading and annotation. arXiv preprint arXiv:1912.00903.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "The social impact of natural language processing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "S",
"middle": [
"L"
],
"last": "Spruit",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "591--598",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hovy, D. and Spruit, S. L. (2016). The social impact of nat- ural language processing. In Proceedings of the 54th An- nual Meeting of the Association for Computational Lin- guistics (Volume 2: Short Papers), pages 591-598.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Modeling task fmri data via deep convolutional autoencoder",
"authors": [
{
"first": "H",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Makkie",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "IEEE transactions on medical imaging",
"volume": "37",
"issue": "7",
"pages": "1551--1561",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, H., Hu, X., Zhao, Y., Makkie, M., Dong, Q., Zhao, S., Guo, L., and Liu, T. (2017). Modeling task fmri data via deep convolutional autoencoder. IEEE transactions on medical imaging, 37(7):1551-1561.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Natural speech reveals the semantic maps that tile human cerebral cortex",
"authors": [
{
"first": "A",
"middle": [
"G"
],
"last": "Huth",
"suffix": ""
},
{
"first": "W",
"middle": [
"A"
],
"last": "De Heer",
"suffix": ""
},
{
"first": "T",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "F",
"middle": [
"E"
],
"last": "Theunissen",
"suffix": ""
},
{
"first": "J",
"middle": [
"L"
],
"last": "Gallant",
"suffix": ""
}
],
"year": 2016,
"venue": "Nature",
"volume": "532",
"issue": "7600",
"pages": "453--458",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huth, A. G., de Heer, W. A., Griffiths, T. L., Theunissen, F. E., and Gallant, J. L. (2016). Natural speech reveals the semantic maps that tile human cerebral cortex. Na- ture, 532(7600):453-458.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "A theory of reading: From eye fixations to comprehension",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Just",
"suffix": ""
},
{
"first": "P",
"middle": [
"A"
],
"last": "Carpenter",
"suffix": ""
}
],
"year": 1980,
"venue": "Psychological review",
"volume": "87",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Just, M. A. and Carpenter, P. A. (1980). A theory of read- ing: From eye fixations to comprehension. Psychologi- cal review, 87(4):329.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "From story comprehension to the neurobiology of language",
"authors": [
{
"first": "K",
"middle": [
"D"
],
"last": "Kandylaki",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Bornkessel-Schlesewsky",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kandylaki, K. D. and Bornkessel-Schlesewsky, I. (2019). From story comprehension to the neurobiology of lan- guage.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "At a glance: The impact of gaze aggregation views on syntactic tagging",
"authors": [
{
"first": "S",
"middle": [],
"last": "Klerke",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Plank",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Beyond Vision and LANguage: inTEgrating Real-world kNowledge (LANTERN)",
"volume": "",
"issue": "",
"pages": "51--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klerke, S. and Plank, B. (2019). At a glance: The impact of gaze aggregation views on syntactic tagging. In Pro- ceedings of the Beyond Vision and LANguage: inTEgrat- ing Real-world kNowledge (LANTERN), pages 51-61.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "Improving sentence compression by learning to predict gaze",
"authors": [
{
"first": "S",
"middle": [],
"last": "Klerke",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1528--1533",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klerke, S., Goldberg, Y., and S\u00f8gaard, A. (2016). Improv- ing sentence compression by learning to predict gaze. In Proceedings of the 2016 Conference of the North Amer- ican Chapter of the Association for Computational Lin- guistics: Human Language Technologies, pages 1528- 1533.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "Substantiating reading teachers with scanpaths",
"authors": [
{
"first": "S",
"middle": [],
"last": "Klerke",
"suffix": ""
},
{
"first": "J",
"middle": [
"A"
],
"last": "Madsen",
"suffix": ""
},
{
"first": "E",
"middle": [
"J"
],
"last": "Jacobsen",
"suffix": ""
},
{
"first": "J",
"middle": [
"P"
],
"last": "Hansen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications",
"volume": "",
"issue": "",
"pages": "1--3",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klerke, S., Madsen, J. A., Jacobsen, E. J., and Hansen, J. P. (2018). Substantiating reading teachers with scan- paths. In Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications, pages 1-3.",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "Alpha-band oscillations, attention, and controlled access to stored information",
"authors": [
{
"first": "W",
"middle": [],
"last": "Klimesch",
"suffix": ""
}
],
"year": 2012,
"venue": "Trends in cognitive sciences",
"volume": "16",
"issue": "12",
"pages": "606--617",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klimesch, W. (2012). Alpha-band oscillations, attention, and controlled access to stored information. Trends in cognitive sciences, 16(12):606-617.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "Information-based functional brain mapping",
"authors": [
{
"first": "N",
"middle": [],
"last": "Kriegeskorte",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Goebel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Bandettini",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "103",
"issue": "10",
"pages": "3863--3868",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kriegeskorte, N., Goebel, R., and Bandettini, P. (2006). Information-based functional brain mapping. Proceedings of the National Academy of Sciences, 103(10):3863-3868.",
"links": null
},
"BIBREF70": {
"ref_id": "b70",
"title": "Towards inferring language expertise using eye tracking",
"authors": [
{
"first": "K",
"middle": [],
"last": "Kunze",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Kawaichi",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Yoshimura",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Kise",
"suffix": ""
}
],
"year": 2013,
"venue": "CHI'13 Extended Abstracts on Human Factors in Computing Systems",
"volume": "",
"issue": "",
"pages": "217--222",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kunze, K., Kawaichi, H., Yoshimura, K., and Kise, K. (2013). Towards inferring language expertise using eye tracking. In CHI'13 Extended Abstracts on Human Fac- tors in Computing Systems, pages 217-222. ACM.",
"links": null
},
"BIBREF71": {
"ref_id": "b71",
"title": "Electrophysiology reveals semantic memory use in language comprehension",
"authors": [
{
"first": "M",
"middle": [],
"last": "Kutas",
"suffix": ""
},
{
"first": "K",
"middle": [
"D"
],
"last": "Federmeier",
"suffix": ""
}
],
"year": 2000,
"venue": "Trends in cognitive sciences",
"volume": "4",
"issue": "12",
"pages": "463--470",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kutas, M. and Federmeier, K. D. (2000). Electrophysiol- ogy reveals semantic memory use in language compre- hension. Trends in cognitive sciences, 4(12):463-470.",
"links": null
},
"BIBREF72": {
"ref_id": "b72",
"title": "The evaluation of preprocessing choices in single-subject bold fmri using npairs performance metrics",
"authors": [
{
"first": "S",
"middle": [],
"last": "Laconte",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Muley",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ashe",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Frutiger",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Rehm",
"suffix": ""
},
{
"first": "L",
"middle": [
"K"
],
"last": "Hansen",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Yacoub",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Rottenberg",
"suffix": ""
}
],
"year": 2003,
"venue": "NeuroImage",
"volume": "18",
"issue": "1",
"pages": "10--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "LaConte, S., Anderson, J., Muley, S., Ashe, J., Frutiger, S., Rehm, K., Hansen, L. K., Yacoub, E., Hu, X., Rotten- berg, D., et al. (2003). The evaluation of preprocessing choices in single-subject bold fmri using npairs perfor- mance metrics. NeuroImage, 18(1):10-27.",
"links": null
},
"BIBREF73": {
"ref_id": "b73",
"title": "Expectation-based syntactic comprehension",
"authors": [
{
"first": "R",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2008,
"venue": "Cognition",
"volume": "106",
"issue": "3",
"pages": "1126--1177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Levy, R. (2008). Expectation-based syntactic comprehen- sion. Cognition, 106(3):1126-1177.",
"links": null
},
"BIBREF74": {
"ref_id": "b74",
"title": "Emotion classification based on gamma-band EEG",
"authors": [
{
"first": "M",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "B.-L",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2009,
"venue": "EMBC 2009. Annual International Conference of the IEEE",
"volume": "",
"issue": "",
"pages": "1223--1226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, M. and Lu, B.-L. (2009). Emotion classification based on gamma-band EEG. In Engineering in Medicine and Biology Society, 2009. EMBC 2009. Annual Interna- tional Conference of the IEEE, pages 1223-1226. IEEE.",
"links": null
},
"BIBREF75": {
"ref_id": "b75",
"title": "The role of syntax during pronoun resolution: Evidence from fMRI",
"authors": [
{
"first": "J",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Fabre",
"suffix": ""
},
{
"first": "W.-M",
"middle": [],
"last": "Luh",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hale",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eight Workshop on Cognitive Aspects of Computational Language Learning and Processing",
"volume": "",
"issue": "",
"pages": "56--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, J., Fabre, M., Luh, W.-M., and Hale, J. (2018). The role of syntax during pronoun resolution: Evidence from fMRI. In Proceedings of the Eight Workshop on Cogni- tive Aspects of Computational Language Learning and Processing, pages 56-64.",
"links": null
},
"BIBREF76": {
"ref_id": "b76",
"title": "Improving attention model based on cognition grounded data for sentiment analysis",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Long",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "C.-R",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Transactions on Affective Computing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Long, Y., Xiang, R., Lu, Q., Huang, C.-R., and Li, M. (2019). Improving attention model based on cognition grounded data for sentiment analysis. IEEE Transac- tions on Affective Computing.",
"links": null
},
"BIBREF77": {
"ref_id": "b77",
"title": "Dependency parsing with your eyes: Dependency structure predicts eye regressions during reading",
"authors": [
{
"first": "A",
"middle": [],
"last": "Lopopolo",
"suffix": ""
},
{
"first": "S",
"middle": [
"L"
],
"last": "Frank",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Van Den Bosch",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Willems",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "77--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lopopolo, A., Frank, S. L., van den Bosch, A., and Willems, R. (2019). Dependency parsing with your eyes: Dependency structure predicts eye regressions dur- ing reading. In Proceedings of the Workshop on Cogni- tive Modeling and Computational Linguistics, pages 77- 85.",
"links": null
},
"BIBREF78": {
"ref_id": "b78",
"title": "Language separation in bidialectal speakers: Evidence from eye tracking",
"authors": [
{
"first": "B",
"middle": [],
"last": "Lundquist",
"suffix": ""
},
{
"first": "\u00d8",
"middle": [
"A"
],
"last": "Vangsnes",
"suffix": ""
}
],
"year": 2018,
"venue": "Frontiers in psychology",
"volume": "9",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lundquist, B. and Vangsnes, \u00d8. A. (2018). Language sep- aration in bidialectal speakers: Evidence from eye track- ing. Frontiers in psychology, 9:1394.",
"links": null
},
"BIBREF79": {
"ref_id": "b79",
"title": "Shared and separate systems in bilingual language processing: Converging evidence from eyetracking and brain imaging",
"authors": [
{
"first": "V",
"middle": [],
"last": "Marian",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Spivey",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hirsch",
"suffix": ""
}
],
"year": 2003,
"venue": "Brain and language",
"volume": "86",
"issue": "1",
"pages": "70--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marian, V., Spivey, M., and Hirsch, J. (2003). Shared and separate systems in bilingual language processing: Con- verging evidence from eyetracking and brain imaging. Brain and language, 86(1):70-82.",
"links": null
},
"BIBREF80": {
"ref_id": "b80",
"title": "With blinkers on: Robust prediction of eye movements across readers",
"authors": [
{
"first": "F",
"middle": [],
"last": "Matthies",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "803--807",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthies, F. and S\u00f8gaard, A. (2013). With blinkers on: Robust prediction of eye movements across readers. Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 803-807.",
"links": null
},
"BIBREF81": {
"ref_id": "b81",
"title": "Learning neural representations of human cognition across many fMRI studies",
"authors": [
{
"first": "A",
"middle": [],
"last": "Mensch",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mairal",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Bzdok",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Varoquaux",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5883--5893",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mensch, A., Mairal, J., Bzdok, D., Thirion, B., and Varoquaux, G. (2017). Learning neural representations of human cognition across many fMRI studies. In Advances in Neural Information Processing Systems, pages 5883-5893.",
"links": null
},
"BIBREF82": {
"ref_id": "b82",
"title": "Total variation regularization for fMRI-based prediction of behavior",
"authors": [
{
"first": "V",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Eger",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Thirion",
"suffix": ""
}
],
"year": 2011,
"venue": "IEEE transactions on medical imaging",
"volume": "30",
"issue": "7",
"pages": "1328--1340",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel, V., Gramfort, A., Varoquaux, G., Eger, E., and Thirion, B. (2011). Total variation regularization for fMRI-based prediction of behavior. IEEE transactions on medical imaging, 30(7):1328-1340.",
"links": null
},
"BIBREF83": {
"ref_id": "b83",
"title": "Characterizing the hemodynamic response: effects of presentation rate, sampling procedure, and the possibility of ordering brain activity based on relative timing",
"authors": [
{
"first": "F",
"middle": [
"M"
],
"last": "Miezin",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Maccotta",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ollinger",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Petersen",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Buckner",
"suffix": ""
}
],
"year": 2000,
"venue": "Neuroimage",
"volume": "11",
"issue": "6",
"pages": "735--759",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miezin, F. M., Maccotta, L., Ollinger, J., Petersen, S., and Buckner, R. (2000). Characterizing the hemodynamic response: effects of presentation rate, sampling procedure, and the possibility of ordering brain activity based on relative timing. Neuroimage, 11(6):735-759.",
"links": null
},
"BIBREF84": {
"ref_id": "b84",
"title": "From brain space to distributional space: the perilous journeys of fMRI decoding",
"authors": [
{
"first": "G",
"middle": [],
"last": "Minnema",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Herbelot",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop",
"volume": "",
"issue": "",
"pages": "155--161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minnema, G. and Herbelot, A. (2019). From brain space to distributional space: the perilous journeys of fMRI decoding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 155-161.",
"links": null
},
"BIBREF85": {
"ref_id": "b85",
"title": "Harnessing cognitive features for sarcasm detection",
"authors": [
{
"first": "A",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Kanojia",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Nagar",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Dey",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1095--1104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mishra, A., Kanojia, D., Nagar, S., Dey, K., and Bhattacharyya, P. (2016). Harnessing cognitive features for sarcasm detection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1095-1104.",
"links": null
},
"BIBREF86": {
"ref_id": "b86",
"title": "Learning cognitive features from gaze data for sentiment and sarcasm classification using convolutional neural network",
"authors": [
{
"first": "A",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Dey",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "377--387",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mishra, A., Dey, K., and Bhattacharyya, P. (2017a). Learning cognitive features from gaze data for sentiment and sarcasm classification using convolutional neural network. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 377-387. ACL.",
"links": null
},
"BIBREF87": {
"ref_id": "b87",
"title": "Leveraging cognitive features for sentiment analysis",
"authors": [
{
"first": "A",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Kanojia",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Nagar",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Dey",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of The 20th Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "156--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mishra, A., Kanojia, D., Nagar, S., Dey, K., and Bhattacharyya, P. (2017b). Leveraging cognitive features for sentiment analysis. Proceedings of The 20th Conference on Computational Natural Language Learning, pages 156-166.",
"links": null
},
"BIBREF88": {
"ref_id": "b88",
"title": "Predicting human brain activity associated with the meanings of nouns",
"authors": [
{
"first": "T",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
},
{
"first": "S",
"middle": [
"V"
],
"last": "Shinkareva",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "K.-M",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "V",
"middle": [
"L"
],
"last": "Malave",
"suffix": ""
},
{
"first": "R",
"middle": [
"A"
],
"last": "Mason",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Just",
"suffix": ""
}
],
"year": 2008,
"venue": "Science",
"volume": "320",
"issue": "5880",
"pages": "1191--1195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell, T. M., Shinkareva, S. V., Carlson, A., Chang, K.-M., Malave, V. L., Mason, R. A., and Just, M. A. (2008). Predicting human brain activity associated with the meanings of nouns. Science, 320(5880):1191-1195.",
"links": null
},
"BIBREF89": {
"ref_id": "b89",
"title": "Simultaneous EEG and fMRI: towards the characterization of structure and dynamics of brain networks",
"authors": [
{
"first": "C",
"middle": [],
"last": "Mulert",
"suffix": ""
}
],
"year": 2013,
"venue": "Dialogues in clinical neuroscience",
"volume": "15",
"issue": "3",
"pages": "381",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mulert, C. (2013). Simultaneous EEG and fMRI: towards the characterization of structure and dynamics of brain networks. Dialogues in clinical neuroscience, 15(3):381.",
"links": null
},
"BIBREF90": {
"ref_id": "b90",
"title": "Detecting semantic category in simultaneous EEG/MEG recordings",
"authors": [
{
"first": "B",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAACL HLT 2010 First Workshop on Computational Neurolinguistics",
"volume": "",
"issue": "",
"pages": "36--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Murphy, B. and Poesio, M. (2010). Detecting semantic category in simultaneous EEG/MEG recordings. In Proceedings of the NAACL HLT 2010 First Workshop on Computational Neurolinguistics, pages 36-44. Association for Computational Linguistics.",
"links": null
},
"BIBREF91": {
"ref_id": "b91",
"title": "WebGazer: Scalable webcam eye tracking using user interactions",
"authors": [
{
"first": "A",
"middle": [],
"last": "Papoutsaki",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Sangkloy",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Laskey",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Daskalova",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hays",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence-IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Papoutsaki, A., Sangkloy, P., Laskey, J., Daskalova, N., Huang, J., and Hays, J. (2016). WebGazer: Scalable webcam eye tracking using user interactions. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence-IJCAI 2016.",
"links": null
},
"BIBREF92": {
"ref_id": "b92",
"title": "Jointly predicting arousal, valence and dominance with multi-task learning",
"authors": [
{
"first": "S",
"middle": [],
"last": "Parthasarathy",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Busso",
"suffix": ""
}
],
"year": 2017,
"venue": "Interspeech",
"volume": "",
"issue": "",
"pages": "1103--1107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parthasarathy, S. and Busso, C. (2017). Jointly predicting arousal, valence and dominance with multi-task learning. In Interspeech, pages 1103-1107.",
"links": null
},
"BIBREF93": {
"ref_id": "b93",
"title": "Automagic: Standardized preprocessing of big EEG data",
"authors": [
{
"first": "A",
"middle": [],
"last": "Pedroni",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bahreini",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Langer",
"suffix": ""
}
],
"year": 2019,
"venue": "NeuroImage",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pedroni, A., Bahreini, A., and Langer, N. (2019). Automagic: Standardized preprocessing of big EEG data. NeuroImage.",
"links": null
},
"BIBREF94": {
"ref_id": "b94",
"title": "Brain processing of native and foreign languages",
"authors": [
{
"first": "D",
"middle": [],
"last": "Perani",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Dehaene",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Grassi",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "S",
"middle": [
"F"
],
"last": "Cappa",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Dupoux",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Fazio",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mehler",
"suffix": ""
}
],
"year": 1996,
"venue": "NeuroReport-International Journal for Rapid Communications of Research in Neuroscience",
"volume": "7",
"issue": "15",
"pages": "2439--2444",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Perani, D., Dehaene, S., Grassi, F., Cohen, L., Cappa, S. F., Dupoux, E., Fazio, F., and Mehler, J. (1996). Brain processing of native and foreign languages. NeuroReport-International Journal for Rapid Communications of Research in Neuroscience, 7(15):2439-2444.",
"links": null
},
"BIBREF95": {
"ref_id": "b95",
"title": "Toward a universal decoder of linguistic meaning from brain activation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Lou",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Pritchett",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "S",
"middle": [
"J"
],
"last": "Gershman",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Kanwisher",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Botvinick",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Fedorenko",
"suffix": ""
}
],
"year": 2018,
"venue": "Nature communications",
"volume": "9",
"issue": "1",
"pages": "963",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pereira, F., Lou, B., Pritchett, B., Ritter, S., Gershman, S. J., Kanwisher, N., Botvinick, M., and Fedorenko, E. (2018). Toward a universal decoder of linguistic meaning from brain activation. Nature communications, 9(1):963.",
"links": null
},
"BIBREF96": {
"ref_id": "b96",
"title": "Eye movements in reading and information processing: 20 years of research",
"authors": [
{
"first": "K",
"middle": [],
"last": "Rayner",
"suffix": ""
}
],
"year": 1998,
"venue": "Psychological bulletin",
"volume": "124",
"issue": "3",
"pages": "372",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological bulletin, 124(3):372.",
"links": null
},
"BIBREF97": {
"ref_id": "b97",
"title": "Predicting brain activation with WordNet embeddings",
"authors": [
{
"first": "J",
"middle": [
"A"
],
"last": "Rodrigues",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Branco",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Silva",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Saedi",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Branco",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eighth Workshop on Cognitive Aspects of Computational Language Learning and Processing",
"volume": "",
"issue": "",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rodrigues, J. A., Branco, R., Silva, J., Saedi, C., and Branco, A. (2018). Predicting brain activation with WordNet embeddings. In Proceedings of the Eighth Workshop on Cognitive Aspects of Computational Language Learning and Processing, pages 1-5.",
"links": null
},
"BIBREF98": {
"ref_id": "b98",
"title": "Using gaze data to predict multiword expressions",
"authors": [
{
"first": "O",
"middle": [],
"last": "Rohanian",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Taslimipoor",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Yaneva",
"suffix": ""
},
{
"first": "L",
"middle": [
"A"
],
"last": "Ha",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "601--609",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rohanian, O., Taslimipoor, S., Yaneva, V., and Ha, L. A. (2017). Using gaze data to predict multiword expressions. In Proceedings of the International Conference Recent Advances in Natural Language Processing, pages 601-609.",
"links": null
},
"BIBREF99": {
"ref_id": "b99",
"title": "A deep autoencoder for near-perfect fMRI encoding",
"authors": [
{
"first": "V",
"middle": [],
"last": "Rowtula",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Oota",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "B",
"middle": [
"R"
],
"last": "Surampudi",
"suffix": ""
}
],
"year": 2018,
"venue": "Workshop on Modeling and Decision-Making in the Spatiotemporal Domain, 32nd Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rowtula, V., Oota, S., Gupta, M., and Surampudi, B. R. (2018). A deep autoencoder for near-perfect fMRI encoding. In Workshop on Modeling and Decision-Making in the Spatiotemporal Domain, 32nd Conference on Neural Information Processing Systems (NeurIPS 2018).",
"links": null
},
"BIBREF100": {
"ref_id": "b100",
"title": "Inducing brain-relevant bias in natural language processing models",
"authors": [
{
"first": "D",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Toneva",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Wehbe",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "14100--14110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schwartz, D., Toneva, M., and Wehbe, L. (2019). Inducing brain-relevant bias in natural language processing models. In Advances in Neural Information Processing Systems, pages 14100-14110.",
"links": null
},
"BIBREF101": {
"ref_id": "b101",
"title": "fMRI reveals language-specific predictive coding during naturalistic sentence comprehension",
"authors": [
{
"first": "C",
"middle": [],
"last": "Shain",
"suffix": ""
},
{
"first": "I",
"middle": [
"A"
],
"last": "Blank",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Van Schijndel",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Schuler",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Fedorenko",
"suffix": ""
}
],
"year": 2019,
"venue": "Neuropsychologia",
"volume": "",
"issue": "",
"pages": "107307",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shain, C., Blank, I. A., van Schijndel, M., Schuler, W., and Fedorenko, E. (2019). fMRI reveals language-specific predictive coding during naturalistic sentence comprehension. Neuropsychologia, page 107307.",
"links": null
},
"BIBREF102": {
"ref_id": "b102",
"title": "Quantifying sentence complexity based on eye-tracking measures",
"authors": [
{
"first": "A",
"middle": [
"D"
],
"last": "Singh",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Mehta",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Husain",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Rajakrishnan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity (CL4LC)",
"volume": "",
"issue": "",
"pages": "202--212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Singh, A. D., Mehta, P., Husain, S., and Rajakrishnan, R. (2016). Quantifying sentence complexity based on eye-tracking measures. In Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity (CL4LC), pages 202-212.",
"links": null
},
"BIBREF103": {
"ref_id": "b103",
"title": "Evaluating word embeddings with fMRI and eye-tracking",
"authors": [
{
"first": "A",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP",
"volume": "",
"issue": "",
"pages": "116--121",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S\u00f8gaard, A. (2016). Evaluating word embeddings with fMRI and eye-tracking. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 116-121.",
"links": null
},
"BIBREF104": {
"ref_id": "b104",
"title": "\"Next Wednesday's meeting has been moved forward two days\": The time-perspective question is ambiguous in Swiss German, but not in Standard German",
"authors": [
{
"first": "K",
"middle": [],
"last": "Stocker",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hartmann",
"suffix": ""
}
],
"year": 2019,
"venue": "Swiss Journal of Psychology",
"volume": "78",
"issue": "1-2",
"pages": "61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stocker, K. and Hartmann, M. (2019). \"Next Wednesday's meeting has been moved forward two days\": The time-perspective question is ambiguous in Swiss German, but not in Standard German. Swiss Journal of Psychology, 78(1-2):61.",
"links": null
},
"BIBREF105": {
"ref_id": "b105",
"title": "Towards making a dependency parser see",
"authors": [
{
"first": "M",
"middle": [],
"last": "Strzyz",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Vilares",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "1500--1506",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Strzyz, M., Vilares, D., and G\u00f3mez-Rodr\u00edguez, C. (2019). Towards making a dependency parser see. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1500-1506.",
"links": null
},
"BIBREF106": {
"ref_id": "b106",
"title": "A short review of ethical challenges in clinical natural language processing",
"authors": [
{
"first": "S",
"middle": [],
"last": "Suster",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Tulkens",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First ACL Workshop on Ethics in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "80--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suster, S., Tulkens, S., and Daelemans, W. (2017). A short review of ethical challenges in clinical natural language processing. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 80-87.",
"links": null
},
"BIBREF107": {
"ref_id": "b107",
"title": "Enhancing image captioning with eye-tracking",
"authors": [
{
"first": "E",
"middle": [],
"last": "Takmaz",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Beinborn",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Pezzelle",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": ""
}
],
"year": 2019,
"venue": "EurNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takmaz, E., Beinborn, L., Pezzelle, S., and Fern\u00e1ndez, R. (2019). Enhancing image captioning with eye-tracking. In EurNLP.",
"links": null
},
"BIBREF108": {
"ref_id": "b108",
"title": "Testing the generalization power of neural network models across NLI benchmarks",
"authors": [
{
"first": "A",
"middle": [],
"last": "Talman",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Chatzikyriakidis",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "85--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Talman, A. and Chatzikyriakidis, S. (2019). Testing the generalization power of neural network models across NLI benchmarks. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 85-94.",
"links": null
},
"BIBREF109": {
"ref_id": "b109",
"title": "Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain)",
"authors": [
{
"first": "M",
"middle": [],
"last": "Toneva",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Wehbe",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "14928--14938",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toneva, M. and Wehbe, L. (2019). Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain). In Advances in Neural Information Processing Systems, pages 14928-14938.",
"links": null
},
"BIBREF110": {
"ref_id": "b110",
"title": "The statistical significance filter leads to overoptimistic expectations of replicability",
"authors": [
{
"first": "S",
"middle": [],
"last": "Vasishth",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Mertzen",
"suffix": ""
},
{
"first": "L",
"middle": [
"A"
],
"last": "J\u00e4ger",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gelman",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Memory and Language",
"volume": "103",
"issue": "",
"pages": "151--175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vasishth, S., Mertzen, D., J\u00e4ger, L. A., and Gelman, A. (2018). The statistical significance filter leads to overoptimistic expectations of replicability. Journal of Memory and Language, 103:151-175.",
"links": null
},
"BIBREF111": {
"ref_id": "b111",
"title": "Mapping between fMRI responses to movies and their natural language annotations",
"authors": [
{
"first": "K",
"middle": [],
"last": "Vodrahalli",
"suffix": ""
},
{
"first": "P.-H",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Baldassano",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Yong",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Honey",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Hasson",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Ramadge",
"suffix": ""
},
{
"first": "K",
"middle": [
"A"
],
"last": "Norman",
"suffix": ""
}
],
"year": 2018,
"venue": "Neuroimage",
"volume": "180",
"issue": "",
"pages": "223--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vodrahalli, K., Chen, P.-H., Liang, Y., Baldassano, C., Chen, J., Yong, E., Honey, C., Hasson, U., Ramadge, P., Norman, K. A., et al. (2018). Mapping between fMRI responses to movies and their natural language annotations. Neuroimage, 180:223-231.",
"links": null
},
"BIBREF112": {
"ref_id": "b112",
"title": "What is the scanpath signature of syntactic reanalysis?",
"authors": [
{
"first": "T",
"middle": [],
"last": "Von Der Malsburg",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Vasishth",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Memory and Language",
"volume": "65",
"issue": "2",
"pages": "109--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Von der Malsburg, T. and Vasishth, S. (2011). What is the scanpath signature of syntactic reanalysis? Journal of Memory and Language, 65(2):109-127.",
"links": null
},
"BIBREF113": {
"ref_id": "b113",
"title": "Power-law fluctuations in eye movements predict text comprehension during connected text reading",
"authors": [
{
"first": "S",
"middle": [],
"last": "Wallot",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "O'brien",
"suffix": ""
},
{
"first": "C",
"middle": [
"A"
],
"last": "Coey",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Kelty-Stephen",
"suffix": ""
}
],
"year": 2015,
"venue": "CogSci",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wallot, S., O'Brien, B., Coey, C. A., and Kelty-Stephen, D. (2015). Power-law fluctuations in eye movements predict text comprehension during connected text reading. In CogSci.",
"links": null
},
"BIBREF114": {
"ref_id": "b114",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "A",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "7th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. (2019). GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019.",
"links": null
},
"BIBREF115": {
"ref_id": "b115",
"title": "Simultaneously uncovering the patterns of brain regions involved in different story reading subprocesses",
"authors": [
{
"first": "L",
"middle": [],
"last": "Wehbe",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Talukdar",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Fyshe",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ramdas",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2014,
"venue": "PloS one",
"volume": "9",
"issue": "11",
"pages": "e112575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wehbe, L., Murphy, B., Talukdar, P., Fyshe, A., Ramdas, A., and Mitchell, T. (2014a). Simultaneously uncovering the patterns of brain regions involved in different story reading subprocesses. PloS one, 9(11):e112575.",
"links": null
},
"BIBREF116": {
"ref_id": "b116",
"title": "Aligning context-based statistical models of language with brain activity during reading",
"authors": [
{
"first": "L",
"middle": [],
"last": "Wehbe",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "233--243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wehbe, L., Vaswani, A., Knight, K., and Mitchell, T. (2014b). Aligning context-based statistical models of language with brain activity during reading. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 233-243.",
"links": null
},
"BIBREF117": {
"ref_id": "b117",
"title": "Neuroimaging data processing - Wikibooks, the free textbook project",
"authors": [
{
"first": "",
"middle": [],
"last": "Wikibooks",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wikibooks. (2020). Neuroimaging data processing - Wikibooks, the free textbook project. [Online; accessed 21-February-2020].",
"links": null
},
"BIBREF118": {
"ref_id": "b118",
"title": "Thinking theta and alpha: Mechanisms of intuitive and analytical reasoning",
"authors": [
{
"first": "C",
"middle": [
"C"
],
"last": "Williams",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Kappen",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Hassall",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Wright",
"suffix": ""
},
{
"first": "O",
"middle": [
"E"
],
"last": "Krigolson",
"suffix": ""
}
],
"year": 2019,
"venue": "NeuroImage",
"volume": "189",
"issue": "",
"pages": "574--580",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Williams, C. C., Kappen, M., Hassall, C. D., Wright, B., and Krigolson, O. E. (2019). Thinking theta and alpha: Mechanisms of intuitive and analytical reasoning. NeuroImage, 189:574-580.",
"links": null
},
"BIBREF119": {
"ref_id": "b119",
"title": "Through the eyes of the own-race bias: Eye-tracking and pupillometry during face recognition",
"authors": [
{
"first": "E",
"middle": [
"X W"
],
"last": "Wu",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Laeng",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Magnussen",
"suffix": ""
}
],
"year": 2012,
"venue": "Social neuroscience",
"volume": "7",
"issue": "2",
"pages": "202--216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu, E. X. W., Laeng, B., and Magnussen, S. (2012). Through the eyes of the own-race bias: Eye-tracking and pupillometry during face recognition. Social neuroscience, 7(2):202-216.",
"links": null
},
"BIBREF120": {
"ref_id": "b120",
"title": "User-oriented document summarization through vision-based eye-tracking",
"authors": [
{
"first": "S",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Lau",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 14th international conference on Intelligent user interfaces",
"volume": "",
"issue": "",
"pages": "7--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu, S., Jiang, H., and Lau, F. (2009). User-oriented document summarization through vision-based eye-tracking. In Proceedings of the 14th international conference on Intelligent user interfaces, pages 7-16. ACM.",
"links": null
},
"BIBREF121": {
"ref_id": "b121",
"title": "Classifying referential and non-referential it using gaze",
"authors": [
{
"first": "V",
"middle": [],
"last": "Yaneva",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4896--4901",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaneva, V., Evans, R., Mitkov, R., et al. (2018). Classifying referential and non-referential it using gaze. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4896-4901.",
"links": null
},
"BIBREF122": {
"ref_id": "b122",
"title": "Converting your thoughts to texts: Enabling brain typing via deep feature learning of EEG signals",
"authors": [
{
"first": "X",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Q",
"middle": [
"Z"
],
"last": "Sheng",
"suffix": ""
},
{
"first": "S",
"middle": [
"S"
],
"last": "Kanhere",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE international conference on pervasive computing and communications (PerCom)",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, X., Yao, L., Sheng, Q. Z., Kanhere, S. S., Gu, T., and Zhang, D. (2018). Converting your thoughts to texts: Enabling brain typing via deep feature learning of EEG signals. In 2018 IEEE international conference on pervasive computing and communications (PerCom), pages 1-10. IEEE.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "enrich a model for PoS induction with fMRI signals, Li et al. (2018) perform pronoun resolution, and Vodrahalli et al. (2018) classify movie scene annotations. Recently, Toneva and Wehbe"
},
"TABREF0": {
"text": "Overview of NLP tasks where eye movements showed improvements along with the earliest reference.",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
}
}
}
}