{
"paper_id": "K18-1030",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:09:36.957270Z"
},
"title": "Sequence classification with human attention",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Barrett",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Copenhagen",
"location": {
"country": "Denmark"
}
},
"email": "barrett@hum.ku.dk"
},
{
"first": "Joachim",
"middle": [],
"last": "Bingel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Copenhagen",
"location": {
"country": "Denmark"
}
},
"email": "bingel@di.ku.dk"
},
{
"first": "Nora",
"middle": [],
"last": "Hollenstein",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ETH Zurich",
"location": {
"country": "Switzerland"
}
},
"email": "noraho@ethz.ch"
},
{
"first": "Marek",
"middle": [],
"last": "Rei",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {
"country": "United Kingdom"
}
},
"email": "marek.rei@cl.cam.ac.uk"
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Copenhagen",
"location": {
"country": "Denmark"
}
},
"email": "soegaard@di.ku.dk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Learning attention functions requires large volumes of data, but many NLP tasks simulate human behavior, and in this paper, we show that human attention really does provide a good inductive bias on many attention functions in NLP. Specifically, we use estimated human attention derived from eyetracking corpora to regularize attention functions in recurrent neural networks. We show substantial improvements across a range of tasks, including sentiment analysis, grammatical error detection, and detection of abusive language.",
"pdf_parse": {
"paper_id": "K18-1030",
"_pdf_hash": "",
"abstract": [
{
"text": "Learning attention functions requires large volumes of data, but many NLP tasks simulate human behavior, and in this paper, we show that human attention really does provide a good inductive bias on many attention functions in NLP. Specifically, we use estimated human attention derived from eyetracking corpora to regularize attention functions in recurrent neural networks. We show substantial improvements across a range of tasks, including sentiment analysis, grammatical error detection, and detection of abusive language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "When humans read a text, they do not attend to all its words (Carpenter and Just, 1983; Rayner and Duffy, 1988) . For example, humans are likely to omit many function words and other words that are predictable in context and focus on less predictable content words. Moreover, when they fixate on a word, the duration of that fixation depends on a number of linguistic factors (Clifton et al., 2007; Demberg and Keller, 2008) .",
"cite_spans": [
{
"start": 61,
"end": 87,
"text": "(Carpenter and Just, 1983;",
"ref_id": "BIBREF9"
},
{
"start": 88,
"end": 111,
"text": "Rayner and Duffy, 1988)",
"ref_id": "BIBREF43"
},
{
"start": 376,
"end": 398,
"text": "(Clifton et al., 2007;",
"ref_id": "BIBREF10"
},
{
"start": 399,
"end": 424,
"text": "Demberg and Keller, 2008)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since learning good attention functions for recurrent neural networks requires large volumes of data (Zoph et al., 2016; Britz et al., 2017) , and errors in attention are known to propagate to classification decisions (Alkhouli et al., 2016) , we explore the idea of using human attention, as estimated from eye-tracking corpora, as an inductive bias on such attention functions. Penalizing attention functions for departing from human attention may enable us to learn better attention functions when data is limited.",
"cite_spans": [
{
"start": 101,
"end": 120,
"text": "(Zoph et al., 2016;",
"ref_id": "BIBREF62"
},
{
"start": 121,
"end": 140,
"text": "Britz et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 218,
"end": 241,
"text": "(Alkhouli et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Eye-trackers provide millisecond-accurate records on where humans look when they are reading, and they are becoming cheaper and more easily available by the day (San Agustin et al., 2009) . In this paper, we use publicly available eye-tracking corpora, i.e., texts augmented with eye-tracking measures such as fixation duration times, and large eye-tracking corpora have appeared increasingly over the past years. Some studies suggest that the relevance of text can be inferred from the gaze pattern of the reader (Saloj\u00e4rvi et al., 2003 ) -even on word-level (Loboda et al., 2011) .",
"cite_spans": [
{
"start": 161,
"end": 187,
"text": "(San Agustin et al., 2009)",
"ref_id": "BIBREF51"
},
{
"start": 514,
"end": 537,
"text": "(Saloj\u00e4rvi et al., 2003",
"ref_id": "BIBREF50"
},
{
"start": 560,
"end": 581,
"text": "(Loboda et al., 2011)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Contributions We present a recurrent neural architecture with attention for sequence classification tasks. The architecture jointly learns its parameters and an attention function, but can alternate between supervision signals from labeled sequences (with no explicit supervision of the attention function) and from attention trajectories. This enables us to use per-word fixation durations from eye-tracking corpora to regularize attention functions for sequence classification tasks. We show such regularization leads to significant improvements across a range of tasks, including sentiment analysis, detection of abusive language, and grammatical error detection. Our implementation is made available at https://github.com/coastalcph/ Sequence_classification_with_ human_attention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We present a recurrent neural architecture that jointly learns the recurrent parameters and the attention function, but can alternate between supervision signals from labeled sequences and from attention trajectories in eye-tracking corpora. The input will be a set of labeled sequences (sentences paired with discrete category labels) and a set of sequences, in which each token is associated with a scalar value representing the attention human readers devoted to this token on average. The two input datasets, i.e., the target task train-ing data of sentences paired with discrete categories, and the eye-tracking corpus, need not (and will not in our experiments) overlap in any way. Our experimental protocol, in other words, does not require in-task eye-tracking recordings, but simply leverages information from existing, available corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
{
"text": "Behind our approach lies the simple observation that we can correlate the token-level attention devoted by a recurrent neural network, even if trained on sentence-level signals, with any measure defined at the token level. In other words, we can compare the attention devoted by a recurrent neural network to various measures, including token-level annotation (Rei and S\u00f8gaard, 2018) and eye-tracking measures. The latter is particularly interesting as it is typically considered a measurement of human attention.",
"cite_spans": [
{
"start": 360,
"end": 383,
"text": "(Rei and S\u00f8gaard, 2018)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
{
"text": "We go beyond this: Not only can we compare machine attention with human attention, we can also constrain or inform machine attention by human attention in various ways. In this paper, we explore this idea, proposing a particular architecture and training method that, in effect, uses human attention to regularize machine attention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
{
"text": "Our training method is similar to a standard approach to training multi-task architectures (Dong et al., 2015; Bingel and S\u00f8gaard, 2017) , sometimes referred to as the alternating training approach (Luong et al., 2016) : We randomly select a data point from our training data or the eye-tracking corpus with some (potentially equal) probability. If the data point is sampled from our training data, we predict a discrete category and use the computed loss to update our parameters. If the data point is sampled from the eye-tracking corpus, we still run the recurrent network to produce a category, but this time we only monitor the attention weights assigned to the input tokens. We then compute the minimum squared error between the normalized eye-tracking measure and the normalized attention score. In other words, in multi-task learning, we optimize each task for a fixed number of parameter updates (or mini-batches) before switching to the next task (Dong et al., 2015) ; in our case, we optimize for a target task (for a fixed number of updates), then improve our attention function based on human attention (for a fixed number of updates), then return to optimizing for the target task and continue iterating.",
"cite_spans": [
{
"start": 91,
"end": 110,
"text": "(Dong et al., 2015;",
"ref_id": "BIBREF14"
},
{
"start": 111,
"end": 136,
"text": "Bingel and S\u00f8gaard, 2017)",
"ref_id": "BIBREF6"
},
{
"start": 198,
"end": 218,
"text": "(Luong et al., 2016)",
"ref_id": "BIBREF33"
},
{
"start": 957,
"end": 976,
"text": "(Dong et al., 2015)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
{
"text": "Our architecture is a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) that encodes word representations x i into forward and backward representations, and into combined hidden states h i (of slightly lower dimensionality) at every timestep. In fact, our model is a hierarchical model whose word representations are concatenations of the output of character-level LSTMs and word embeddings, following Plank et al. 2016, but we ignore the character-level part of our architecture in the equations below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2212 \u2192 h i = LST M (x i , \u2212 \u2212 \u2192 h i\u22121 ) (1) \u2190 \u2212 h i = LST M (x i , \u2190 \u2212 \u2212 h i+1 )",
"eq_num": "(2)"
}
],
"section": "Model",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h i = [ \u2212 \u2192 h i ; \u2190 \u2212 h i ]",
"eq_num": "(3)"
}
],
"section": "Model",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h i = tanh(W h h i + b h )",
"eq_num": "(4)"
}
],
"section": "Model",
"sec_num": "2.1"
},
{
"text": "The final (reduced) hidden state is sometimes used as a sentence representation s, but we instead use attention to compute s by multiplying dynamically predicted attention weights with the hidden states for each time step. The final sentence predictions y are then computed by passing s through two more hidden layers:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s = i a i h i (5) y = \u03c3(W y tanh(W\u1ef9s + b\u1ef9) + b y )",
"eq_num": "(6)"
}
],
"section": "Model",
"sec_num": "2.1"
},
{
"text": "From the hidden states, we directly predict tokenlevel raw attention scores a i :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e i = tanh(W e h i + b e )",
"eq_num": "(7)"
}
],
"section": "Model",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a i = W a e i + b a",
"eq_num": "(8)"
}
],
"section": "Model",
"sec_num": "2.1"
},
{
"text": "We normalize these predictions to attention weights a i :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a i = a i k a k",
"eq_num": "(9)"
}
],
"section": "Model",
"sec_num": "2.1"
},
{
"text": "Our model thus combines two distinct objectives: one at the sentence level and one at the token level. The sentence-level objective is to minimize the squared error between output activations and true sentence labels y.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L sent = j (y (j) \u2212 y (j) ) 2",
"eq_num": "(10)"
}
],
"section": "Model",
"sec_num": "2.1"
},
{
"text": "The token-level objective, similarly, is to minimize the squared error for the attention not aligning with our human attention metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2.1"
},
{
"text": "L tok = j t (a (j)(t) \u2212 a (j)(t) ) 2 (11)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2.1"
},
{
"text": "These are finally combined to a weighted sum, using \u03bb (between 0 and 1) to trade off loss functions at the sentence and token levels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = L sent + \u03bbL tok",
"eq_num": "(12)"
}
],
"section": "Model",
"sec_num": "2.1"
},
{
"text": "Note again that our architecture does not require the target task data to come with eye-tracking information. We instead learn jointly to predict sentence categories and to attend to the tokens humans tend to focus on for longer. This requires a training schedule that determines when to optimize for the sentence-level classification objective, and when to optimize the machine attention at the token level. We therefore define an epoch to comprise a fixed number of batches, and sample every batch of training examples either from the target task data or from the eye-tracking corpus, as determined by a coin flip, the bias of which is tuned as a hyperparameter. Specifically, we define an epoch to consist of n batches, where n is the number of training sentences in the target task data divided by the batch size. This coin is potentially weighted with data being drawn from the auxiliary task with some probability or a decreasing probability of 1 E+1 , where E is the current epoch; see Section 4 for hyper-parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2.1"
},
{
"text": "As mentioned in the above, our architecture requires no overlap between the eye-tracking corpus and the training data for the target task. We therefore rely on publicly available eye-tracking corpora. For sentiment analysis, grammatical error detection, and hate speech detection, we use publicly available research datasets that have been used previously in the literature. All datasets were lower-cased.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "For our experiments, we concatenate two publicly available eye-tracking corpora, the Dundee Corpus (Kennedy et al., 2003) and the reading parts of the ZuCo Corpus (Hollenstein et al., 2018) , described below. Both corpora contain eye-tracking measurements from several subjects reading the same text. For every token, we compute the mean duration of all fixations to this token as our measure of human attention, following previous work (Barrett et al., 2016a; Gonzalez-Garduno and S\u00f8gaard, 2018) .",
"cite_spans": [
{
"start": 99,
"end": 121,
"text": "(Kennedy et al., 2003)",
"ref_id": "BIBREF23"
},
{
"start": 163,
"end": 189,
"text": "(Hollenstein et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 437,
"end": 460,
"text": "(Barrett et al., 2016a;",
"ref_id": "BIBREF1"
},
{
"start": 461,
"end": 496,
"text": "Gonzalez-Garduno and S\u00f8gaard, 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Eye-tracking corpora",
"sec_num": "3.1"
},
{
"text": "Dundee The English part of the Dundee corpus (Kennedy et al., 2003) comprises 2,368 sentences and more than 50,000 tokens. The texts were read by ten skilled, adult, native speakers. The texts are 20 newspaper articles from The Independent. The reading was self-paced and as close to natural, contextualized reading as possible for a laboratory data collection. The apparatus was a Dr Bouis Oculometer Eyetracker with a 1000 Hz monocular (right) sampling. At most five lines were shown per screen while subjects were reading.",
"cite_spans": [
{
"start": 45,
"end": 67,
"text": "(Kennedy et al., 2003)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Eye-tracking corpora",
"sec_num": "3.1"
},
{
"text": "ZuCo The ZuCo corpus (Hollenstein et al., 2018 ) is a combined eye-tracking and EEG dataset. It contains approximately 1,000 individual English sentences read by 12 adult, native speakers. Eye movements were recorded with the infrared video-based eye tracker EyeLink 1000 Plus at a sampling rate of 500 Hz. The sentences were presented at the same position on the screen, one at a time. Longer sentences spanned multiple lines. The subjects used a control pad to switch to the next sentence and to answer the control questions, which allowed for natural reading speed. The corpus contains both natural reading and reading in a task-solving context. For compatibility with the Dundee corpus, we only use the subset of the data, where humans were encouraged to read more naturally. This subset contains 700 sentences. This part of the Zuco corpus contains positive, negative or neutral sentences from the Stanford Sentiment Treebank (Socher et al., 2013) for passive reading, to analyze the elicitation of emotions and opinions during reading. As a control condition, the subjects sometimes had to rate the quality of the described movies; in approximately 10% of the cases. The Zuco corpus also contains instances where subjects were presented with Wikipedia sentences that contained semantic relations such as employer, award and job_title (Culotta et al., 2006) . The control condition for this tasks consisted of multiple-choice questions about the content of the previous sentence; again, approximately 10% of all sentences were followed by a question. Preprocessing of eye-tracking data Mean fixation duration (MEAN FIX DUR) is extracted from the Dundee Corpus. For Zuco, we divide total reading time per word token with the number of fixations to obtain mean fixation duration. The mean fixation duration is selected empirically among gaze duration (sum of all fixations in the first pass reading of the a word) and total fixation duration, and n fixations. 
Then we average these numbers for all readers of the corpus to get a more robust average processing time. Eye-tracking is known to correlate with word frequency (Rayner and Duffy, 1988) . We include a frequency baseline on the eye tracking text, BNC INV FREQ. The word frequencies comes from the British National Corpus (BNC) frequency lists (Kilgarriff, 1995) . We use log-transformed frequency per million. Before normalizing, we take the additive inverse of the frequency, such that rare words get a high value, making it comparable to gaze. MEAN FIX DUR and BNC INV FREQ are minmax-normalized to a value in the range 0-1. MEAN FIX DUR is normalized separately for the two eye tracking corpora. We expect the experimental bias -especially the fact that ZuCo contains reading of isolated sentences and Dundee contains longer texts -to influence the reading and therefore separate normalization should preserve the signal within each corpus better. Table 1 presents an overview of all train, development and test sets used in this paper.",
"cite_spans": [
{
"start": 21,
"end": 46,
"text": "(Hollenstein et al., 2018",
"ref_id": "BIBREF21"
},
{
"start": 931,
"end": 952,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF53"
},
{
"start": 1340,
"end": 1362,
"text": "(Culotta et al., 2006)",
"ref_id": "BIBREF11"
},
{
"start": 2124,
"end": 2148,
"text": "(Rayner and Duffy, 1988)",
"ref_id": "BIBREF43"
},
{
"start": 2305,
"end": 2323,
"text": "(Kilgarriff, 1995)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 2913,
"end": 2920,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Eye-tracking corpora",
"sec_num": "3.1"
},
{
"text": "Our first task is sentence-level sentiment classification. We note that many sentiment analysis datasets contain document-level labels or include more fine-grained annotation of text spans, say phrases or words. For compatibility with our other tasks, we focus on sentence-level sentiment analysis. We use the SemEval-2013 Twitter dataset (Wilson et al., 2013; Rosenthal et al., 2015) | NEG) . The SemEval-2013 sentiment classification task was a three-way classification task with positive, negative and neutral classes. We reduce the task to binary tasks detecting negative sentences vs. non-negative and vice versa for the positive class. Therefore the dataset size is the same for POS and NEG experiments.",
"cite_spans": [
{
"start": 339,
"end": 360,
"text": "(Wilson et al., 2013;",
"ref_id": "BIBREF60"
},
{
"start": 361,
"end": 384,
"text": "Rosenthal et al., 2015)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [
{
"start": 385,
"end": 391,
"text": "| NEG)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentiment classification",
"sec_num": "3.2"
},
{
"text": "Our second task is grammatical error detection. We use the First Certificate in English error detection dataset (FCE) (Yannakoudakis et al., 2011) . This dataset contains essays written by English learners during language examinations, where any grammatical errors have been manually annotated by experts. Rei and Yannakoudakis (2016) converted the dataset for a sequence labeling task and we use their splits for training, development and testing. Similarly to Rei and S\u00f8gaard (2018), we perform sentence-level binary classification of sentences that need some editing vs. grammatically correct sentences. We do not use the tokenlevel labels for training our model.",
"cite_spans": [
{
"start": 118,
"end": 146,
"text": "(Yannakoudakis et al., 2011)",
"ref_id": "BIBREF61"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammatical error detection",
"sec_num": "3.3"
},
{
"text": "Our third and final task is detection of abusive language; or more specifically, hate speech detection. We use the datasets of Waseem (2016) and Waseem and Hovy (2016) . The former contains 6,909 tweets; the latter 14,031 tweets. They are manually annotated for sexism and racism. In this study, sexism and racism are conflated into one category in both datasets. Both datasets are split in train, development and test splits consisting of 80%, 10% and 10% of the tweets respectively. function regularized by information about human attention, and finally, (c) a second baseline using frequency information as a proxy for human attention and using the same regularization scheme as in our human attention model.",
"cite_spans": [
{
"start": 127,
"end": 140,
"text": "Waseem (2016)",
"ref_id": "BIBREF57"
},
{
"start": 145,
"end": 167,
"text": "Waseem and Hovy (2016)",
"ref_id": "BIBREF58"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hate speech detection",
"sec_num": "3.4"
},
{
"text": "Hyperparameters Basic hyper-parameters such as number of hidden layers, layer size, and activation functions were following the settings of Rei and S\u00f8gaard (2018) . The dimensionality of our word embedding layer was set to size 300, and we use publicly available pre-trained Glove word embeddings (Pennington et al., 2014) that we finetune during training. The dimensionality of the character embedding layer was set to 100. The recurrent layers in the character-level component have dimensionality 100; the word-level recurrent layers dimensionality 300. The dimensionality of our feed-forward layer, leading to reduced combined representations h i , is 200, and the attention layer has dimensionality 100. Three hyper-parameters, however, we tune for each architecture and for each task, by measuring sentence-level F 1 -scores on the development sets. These are: (a) learning rate, (b) \u03bb in Equation 12, i.e., controlling the relative importance of the attention regularization, and (c) the probability of sampling data from the eye-tracking corpus during training.",
"cite_spans": [
{
"start": 140,
"end": 162,
"text": "Rei and S\u00f8gaard (2018)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "For all tasks and all conditions (baseline, frequency-informed baseline, and our human attention model), we perform a grid search over learning rates [ .01 .1 1. ], L att weight \u03bb values [ .2 .4 .6 .8 1. ], and probability of sampling from the eye-tracking corpus [ .125 .25 .5 1., decreasing ] -where decreasing means that the probability of sampling from the eye-tracking corpus initially is 0.5, but drops linearly for each epoch ( 1 E+1 ; see 2.1. We apply the models with the best average F 1 scores over three random seeds on the validation data, to our test sets.",
"cite_spans": [
{
"start": 264,
"end": 274,
"text": "[ .125 .25",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Initialization Our models are randomly initialized. This leads to some variance in performance across different runs. We therefore report averages over 10 runs in our experiments below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Our performance metric across all our experiments is the sentence-level F 1 score. We report precision, recall and F 1 scores for all tasks in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 143,
"end": 150,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Our main finding is that our human attention model, based on regularization from mean fixation durations in publicly available eye-tracking corpora, consistently outperforms the recurrent architecture with learned attention functions. The improvements over both baseline and BNC frequency are significant (p < 0.01) using bootstrapping (Calmettes et al., 2012) over all tasks, with one seed. The mean error reduction over the baseline is 4.5%.",
"cite_spans": [
{
"start": 336,
"end": 360,
"text": "(Calmettes et al., 2012)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Unsurprisingly, knowing that human attention helps guide our recurrent architecture, the frequency-informed baseline is also better than the non-informed baseline across the board, but the human attention model is still significantly better across all tasks (p < 0.01). For all tasks except negative sentiment, we note that generally, most of the improvements over the learned attention baseline for the gaze-informed models, are due to improvements in recall. Precision is not worse, but we do not see any larger improvements on preci-sion either. For the negative SEMEVAL tasks, we also see larger improvements for precision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "The observation that improvements are primarily due to increased recall, aligns well with the hypothesis that human attention serves as an efficient regularization, preventing overfitting to surface statistical regularities that can lead the network to rely on features that are not there at test time (Globerson and Roweis, 2006) , at the expense of target class precision.",
"cite_spans": [
{
"start": 302,
"end": 330,
"text": "(Globerson and Roweis, 2006)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "We illustrate the differences between our baseline models and the model with gaze-informed attention by the attention weights of an example sentence. Though it is a single, cherry-picked example, it is representative of the general trends we observe in the data, when manually inspecting attention patterns. Table 3 presents a coarse visualization of the attention weights of six different models, namely our baseline architecture and the architecture with gaze-informed attention, trained on three different tasks: hate speech detection, negative sentiment classification, and error detection. The sentence is a positive hate speech example from the Waseem and Hovy (2016) development set. The words with more attention than the sentence average are bold-faced.",
"cite_spans": [],
"ref_spans": [
{
"start": 308,
"end": 315,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6"
},
{
"text": "First note that the baseline models only attend to one or two coherent text parts. This pattern was very consistent across all the sentences we examined. This pattern was not observed with gazeinformed attention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6"
},
{
"text": "Our second observation is that the baseline models are more likely to attend to stop words than gaze-informed attention. This suggests that gazeinformed attention has learned to simulate human attention to some degree. We also see many differences between the jointly learned task-specific, gaze-informed attention functions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6"
},
{
"text": "The gaze-informed hate speech classifier, for example, places considerable attention on BUT, which in this case is a passive-aggressive hate speech indicator. It also gives weight to double standards and certain rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6"
},
{
"text": "The gaze-informed sentiment classifier, on the other hand, focuses more on sorry I am not sexist which, in isolation, reads like an apologetic disclaimer. This model also gives weight to double standards and certain rules",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6"
},
{
"text": "The gaze-informed grammatical error detection model gives attention to standards, which is ungrammatical, because of the morphological number disagreement with its determiner a; it also gives attention to certain rules, which is disagreeing, again in number, with there's. It also gives attention to the non-word fem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6"
},
{
"text": "Overall, this, in combination with our results in Table 3 , suggests that the regularization effect from human attention enables our architecture to learn to better attend to the most relevant aspects of sentences for the target tasks. In other words, human attention provides the inductive bias that makes learning possible.",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 57,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6"
},
{
"text": "Gaze in NLP It has previously been shown that several NLP tasks benefit from gaze information, including part-of-speech tagging Barrett et al., 2016a) , prediction of multiword expressions (Rohanian et al., 2017) and sentiment analysis (Mishra et al., 2017b) .",
"cite_spans": [
{
"start": 128,
"end": 150,
"text": "Barrett et al., 2016a)",
"ref_id": "BIBREF1"
},
{
"start": 236,
"end": 258,
"text": "(Mishra et al., 2017b)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and related work",
"sec_num": "7"
},
{
"text": "Gaze information and other measures from psycholinguistics have been used in different ways in NLP. Some authors have used discretized, single features Goldwater, 2011, 2013; Plank, 2016; Klerke et al., 2016) , whereas others have used multidimensional, continuous values (Barrett et al., 2016a; Bingel et al., 2016) . We follow Gonzalez-Garduno and S\u00f8gaard (2018) in using a single, continuous feature. We did not experiment with other representations, however. Specifically, we only considered the signal from token-level, normalized mean fixation durations.",
"cite_spans": [
{
"start": 152,
"end": 174,
"text": "Goldwater, 2011, 2013;",
"ref_id": null
},
{
"start": 175,
"end": 187,
"text": "Plank, 2016;",
"ref_id": "BIBREF40"
},
{
"start": 188,
"end": 208,
"text": "Klerke et al., 2016)",
"ref_id": "BIBREF27"
},
{
"start": 272,
"end": 295,
"text": "(Barrett et al., 2016a;",
"ref_id": "BIBREF1"
},
{
"start": 296,
"end": 316,
"text": "Bingel et al., 2016)",
"ref_id": "BIBREF5"
},
{
"start": 329,
"end": 364,
"text": "Gonzalez-Garduno and S\u00f8gaard (2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and related work",
"sec_num": "7"
},
{
"text": "Fixation duration is a feature that carries an enormous amount of information about the text and the language understanding process. Carpenter and Just (1983) show that readers are more likely to fixate on open-class words that are not predictable from context, and Kliegl et al. (2004) show that a higher cognitive load results in longer fixation durations. Fixations before skipped words are shorter before short or high-frequency words and longer before long or low-frequency words in comparison with control fixations (Kliegl and Engbert, 2005 ). Many of these findings suggest correlations with syntactic information, and many authors have confirmed that gaze information is useful to discriminate between syntactic phenomena (Demberg and Keller, 2008; Barrett and S\u00f8gaard, 2015a,b) .",
"cite_spans": [
{
"start": 133,
"end": 158,
"text": "Carpenter and Just (1983)",
"ref_id": "BIBREF9"
},
{
"start": 266,
"end": 286,
"text": "Kliegl et al. (2004)",
"ref_id": "BIBREF29"
},
{
"start": 522,
"end": 547,
"text": "(Kliegl and Engbert, 2005",
"ref_id": "BIBREF28"
},
{
"start": 731,
"end": 757,
"text": "(Demberg and Keller, 2008;",
"ref_id": "BIBREF13"
},
{
"start": 758,
"end": 787,
"text": "Barrett and S\u00f8gaard, 2015a,b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and related work",
"sec_num": "7"
},
{
"text": "Gaze data has also been used in the context of sentiment analysis before (Mishra et al., 2017b,a) . Mishra et al. (2017b) augmented a sentiment analysis system with eye-tracking features, including first fixation durations and fixation counts. They show that fixations not only have an impact in detecting sentiment, but also improve sarcasm detection. They train a convolutional neural network that learns features from both gaze and text and uses them to classify the input text (Mishra et al., 2017a) . On a related note, Raudonis et al. (2013) developed a emotion recognition system from visual stimulus (not text) and showed that features such as pupil size and motion speed are relevant to accurately detect emotions from eye-tracking data. Wang et al. (2017) use variables shown to correlate with human attention, e.g. surprisal, to guide the attention for sentence representations.",
"cite_spans": [
{
"start": 73,
"end": 97,
"text": "(Mishra et al., 2017b,a)",
"ref_id": null
},
{
"start": 100,
"end": 121,
"text": "Mishra et al. (2017b)",
"ref_id": "BIBREF35"
},
{
"start": 481,
"end": 503,
"text": "(Mishra et al., 2017a)",
"ref_id": "BIBREF34"
},
{
"start": 525,
"end": 547,
"text": "Raudonis et al. (2013)",
"ref_id": "BIBREF42"
},
{
"start": 747,
"end": 765,
"text": "Wang et al. (2017)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and related work",
"sec_num": "7"
},
{
"text": "Gaze has also been used in the context of grammaticality (Klerke et al., 2015a,b) , as well as in readability assessment (Gonzalez-Garduno and S\u00f8gaard, 2018) .",
"cite_spans": [
{
"start": 57,
"end": 81,
"text": "(Klerke et al., 2015a,b)",
"ref_id": null
},
{
"start": 121,
"end": 157,
"text": "(Gonzalez-Garduno and S\u00f8gaard, 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and related work",
"sec_num": "7"
},
{
"text": "Gaze has either been used as features (Barrett and S\u00f8gaard, 2015a; Barrett et al., 2016b) or as a direct supervision signal in multi-task learning scenarios (Klerke et al., 2016; Gonzalez-Garduno and S\u00f8gaard, 2018) . We are, to the best of our knowledge, the first to use gaze to inform attention functions in recurrent neural networks. Ibraheem et al. (2017) , however, uses optimal attention to simulate human attention in an interactive machine translation scenario, and Britz et al. (2017) limit attention to a local context, inspired by findings in studies of human reading. Rei and S\u00f8gaard (2018) use auxiliary data to regularize attention functions in recurrent neural networks; not from psycholinguistics data, but using small amounts of task-specific, token-level annotations. While their motivation is very different from ours, technically our models are very related. In a different context, Das et al. (2017) investigated whether humans attend to the same regions as neural networks solving visual question answering problems. Lindsey (2017) also used human-inspired, unsupervised attention in a computer vision context.",
"cite_spans": [
{
"start": 38,
"end": 66,
"text": "(Barrett and S\u00f8gaard, 2015a;",
"ref_id": "BIBREF3"
},
{
"start": 67,
"end": 89,
"text": "Barrett et al., 2016b)",
"ref_id": "BIBREF2"
},
{
"start": 157,
"end": 178,
"text": "(Klerke et al., 2016;",
"ref_id": "BIBREF27"
},
{
"start": 179,
"end": 214,
"text": "Gonzalez-Garduno and S\u00f8gaard, 2018)",
"ref_id": "BIBREF19"
},
{
"start": 337,
"end": 359,
"text": "Ibraheem et al. (2017)",
"ref_id": "BIBREF22"
},
{
"start": 474,
"end": 493,
"text": "Britz et al. (2017)",
"ref_id": "BIBREF7"
},
{
"start": 580,
"end": 602,
"text": "Rei and S\u00f8gaard (2018)",
"ref_id": "BIBREF45"
},
{
"start": 903,
"end": 920,
"text": "Das et al. (2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and related work",
"sec_num": "7"
},
{
"text": "Other work on multi-purpose attention functions While our work is the first to use gaze data to guide attention in a recurrent architectures, there has recently been some work on sharing attention functions across tasks. Firat et al. (2016) , for example, share attention functions between languages in the context of multi-way neural machine translation.",
"cite_spans": [
{
"start": 221,
"end": 240,
"text": "Firat et al. (2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Human-inspired attention functions",
"sec_num": null
},
{
"text": "Sentiment analysis While sentiment analysis is most often considered a supervised learning problem, several authors have leveraged other signals than annotated data to learn sentiment analysis models that generalize better. Felbo et al. (2017) , for example, use emoji prediction to pretrain their sentiment analysis models. Mishra et al. (2018) use several auxiliary tasks, including gaze prediction, for document-level sentiment analysis. There is a lot of previous work, also, leveraging information across different sentiment analysis datasets, e.g., Liu et al. (2016) .",
"cite_spans": [
{
"start": 224,
"end": 243,
"text": "Felbo et al. (2017)",
"ref_id": "BIBREF15"
},
{
"start": 325,
"end": 345,
"text": "Mishra et al. (2018)",
"ref_id": "BIBREF36"
},
{
"start": 555,
"end": 572,
"text": "Liu et al. (2016)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Human-inspired attention functions",
"sec_num": null
},
{
"text": "Error detection In grammatical error detection, Rei (2017) used an unsupervised auxiliary language modeling task, which is similar in spirit to our second baseline, using frequency information as auxiliary data. Rei and Yannakoudakis (2017) go beyond this and evaluate the usefulness of many auxiliary tasks, primarily syntactic ones. They also use frequency information as an auxiliary task.",
"cite_spans": [
{
"start": 48,
"end": 58,
"text": "Rei (2017)",
"ref_id": "BIBREF44"
},
{
"start": 212,
"end": 240,
"text": "Rei and Yannakoudakis (2017)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Human-inspired attention functions",
"sec_num": null
},
{
"text": "Hate speech detection In hate speech detection, many signals beyond the text are often leveraged (see Schmidt and Wiegand (2017) for an overview of the literature). Interestingly, many authors have used signals from sentiment analysis, e.g., Gitari et al. (2015) , motivated by the correlation between hate speech and negative sentiment. This correlation may also explain why we see the biggest improvements with gaze-informed attention on those two tasks.",
"cite_spans": [
{
"start": 102,
"end": 128,
"text": "Schmidt and Wiegand (2017)",
"ref_id": "BIBREF52"
},
{
"start": 242,
"end": 262,
"text": "Gitari et al. (2015)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Human-inspired attention functions",
"sec_num": null
},
{
"text": "Human inductive bias Finally, our work relates to other work on providing better inductive biases for learning human-related tasks by observing humans (Tamuz et al., 2011; Wilson et al., 2015) . We believe this is a truly exciting line of research that can help us push research horizons in many ways.",
"cite_spans": [
{
"start": 151,
"end": 171,
"text": "(Tamuz et al., 2011;",
"ref_id": "BIBREF55"
},
{
"start": 172,
"end": 192,
"text": "Wilson et al., 2015)",
"ref_id": "BIBREF59"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Human-inspired attention functions",
"sec_num": null
},
{
"text": "We have shown that human attention provides a useful inductive bias on machine attention in recurrent neural networks for sequence classification problems. We present an architecture that enables us to leverage human attention signals from general, publicly available eye-tracking corpora, to induce better, more robust task-specific NLP models. We evaluate our architecture and show improvements across three NLP tasks, namely sentiment analysis, grammatical error detection, and detection of abusive language. We observe that not only does human attention help models distribute their attention in a generally useful way; human attention also seems to act like a regularizer providing more robust performance across domains, and it enables better learning of task-specific attention functions through joint learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Alignment-based neural machine translation",
"authors": [
{
"first": "Tamer",
"middle": [],
"last": "Alkhouli",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Bretschner",
"suffix": ""
},
{
"first": "Jan-Thorsten",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Mohammed",
"middle": [],
"last": "Hethnawi",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Guta",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "54--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tamer Alkhouli, Gabriel Bretschner, Jan-Thorsten Pe- ter, Mohammed Hethnawi, Andreas Guta, and Her- mann Ney. 2016. Alignment-based neural machine translation. In Proceedings of the First Conference on Machine Translation, pages 54-65.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Weakly supervised part-ofspeech tagging using eye-tracking data",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "Joachim",
"middle": [],
"last": "Bingel",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "2",
"issue": "",
"pages": "579--584",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Barrett, Joachim Bingel, Frank Keller, and An- ders S\u00f8gaard. 2016a. Weakly supervised part-of- speech tagging using eye-tracking data. In Pro- ceedings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics (ACL), volume 2, pages 579-584.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Cross-lingual transfer of correlations between parts of speech and gaze features",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 26th International Conference on Computational Linguistics (COLING)",
"volume": "",
"issue": "",
"pages": "1330--1339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Barrett, Frank Keller, and Anders S\u00f8gaard. 2016b. Cross-lingual transfer of correlations be- tween parts of speech and gaze features. In Pro- ceedings of the 26th International Conference on Computational Linguistics (COLING), pages 1330- 1339.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Reading behavior predicts syntactic categories",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the nineteenth conference on computational natural language learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "345--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Barrett and Anders S\u00f8gaard. 2015a. Reading be- havior predicts syntactic categories. In Proceedings of the nineteenth conference on computational natu- ral language learning (CoNLL), pages 345-249.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Using reading behavior to predict grammatical functions",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2015,
"venue": "Workshop on Cognitive Aspects of Computational Language Learning (CogACLL)",
"volume": "",
"issue": "",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Barrett and Anders S\u00f8gaard. 2015b. Using read- ing behavior to predict grammatical functions. In Workshop on Cognitive Aspects of Computational Language Learning (CogACLL), pages 1-5.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Extracting token-level signals of syntactic processing from fMRI-with an application to POS induction",
"authors": [
{
"first": "Joachim",
"middle": [],
"last": "Bingel",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "1",
"issue": "",
"pages": "747--755",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joachim Bingel, Maria Barrett, and Anders S\u00f8gaard. 2016. Extracting token-level signals of syntactic processing from fMRI-with an application to POS induction. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (ACL), volume 1, pages 747-755.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Identifying beneficial task relations for multi-task learning in deep neural networks",
"authors": [
{
"first": "Joachim",
"middle": [],
"last": "Bingel",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL)",
"volume": "",
"issue": "",
"pages": "164--169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joachim Bingel and Anders S\u00f8gaard. 2017. Identify- ing beneficial task relations for multi-task learning in deep neural networks. In Proceedings of the 15th Conference of the European Chapter of the Asso- ciation for Computational Linguistics (EACL), vol- ume 2, pages 164-169.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Efficient attention using a fixed-size memory representation",
"authors": [
{
"first": "Denny",
"middle": [],
"last": "Britz",
"suffix": ""
},
{
"first": "Melody",
"middle": [
"Y"
],
"last": "Guan",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "392--400",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denny Britz, Melody Y. Guan, and Minh-Thang Lu- ong. 2017. Efficient attention using a fixed-size memory representation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 392-400.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Making do with what we have: use your bootstraps",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Calmettes",
"suffix": ""
},
{
"first": "Gordon",
"middle": [
"B"
],
"last": "Drummond",
"suffix": ""
},
{
"first": "Sarah",
"middle": [
"L"
],
"last": "Vowler",
"suffix": ""
}
],
"year": 2012,
"venue": "The Journal of physiology",
"volume": "590",
"issue": "15",
"pages": "3403--3406",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Calmettes, Gordon B Drummond, and Sarah L Vowler. 2012. Making do with what we have: use your bootstraps. The Journal of physiol- ogy, 590(15):3403-3406.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "What your eyes do while your mind is reading. Eye movements in reading: Perceptual and language processes",
"authors": [
{
"first": "Patricia",
"middle": [
"A"
],
"last": "Carpenter",
"suffix": ""
},
{
"first": "Marcel",
"middle": [
"Adam"
],
"last": "Just",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "275--307",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patricia A Carpenter and Marcel Adam Just. 1983. What your eyes do while your mind is reading. Eye movements in reading: Perceptual and language processes, pages 275-307.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Eye movements in reading words and sentences",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Clifton",
"suffix": ""
},
{
"first": "Adrian",
"middle": [],
"last": "Staub",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Rayner",
"suffix": ""
}
],
"year": 2007,
"venue": "Eye Movements: A Window on Mind and Brain",
"volume": "",
"issue": "",
"pages": "341--371",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Clifton, Adrian Staub, and Keith Rayner. 2007. Eye movements in reading words and sentences. In Eye Movements: A Window on Mind and Brain, pages 341-371. Elsevier, Amsterdam, The Nether- lands.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Integrating probabilistic extraction models and data mining to discover relations and patterns in text",
"authors": [
{
"first": "Aron",
"middle": [],
"last": "Culotta",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Betz",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "296--303",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aron Culotta, Andrew McCallum, and Jonathan Betz. 2006. Integrating probabilistic extraction models and data mining to discover relations and patterns in text. In Proceedings of the main conference on Hu- man Language Technology Conference of the North American Chapter of the Association of Computa- tional Linguistics, pages 296-303. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Human attention in visual question answering: Do humans and deep networks look at the same regions?",
"authors": [
{
"first": "Abhishek",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Harsh",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
}
],
"year": 2017,
"venue": "Computer Vision and Image Understanding",
"volume": "163",
"issue": "",
"pages": "90--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhishek Das, Harsh Agrawal, Lawrence Zitnick, Devi Parikh, and Dhruv Batra. 2017. Human attention in visual question answering: Do humans and deep net- works look at the same regions? Computer Vision and Image Understanding, 163:90-100.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Data from eyetracking corpora as evidence for theories of syntactic processing complexity",
"authors": [
{
"first": "Vera",
"middle": [],
"last": "Demberg",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2008,
"venue": "Cognition",
"volume": "109",
"issue": "2",
"pages": "193--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vera Demberg and Frank Keller. 2008. Data from eye- tracking corpora as evidence for theories of syntactic processing complexity. Cognition, 109(2):193-210.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Multi-task learning for multiple language translation",
"authors": [
{
"first": "Daxiang",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Dianhai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1723--1732",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for mul- tiple language translation. In Proceedings of the 53rd Annual Meeting of the Association for Compu- tational Linguistics and the 7th International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers), volume 1, pages 1723-1732.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Using millions of emoji occurrences to pretrain any-domain models for detecting emotion, sentiment, and sarcasm",
"authors": [
{
"first": "Bjarke",
"middle": [],
"last": "Felbo",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Mislove",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Iyan",
"middle": [],
"last": "Rahwan",
"suffix": ""
},
{
"first": "Sune",
"middle": [],
"last": "Lehmann",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1615--1625",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bjarke Felbo, Alan Mislove, Anders S\u00f8gaard, Iyan Rahwan, and Sune. Lehmann. 2017. Using millions of emoji occurrences to pretrain any-domain mod- els for detecting emotion, sentiment, and sarcasm. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 1615-1625.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Multi-way, multilingual neural machine translation with a shared attention mechanism",
"authors": [
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of 14th Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)",
"volume": "",
"issue": "",
"pages": "866--875",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine translation with a shared attention mechanism. In Proceedings of 14th Annual Conference of the North American Chapter of the Association for Computa- tional Linguistics (NAACL), pages 866-875.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A lexicon-based approach for hate speech detection",
"authors": [
{
"first": "Njagi",
"middle": [],
"last": "Dennis Gitari",
"suffix": ""
},
{
"first": "Zhang",
"middle": [],
"last": "Zuping",
"suffix": ""
},
{
"first": "Hanyurwimfura",
"middle": [],
"last": "Damien",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Long",
"suffix": ""
}
],
"year": 2015,
"venue": "International Journal of Multimedia and Ubiquitous Engineering",
"volume": "10",
"issue": "4",
"pages": "215--230",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Njagi Dennis Gitari, Zhang Zuping, Hanyurwimfura Damien, and Jun Long. 2015. A lexicon-based approach for hate speech detection. International Journal of Multimedia and Ubiquitous Engineering, 10(4):215-230.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Nightmare at test time: robust learning by feature deletion",
"authors": [
{
"first": "Amir",
"middle": [],
"last": "Globerson",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Roweis",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 23rd International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "353--360",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir Globerson and Sam Roweis. 2006. Nightmare at test time: robust learning by feature deletion. In Proceedings of the 23rd International Conference on Machine Learning (ICML), pages 353-360.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning to predict readability using eye-movement data from natives and learners",
"authors": [
{
"first": "Ana",
"middle": [],
"last": "Gonzalez",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Garduno",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second Association for the Advancement of Artificial Intelligence Conference (AAAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ana Gonzalez-Garduno and Anders S\u00f8gaard. 2018. Learning to predict readability using eye-movement data from natives and learners. In Proceedings of the Thirty-Second Association for the Advancement of Artificial Intelligence Conference (AAAI).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "ZuCo: A simultaneous EEG and eyetracking resource for natural sentence reading. Scientific data",
"authors": [
{
"first": "Nora",
"middle": [],
"last": "Hollenstein",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Rotsztejn",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Troendle",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Pedroni",
"suffix": ""
},
{
"first": "Ce",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Langer",
"suffix": ""
}
],
"year": 2018,
"venue": "Under Review",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nora Hollenstein, Jonathan Rotsztejn, Marius Troen- dle, Andreas Pedroni, Ce Zhang, and Nicolas Langer. 2018. ZuCo: A simultaneous EEG and eye- tracking resource for natural sentence reading. Sci- entific data, Under Review.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning an interactive attention policy for neural machine translation",
"authors": [
{
"first": "Samee",
"middle": [],
"last": "Ibraheem",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Altieri",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Denero",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samee Ibraheem, Nicholas Altieri, and John DeNero. 2017. Learning an interactive attention policy for neural machine translation. In MTSummit.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The dundee corpus",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Kennedy",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Jo\u00ebl",
"middle": [],
"last": "Pynte",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 12th European conference on eye movement",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Kennedy, Robin Hill, and Jo\u00ebl Pynte. 2003. The dundee corpus. In Proceedings of the 12th European conference on eye movement.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "BNC database and word frequency lists",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Kilgarriff",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Kilgarriff. 1995. BNC database and word fre- quency lists. Retrieved Dec. 2017.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Looking hard: Eye tracking for detecting grammaticality of automatically compressed sentences",
"authors": [
{
"first": "Sigrid",
"middle": [],
"last": "Klerke",
"suffix": ""
},
{
"first": "H\u00e9ctor",
"middle": [],
"last": "Mart\u00ednez Alonso",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 20th Nordic Conference of Computational Linguistics (NODALIDA 2015)",
"volume": "",
"issue": "",
"pages": "97--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sigrid Klerke, H\u00e9ctor Mart\u00ednez Alonso, and Anders S\u00f8gaard. 2015a. Looking hard: Eye tracking for de- tecting grammaticality of automatically compressed sentences. In Proceedings of the 20th Nordic Con- ference of Computational Linguistics (NODALIDA 2015), pages 97-105.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Reading metrics for estimating task efficiency with MT output",
"authors": [
{
"first": "Sigrid",
"middle": [],
"last": "Klerke",
"suffix": ""
},
{
"first": "Sheila",
"middle": [],
"last": "Castilho",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Sixth Workshop on Cognitive Aspects of Computational Language Learning (CogACLL)",
"volume": "",
"issue": "",
"pages": "6--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sigrid Klerke, Sheila Castilho, Maria Barrett, and An- ders S\u00f8gaard. 2015b. Reading metrics for estimat- ing task efficiency with MT output. In Proceedings of the Sixth Workshop on Cognitive Aspects of Com- putational Language Learning (CogACLL), pages 6-13.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Improving sentence compression by learning to predict gaze",
"authors": [
{
"first": "Sigrid",
"middle": [],
"last": "Klerke",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of 14th Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)",
"volume": "",
"issue": "",
"pages": "1528--1533",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sigrid Klerke, Yoav Goldberg, and Anders S\u00f8gaard. 2016. Improving sentence compression by learning to predict gaze. In Proceedings of 14th Annual Con- ference of the North American Chapter of the As- sociation for Computational Linguistics (NAACL), pages 1528-1533.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Fixation durations before word skipping in reading",
"authors": [
{
"first": "Reinhold",
"middle": [],
"last": "Kliegl",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Engbert",
"suffix": ""
}
],
"year": 2005,
"venue": "Psychonomic Bulletin & Review",
"volume": "12",
"issue": "1",
"pages": "132--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reinhold Kliegl and Ralf Engbert. 2005. Fixation du- rations before word skipping in reading. Psycho- nomic Bulletin & Review, 12(1):132-138.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Length, frequency, and predictability effects of words on eye movements in reading",
"authors": [
{
"first": "Reinhold",
"middle": [],
"last": "Kliegl",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Grabner",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Rolfs",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Engbert",
"suffix": ""
}
],
"year": 2004,
"venue": "European Journal of Cognitive Psychology",
"volume": "16",
"issue": "1-2",
"pages": "262--284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reinhold Kliegl, Ellen Grabner, Martin Rolfs, and Ralf Engbert. 2004. Length, frequency, and predictabil- ity effects of words on eye movements in reading. European Journal of Cognitive Psychology, 16(1- 2):262-284.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Pre-training attention mechanisms",
"authors": [
{
"first": "Jack",
"middle": [],
"last": "Lindsey",
"suffix": ""
}
],
"year": 2017,
"venue": "NIPS Workshop on Cognitively Informed Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jack Lindsey. 2017. Pre-training attention mecha- nisms. In NIPS Workshop on Cognitive Informed Artificial Intelligence.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Deep multi-task learning with shared memory",
"authors": [
{
"first": "Pengfei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "118--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016. Deep multi-task learning with shared memory. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 118-127.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Inferring word relevance from eyemovements of readers",
"authors": [
{
"first": "Tomasz",
"middle": [
"D"
],
"last": "Loboda",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Brusilovsky",
"suffix": ""
},
{
"first": "J\u00f6erg",
"middle": [],
"last": "Brunstein",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 16th international conference on intelligent user interfaces",
"volume": "",
"issue": "",
"pages": "175--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomasz D Loboda, Peter Brusilovsky, and J\u00f6erg Brun- stein. 2011. Inferring word relevance from eye- movements of readers. In Proceedings of the 16th international conference on intelligent user inter- faces, pages 175-184. ACM.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Multi-task sequence-to-sequence learning",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
}
],
"year": 2016,
"venue": "International Conference on Learning Representations (ICLR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task sequence-to-sequence learning. In International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Learning cognitive features from gaze data for sentiment and sarcasm classification using convolutional neural network",
"authors": [
{
"first": "Abhijit",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "Kuntal",
"middle": [],
"last": "Dey",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "377--387",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhijit Mishra, Kuntal Dey, and Pushpak Bhat- tacharyya. 2017a. Learning cognitive features from gaze data for sentiment and sarcasm classification using convolutional neural network. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), volume 1, pages 377-387.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Leveraging cognitive features for sentiment analysis",
"authors": [
{
"first": "Abhijit",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "Diptesh",
"middle": [],
"last": "Kanojia",
"suffix": ""
},
{
"first": "Seema",
"middle": [],
"last": "Nagar",
"suffix": ""
},
{
"first": "Kuntal",
"middle": [],
"last": "Dey",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "156--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhijit Mishra, Diptesh Kanojia, Seema Nagar, Kuntal Dey, and Pushpak Bhattacharyya. 2017b. Leverag- ing cognitive features for sentiment analysis. In Pro- ceedings of the 20th SIGNLL Conference on Com- putational Natural Language Learning (CoNLL), pages 156-166.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Cognition-cognizant sentiment analysis with multitask subjectivity summarization based on annotators' gaze behavior",
"authors": [
{
"first": "Abhijit",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "Srikanth",
"middle": [],
"last": "Tamilselvam",
"suffix": ""
},
{
"first": "Riddhiman",
"middle": [],
"last": "Dasgupta",
"suffix": ""
},
{
"first": "Seema",
"middle": [],
"last": "Nagar",
"suffix": ""
},
{
"first": "Kuntal",
"middle": [],
"last": "Dey",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhijit Mishra, Srikanth Tamilselvam, Riddhiman Dasgupta, Seema Nagar, and Kuntal Dey. 2018. Cognition-cognizant sentiment analysis with mul- titask subjectivity summarization based on annota- tors' gaze behavior. In Proceedings of the Thirty- Second AAAI Conference on Artificial Intelligence (AAAI).",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Unsupervised syntactic chunking with acoustic cues: computational models for prosodic bootstrapping",
"authors": [
{
"first": "John",
"middle": [
"K"
],
"last": "Pate",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2nd Workshop on Cognitive Modeling and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "20--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John K Pate and Sharon Goldwater. 2011. Unsuper- vised syntactic chunking with acoustic cues: compu- tational models for prosodic bootstrapping. In Pro- ceedings of the 2nd Workshop on Cognitive Model- ing and Computational Linguistics, pages 20-29.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Unsupervised dependency parsing with acoustic cues",
"authors": [
{
"first": "John",
"middle": [
"K"
],
"last": "Pate",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2013,
"venue": "Transactions of the Association for Computational Linguistics (TACL)",
"volume": "1",
"issue": "",
"pages": "63--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John K Pate and Sharon Goldwater. 2013. Unsu- pervised dependency parsing with acoustic cues. Transactions of the Association for Computational Linguistics (TACL), 1:63-74.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 confer- ence on empirical methods in natural language pro- cessing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Keystroke dynamics as signal for shallow syntactic parsing",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 25th International Conference on Computational Linguistics (COLING)",
"volume": "",
"issue": "",
"pages": "609--618",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara Plank. 2016. Keystroke dynamics as signal for shallow syntactic parsing. In Proceedings of the 25th International Conference on Computational Linguistics (COLING), pages 609-618.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "412--418",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara Plank, Yoav Goldberg, and Anders S\u00f8gaard. 2016. Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (ACL), pages 412-418.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Evaluation of human emotion from eye motions",
"authors": [
{
"first": "Vidas",
"middle": [],
"last": "Raudonis",
"suffix": ""
},
{
"first": "Gintaras",
"middle": [],
"last": "Dervinis",
"suffix": ""
},
{
"first": "Andrius",
"middle": [],
"last": "Vilkauskas",
"suffix": ""
},
{
"first": "Agne",
"middle": [],
"last": "Paulauskaite-Taraseviciene",
"suffix": ""
},
{
"first": "Gintare",
"middle": [],
"last": "Kersulyte-Raudone",
"suffix": ""
}
],
"year": 2013,
"venue": "Evaluation",
"volume": "4",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vidas Raudonis, Gintaras Dervinis, Andrius Vilka- uskas, Agne Paulauskaite-Taraseviciene, and Gintare Kersulyte-Raudone. 2013. Evaluation of human emotion from eye motions. Evaluation, 4(8).",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "On-line comprehension processes and eye movements in reading",
"authors": [
{
"first": "Keith",
"middle": [],
"last": "Rayner",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"A"
],
"last": "Duffy",
"suffix": ""
}
],
"year": 1988,
"venue": "Reading research: Advances in theory and practice",
"volume": "",
"issue": "",
"pages": "13--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keith Rayner and Susan A. Duffy. 1988. On-line com- prehension processes and eye movements in reading. In Reading research: Advances in theory and prac- tice, pages 13-66, New York, NY, USA. Academic Press.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Semi-supervised multitask learning for sequence labeling",
"authors": [
{
"first": "Marek",
"middle": [],
"last": "Rei",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "1",
"issue": "",
"pages": "2121--2130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marek Rei. 2017. Semi-supervised multitask learn- ing for sequence labeling. In Proceedings of the 55th Annual Meeting of the Association for Compu- tational Linguistics (ACL), volume 1, pages 2121- 2130.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Zero-shot sequence labeling: Transferring knowledge from sentences to tokens",
"authors": [
{
"first": "Marek",
"middle": [],
"last": "Rei",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2018)",
"volume": "",
"issue": "",
"pages": "293--302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marek Rei and Anders S\u00f8gaard. 2018. Zero-shot se- quence labeling: Transferring knowledge from sen- tences to tokens. Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2018), pages 293- 302.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Compositional sequence labeling models for error detection in learner writing",
"authors": [
{
"first": "Marek",
"middle": [],
"last": "Rei",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "1181--1191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marek Rei and Helen Yannakoudakis. 2016. Composi- tional sequence labeling models for error detection in learner writing. Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (ACL), pages 1181-1191.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Auxiliary objectives for neural error detection models",
"authors": [
{
"first": "Marek",
"middle": [],
"last": "Rei",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "33--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marek Rei and Helen Yannakoudakis. 2017. Auxiliary objectives for neural error detection models. In Pro- ceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 33-43.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Using gaze data to predict multiword expressions",
"authors": [
{
"first": "Omid",
"middle": [],
"last": "Rohanian",
"suffix": ""
},
{
"first": "Shiva",
"middle": [],
"last": "Taslimipoor",
"suffix": ""
},
{
"first": "Victoria",
"middle": [],
"last": "Yaneva",
"suffix": ""
},
{
"first": "Le",
"middle": [
"An"
],
"last": "Ha",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing (RANLP)",
"volume": "",
"issue": "",
"pages": "601--609",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omid Rohanian, Shiva Taslimipoor, Victoria Yaneva, and Le An Ha. 2017. Using gaze data to predict multiword expressions. In Proceedings of the In- ternational Conference Recent Advances in Natural Language Processing (RANLP), pages 601-609.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "SemEval-2015 Task 10: Sentiment analysis in Twitter",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th international workshop on semantic evaluation",
"volume": "",
"issue": "",
"pages": "451--463",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Rosenthal, Preslav Nakov, Svetlana Kiritchenko, Saif Mohammad, Alan Ritter, and Veselin Stoyanov. 2015. Semeval-2015 Task 10: Sentiment analysis in Twitter. In Proceedings of the 9th international workshop on semantic evaluation (SemEval 2015), pages 451-463.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Can relevance be inferred from eye movements in information retrieval",
"authors": [
{
"first": "Jarkko",
"middle": [],
"last": "Saloj\u00e4rvi",
"suffix": ""
},
{
"first": "Ilpo",
"middle": [],
"last": "Kojo",
"suffix": ""
},
{
"first": "Jaana",
"middle": [],
"last": "Simola",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Kaski",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of WSOM",
"volume": "3",
"issue": "",
"pages": "261--266",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jarkko Saloj\u00e4rvi, Ilpo Kojo, Jaana Simola, and Samuel Kaski. 2003. Can relevance be inferred from eye movements in information retrieval. In Proceedings of WSOM, volume 3, pages 261-266.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Low-cost gaze interaction: ready to deliver the promises",
"authors": [
{
"first": "Javier",
"middle": [],
"last": "San Agustin",
"suffix": ""
},
{
"first": "Henrik",
"middle": [],
"last": "Skovsgaard",
"suffix": ""
},
{
"first": "John",
"middle": [
"Paulin"
],
"last": "Hansen",
"suffix": ""
},
{
"first": "Dan",
"middle": [
"Witzner"
],
"last": "Hansen",
"suffix": ""
}
],
"year": 2009,
"venue": "CHI'09 Extended Abstracts on Human Factors in Computing Systems",
"volume": "",
"issue": "",
"pages": "4453--4458",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Javier San Agustin, Henrik Skovsgaard, John Paulin Hansen, and Dan Witzner Hansen. 2009. Low-cost gaze interaction: ready to deliver the promises. In CHI'09 Extended Abstracts on Human Factors in Computing Systems, pages 4453-4458. ACM.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "A survey on hate speech detection using natural language processing",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Schmidt",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language pro- cessing. In Proceedings of the Fifth International Workshop on Natural Language Processing for So- cial Media, pages 1-10.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631-1642.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Deep multi-task learning with low level tasks supervised at lower layers",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "231--235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics, volume 2, pages 231-235.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Adaptively learning the crowd kernel",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Tamuz",
"suffix": ""
},
{
"first": "Ce",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "Belongie",
"suffix": ""
},
{
"first": "Ohad",
"middle": [],
"last": "Shamir",
"suffix": ""
},
{
"first": "Adam Tauman",
"middle": [],
"last": "Kalai",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 28th International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "673--680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Tamuz, Ce Liu, Serge Belongie, Ohad Shamir, and Adam Tauman Kalai. 2011. Adaptively learning the crowd kernel. In Proceedings of the 28th Inter- national Conference on Machine Learning (ICML), pages 673-680.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Learning sentence representation with guidance of human attention",
"authors": [
{
"first": "Shaonan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "4137--4143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shaonan Wang, Jiajun Zhang, and Chengqing Zong. 2017. Learning sentence representation with guid- ance of human attention. Proceedings of the Twenty- Sixth International Joint Conference on Artificial In- telligence, pages 4137-4143.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Are you a racist or am I seeing things? Annotator influence on hate speech detection on Twitter",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the first workshop on NLP and computational social science",
"volume": "",
"issue": "",
"pages": "138--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem. 2016. Are you a racist or am i seeing things? Annotator influence on hate speech detec- tion on Twitter. In Proceedings of the first workshop on NLP and computational social science, pages 138-142.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Hateful symbols or hateful people? Predictive features for hate speech detection on Twitter",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the NAACL student research workshop",
"volume": "",
"issue": "",
"pages": "88--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem and Dirk Hovy. 2016. Hateful sym- bols or hateful people? Predictive features for hate speech detection on Twitter. In Proceedings of the NAACL student research workshop, pages 88-93.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "The human kernel",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Dann",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Lucas",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Xing",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems (NIPS)",
"volume": "",
"issue": "",
"pages": "2854--2862",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Wilson, Christoph Dann, Chris Lucas, and Eric Xing. 2015. The human kernel. In Advances in neural information processing systems (NIPS), pages 2854-2862.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Sentiment analysis in Twitter",
"authors": [
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Zornitsa",
"middle": [],
"last": "Kozareva",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013)",
"volume": "",
"issue": "",
"pages": "312--320",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theresa Wilson, Zornitsa Kozareva, Preslav Nakov, Sara Rosenthal, Veselin Stoyanov, and Alan Ritter. 2013. Sentiment analysis in Twitter. In Proceedings of the Seventh International Workshop on Semantic, pages 312-320.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "A new dataset and method for automatically grading ESOL texts",
"authors": [
{
"first": "Helen",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Medlock",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "1",
"issue": "",
"pages": "180--189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading esol texts. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics (ACL), volume 1, pages 180-189. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Transfer learning for low-resource neural machine translation",
"authors": [
{
"first": "Barret",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1568--1575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1568-1575.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"num": null,
"type_str": "table",
"text": "Overview of the tasks and datasets used.",
"content": "<table/>",
"html": null
},
"TABREF2": {
"num": null,
"type_str": "table",
"text": "for training and development. For test, we use a same-domain test set, the SemEval-2013 Twitter test set (SEMEVAL TWITTER POS | NEG), and an out-of-domain test set, the SemEval-2013 SMS test set (SEMEVAL SMS POS | NEG).",
"content": "<table/>",
"html": null
},
"TABREF3": {
"num": null,
"type_str": "table",
"text": "SEMEVAL SMS NEG 43.55 45.41 43.77 45.82 48.65 45.24 47.15 46.98 45.77 SEMEVAL SMS POS 65.79 50.81 57.08 65.92 51.04 57.45 65.46 52.95 58.50 SEMEVAL TWITTER NEG 57.39 26.87 35.70 62.50 28.66 37.78 60.52 30.67 40.23 SEMEVAL TWITTER POS 77.96 53.88 63.63 79.66 54.66 64.78 78.77 55.35 64.96 FCE 79.01 89.33 83.84 79.18 89.26 83.89 79.03 90.28 84.28 WASEEM (2016) 76.42 62.07 68.29 77.20 61.71 68.54 77.20 63.06 69.30 WASEEM AND HOVY (2016) 76.23 72.23 74.16 76.33 74.70 75.48 76.95 74.43 75.61",
"content": "<table><tr><td/><td/><td>BL</td><td/><td colspan=\"3\">BNC INV FREQ</td><td colspan=\"3\">MEAN FIX DUR</td></tr><tr><td>TASK</td><td>P</td><td>R</td><td>F 1</td><td>P</td><td>R</td><td>F 1</td><td>P</td><td>R</td><td>F 1</td></tr><tr><td>MEAN</td><td colspan=\"9\">68.05 57.23 60.92 69.52 58.38 61.88 69.30 59.10 62.67</td></tr><tr><td/><td/><td/><td/><td colspan=\"6\">Models In our experiments, we compare three</td></tr><tr><td/><td/><td/><td/><td colspan=\"6\">models: (a) a baseline model with automatically</td></tr><tr><td/><td/><td/><td/><td colspan=\"6\">learned attention, (b) our model with an attention</td></tr></table>",
"html": null
},
"TABREF4": {
"num": null,
"type_str": "table",
"text": "Sentence classification results. P(recision), R(ecall) and F 1 . Averages over 10 random seeds. The best average F 1 score per task is shown in bold.",
"content": "<table/>",
"html": null
},
"TABREF6": {
"num": null,
"type_str": "table",
"text": "One sentence marked as containing sexism from the Waseem and Hovy (2016) development set. Using the trained baseline (BL) and gaze model (MFD) for three tasks: error detection, sentiment classification, and hate speech detection. Words with more attention than the sentence average are boldfaced.",
"content": "<table/>",
"html": null
}
}
}
}