{
"paper_id": "R19-1024",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:01:17.216486Z"
},
"title": "Personality-dependent Neural Text Summarization",
"authors": [
{
"first": "Pablo",
"middle": [],
"last": "Botton Da Costa",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of S\u00e3o Paulo",
"location": {
"settlement": "S\u00e3o Paulo",
"country": "Brazil"
}
},
"email": "pablo.botton.costa@gmail.com"
},
{
"first": "Ivandr\u00e9",
"middle": [],
"last": "Paraboni",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of S\u00e3o Paulo",
"location": {
"settlement": "S\u00e3o Paulo",
"country": "Brazil"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In Natural Language Generation (NLG) systems, personalization strategies, i.e., the use of information about a target author to generate text that (more) closely resembles human-produced language, have long been applied to improve results. The present work addresses one such strategy, namely, the use of Big Five personality information about the target author, applied to the case of abstractive text summarization using neural sequence-to-sequence models. Initial results suggest that having access to personality information does lead to more accurate (or human-like) text summaries, and paves the way for more robust systems of this kind.",
"pdf_parse": {
"paper_id": "R19-1024",
"_pdf_hash": "",
"abstract": [
{
"text": "In Natural Language Generation (NLG) systems, personalization strategies, i.e., the use of information about a target author to generate text that (more) closely resembles human-produced language, have long been applied to improve results. The present work addresses one such strategy, namely, the use of Big Five personality information about the target author, applied to the case of abstractive text summarization using neural sequence-to-sequence models. Initial results suggest that having access to personality information does lead to more accurate (or human-like) text summaries, and paves the way for more robust systems of this kind.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Computational approaches to text summarization may be divided into two general categories: abstractive and extractive summarization. Extractive summarization consists of selecting relevant pieces of text to compose a subset of the original sentences, whereas the more complex abstractive summarization involves interpreting the input text and rewriting its main ideas in a new, shorter version. Both strategies may be modelled as a machine learning problem by making use of unsupervised (Ren et al., 2017), graph-based and neural methods (Wan and Yang, 2006; Cao et al., 2015), among others. The present work focuses on neural abstractive summarization, addressing the issue of personalized text generation in systems of this kind.",
"cite_spans": [
{
"start": 487,
"end": 505,
"text": "(Ren et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 539,
"end": 559,
"text": "(Wan and Yang, 2006;",
"ref_id": "BIBREF27"
},
{
"start": 560,
"end": 577,
"text": "Cao et al., 2015)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Text-generating systems may in principle always produce the same fixed output from a given input representation. In order to generate more natural (or 'human-like') output, however, systems of this kind will often implement a range of stylistic variation strategies. Among these, the use of computational models of human personality has emerged as a popular alternative, and it is commonly associated with the rise of the Big Five model of human personality (Goldberg, 1990) in many related fields.",
"cite_spans": [
{
"start": 458,
"end": 473,
"text": "(Goldberg, 1990",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Big Five model is based on the assumption that differences in personality are reflected in natural language use, and comprises five fundamental dimensions of personality: Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness to experience. Given its linguistic motivation, the Big Five personality traits have been addressed in a wide range of studies in natural language understanding and generation alike. Thus, for instance, the work in Mairesse and Walker (2007) introduces PERSONAGE, a fully-functional NLG system that produces restaurant recommendations. PERSONAGE and many of its subsequent extensions support multiple stylistic variations that are controlled by personality information provided as an input.",
"cite_spans": [
{
"start": 469,
"end": 495,
"text": "Mairesse and Walker (2007)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The use of personality information for text summarization, by contrast, seems to be far less common, and we are not aware of any existing work that addresses the issue of personality-dependent neural text summarization. Based on these observations, this paper introduces a personality-dependent text summarization model that makes use of a corpus of source and summary text pairs labelled with personality information about their authors. In doing so, our goal is to use personality information to generate summaries that more closely resemble those produced by humans.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is structured as follows. Section 2 discusses the issues of sequence-to-sequence learning and attention mechanisms for text summarization. These are the basis of our current work, described in Section 3. Section 4 reports two experiments comparing the proposed models against a number of alternatives, and Section 5 presents final remarks and future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Due to the capacity of neural language generation models to learn and automatically induce representations from text (Rush et al., 2015; Nallapati et al., 2016; Mikolov et al., 2013), neural abstractive summarization has attracted a great deal of attention in the field. Architectures of this kind may not only produce high-quality summaries, but may also embed external information easily (See et al., 2017). Accordingly, these models have achieved significant results, at least in terms of intrinsic evaluation measures such as BLEU (Papineni et al., 2002) or ROUGE (Lin and Hovy, 2003), when compared to extractive approaches (Celikyilmaz et al., 2018).",
"cite_spans": [
{
"start": 117,
"end": 136,
"text": "(Rush et al., 2015;",
"ref_id": "BIBREF21"
},
{
"start": 137,
"end": 160,
"text": "Nallapati et al., 2016;",
"ref_id": "BIBREF15"
},
{
"start": 161,
"end": 182,
"text": "Mikolov et al., 2013)",
"ref_id": "BIBREF14"
},
{
"start": 391,
"end": 409,
"text": "(See et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 537,
"end": 560,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF16"
},
{
"start": 564,
"end": 590,
"text": "ROUGE (Lin and Hovy, 2003)",
"ref_id": null
},
{
"start": 633,
"end": 659,
"text": "(Celikyilmaz et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Neural text summarization models are often grounded on a particular kind of neural network, the sequence-to-sequence architecture (Sutskever et al., 2014a; Cho et al., 2014). In models of this kind, the input text is modelled as a sequence of representations carrying contextual information from end to end in the generation process. More formally, a sequence-to-sequence model is defined in Goodfellow et al. (2016) as a neural network that directly models the conditional probability p(y|x) of mapping a source sequence x_1, ..., x_n to a target sequence y_1, ..., y_m. A basic form of sequence-to-sequence model consists of two main components: (i) an encoder that computes a representation s for each source sequence; and (ii) a decoder that generates one target token at a time, decomposing the conditional probability as follows:",
"cite_spans": [
{
"start": 130,
"end": 155,
"text": "(Sutskever et al., 2014a;",
"ref_id": "BIBREF25"
},
{
"start": 156,
"end": 173,
"text": "Cho et al., 2014)",
"ref_id": "BIBREF3"
},
{
"start": 393,
"end": 417,
"text": "Goodfellow et al. (2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence-to-sequence Learning",
"sec_num": "2.1"
},
{
"text": "p(y|x) = \u220f_{j=1}^{m} p(y_j | y_{<j}, s)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence-to-sequence Learning",
"sec_num": "2.1"
},
{
"text": "A common strategy for learning sequence representations is to make use of Recurrent Neural Networks (RNN) (Rumelhart et al., 1986). According to Hochreiter and Schmidhuber (1997), an RNN generalizes the concept of a feed-forward neural network to sequences. Given a temporal sequence of inputs (x_1, ..., x_t), the standard RNN computes a sequence of outputs (y_1, ..., y_t) using the following equations (Sundermeyer et al., 2012):",
"cite_spans": [
{
"start": 108,
"end": 132,
"text": "(Rumelhart et al., 1986)",
"ref_id": "BIBREF20"
},
{
"start": 148,
"end": 181,
"text": "Hochreiter and Schmidhuber (1997)",
"ref_id": "BIBREF9"
},
{
"start": 431,
"end": 457,
"text": "(Sundermeyer et al., 2012)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence-to-sequence Learning",
"sec_num": "2.1"
},
{
"text": "h_t = sigmoid(W_{hx} x_t + W_{hh} h_{t-1}); y_t = W_{yh} h_t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence-to-sequence Learning",
"sec_num": "2.1"
},
{
"text": "A simple strategy for general sequence learning is to map the input sequence to a fixed-sized vector using an RNN, and then map the vector to the target sequence by using a second RNN. This may in principle be successful, but long-term dependencies may make the training of the two networks difficult (Bengio et al., 1994; Hochreiter, 1998). As an alternative, Long Short-Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997), and their simplification known as the Gated Recurrent Unit (GRU) (Cho et al., 2014), are known to learn problems with long-range temporal dependencies, and may therefore succeed in this setting.",
"cite_spans": [
{
"start": 300,
"end": 321,
"text": "(Bengio et al., 1994;",
"ref_id": "BIBREF0"
},
{
"start": 322,
"end": 339,
"text": "Hochreiter, 1998)",
"ref_id": "BIBREF8"
},
{
"start": 400,
"end": 434,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF9"
},
{
"start": 498,
"end": 516,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence-to-sequence Learning",
"sec_num": "2.1"
},
{
"text": "The goal of an LSTM/GRU network is to estimate the conditional probability p(y|x), where (x_1, ..., x_t) is an input sequence and (y_1, ..., y_{t'}) is its corresponding output sequence, whose length t' may differ from t (Cho et al., 2014). The conditional probability is computed by first obtaining the fixed-dimensional representation v of the input sequence (x_1, ..., x_t), given by the last hidden state of the network, and then computing the probability of (y_1, ..., y_{t'}) with a standard LSTM/GRU formulation in which the initial hidden state is set to the representation v of (x_1, ..., x_t). Finally, each p(y_j | v, y_1, ..., y_{j-1}) distribution is represented with a softmax over all the words in the vocabulary.",
"cite_spans": [
{
"start": 219,
"end": 237,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence-to-sequence Learning",
"sec_num": "2.1"
},
{
"text": "GRUs are distinct from LSTMs in that a GRU architecture contains only a single unit to control when the current state 'forgets' a piece of information (Goodfellow et al., 2016). Due to this simplification, GRUs can directly access all hidden states without bearing the price of a memory state (Cho et al., 2014).",
"cite_spans": [
{
"start": 152,
"end": 177,
"text": "(Goodfellow et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 295,
"end": 313,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence-to-sequence Learning",
"sec_num": "2.1"
},
{
"text": "GRU architectures model sequences as causal relationships through the input sequence by examining left-to-right relationships only (Goodfellow et al., 2016). However, many sequence classification problems may require predicting an output that depends (bidirectionally) on the entire input sequence, that is, from left to right and also from right to left. This is the case, for instance, of a large number of common NLP applications that need to take contextual dependencies into account when modelling phrases and sentences.",
"cite_spans": [
{
"start": 131,
"end": 156,
"text": "(Goodfellow et al., 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence-to-sequence Learning",
"sec_num": "2.1"
},
{
"text": "Bidirectional GRUs (Bi-GRUs) are applied to a wide range of tasks to scan and learn both left-to-right and right-to-left dependencies, which can capture complementary types of information from their inputs. The left and right hidden representations produced by the GRUs can be linearly combined, with weight \u03b8, to form a final representation (Goodfellow et al., 2016):",
"cite_spans": [
{
"start": 323,
"end": 348,
"text": "(Goodfellow et al., 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence-to-sequence Learning",
"sec_num": "2.1"
},
{
"text": "h_t = h_t^{\u2190} \u03b8 h_t^{\u2192}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence-to-sequence Learning",
"sec_num": "2.1"
},
{
"text": "Sequence-to-sequence architectures have been successfully applied to a wide range of tasks, including machine translation and natural text generation (Cho et al., 2014; Sutskever et al., 2014a), and, accordingly, have been subject to a great deal of extensions and improvements. Among these, the use of more context-aware sequence generation methods (Cho et al., 2014) and the use of attention mechanisms to score and select the words that best describe the intended output are discussed below.",
"cite_spans": [
{
"start": 150,
"end": 168,
"text": "(Cho et al., 2014;",
"ref_id": "BIBREF3"
},
{
"start": 169,
"end": 193,
"text": "Sutskever et al., 2014a)",
"ref_id": "BIBREF25"
},
{
"start": 349,
"end": 367,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Mechanism",
"sec_num": "2.2"
},
{
"text": "In natural language generation, attention models as introduced in Cho et al. (2014) and Sutskever et al. (2014a) are intended to generalize the text generation task so as to handle sequence pairs with different sizes of inputs and outputs. This approach, subsequently called sequence-to-sequence with attention mechanism, applies a mapping strategy from a variable-length sentence to another variable-length sentence. This mapping strategy is a scoring system over the contextual information from the input sequence (Cho et al., 2014), producing a set of attention weights.",
"cite_spans": [
{
"start": 85,
"end": 109,
"text": "Sutskever et al. (2014a)",
"ref_id": "BIBREF25"
},
{
"start": 512,
"end": 530,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Mechanism",
"sec_num": "2.2"
},
{
"text": "Attention-based models (Sutskever et al., 2014b; Luong et al., 2015) are sequence-to-sequence networks that employ an encoder to represent the input utterance and an attention-based decoder that generates the response, one token at a time. More specifically, neural text summarization can be viewed as a sequence-to-sequence problem (Sutskever et al., 2014a), where a sequence of input language tokens x = x_1, ..., x_m describing the input text is mapped onto a sequence of output language tokens y_1, ..., y_n describing the target text output. The encoder is a GRU unit (Cho et al., 2014) that converts x_1, ..., x_m into a sequence of context-sensitive embeddings b_1, ..., b_m. A general-attention decoder generates output tokens one at a time. At each time step j, the decoder generates y_j based on the current hidden state s_j, and then updates the hidden state s_{j+1} based on s_j and y_j. Formally, the attention decoder is defined by the original equations proposed in Cho et al. (2014):",
"cite_spans": [
{
"start": 23,
"end": 48,
"text": "(Sutskever et al., 2014b;",
"ref_id": "BIBREF26"
},
{
"start": 49,
"end": 68,
"text": "Luong et al., 2015)",
"ref_id": "BIBREF12"
},
{
"start": 331,
"end": 356,
"text": "(Sutskever et al., 2014a)",
"ref_id": "BIBREF25"
},
{
"start": 575,
"end": 593,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF3"
},
{
"start": 979,
"end": 996,
"text": "Cho et al. (2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Mechanism",
"sec_num": "2.2"
},
{
"text": "s_1 = tanh(W^{(s)} b_m); p(y_j = w | x, y_{1:j-1}) \u221d exp(U [s_j, c_j]); s_{j+1} = GRU([\u03c6^{(out)}(y_j), c_j], s_j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Mechanism",
"sec_num": "2.2"
},
{
"text": "where i \u2208 {1, ..., m}, j \u2208 {1, ..., m}, and the context vector c_j is the result of general attention (Luong et al., 2015). The matrices W^{(s)}, W^{(\u03b1)}, U and the embedding function \u03c6^{(out)} are decoder parameters.",
"cite_spans": [
{
"start": 99,
"end": 119,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF12"
},
{
"start": 145,
"end": 148,
"text": "(\u03b1)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Mechanism",
"sec_num": "2.2"
},
{
"text": "Our basic model is generally inspired by the architecture in Cho et al. (2014), with an added personality embedding layer. As in many other sequence-to-sequence models with attention, our model takes a sentence as input, and produces as output a set of words that summarizes the given input. The actual rendering of this output as structured text is presently not addressed.",
"cite_spans": [
{
"start": 63,
"end": 80,
"text": "Cho et al. (2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Current Work",
"sec_num": "3"
},
{
"text": "The proposed architecture is illustrated in Figure 1, which is adapted from Cho et al. (2014), and further discussed below.",
"cite_spans": [],
"ref_spans": [
{
"start": 44,
"end": 50,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Current Work",
"sec_num": "3"
},
{
"text": "In this example, B B B B represent the input sequence for the target sequence Z X, and C is the personality embedding representation. The five main components of the architecture are as follows. The input bidirectional GRU (A) produces a word-to-personality compositional representation of each word. This serves two main purposes: combining the composite sequences of words and personality information, and combining attention weights over sequences in our decoder model. The word embeddings layer (B) produces a typical word-level representation of each input word; in the present work, we make use of both random and pre-trained word embeddings. Word embeddings are complemented with induced personality embeddings (C) for each target author. The role of this layer is twofold. First, it is intended to learn the probability P(Y|X, personality), that is, the personality representation of each author for each word in the vocabulary. Second, this layer is also intended to decide which profile value should be selected (from the corpus gold standard annotation) in order to generate a summary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Current Work",
"sec_num": "3"
},
{
"text": "The attention mechanism (D) attempts to learn a general representation from the most important parts of the input text at each time step. To this end, the experiments described in the next section will consider two score function alternatives: general attention and dot product.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Current Work",
"sec_num": "3"
},
{
"text": "Finally, the output bidirectional GRU (E) combines the attention weight representations, and produces a final encoding for each word. A loss function describe the overall generation probability, and it is intended to optimize the above parameters. This function is described as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Current Work",
"sec_num": "3"
},
{
"text": "l(\u03b8, D^{(c)}, D^{(pr)}) = \u2212 \u2211_{(X,Y) \u2208 D^{(c)} \u222a D^{(pr)}} log P(Y | X, <k_i, v_i>) \u2212 \u2211_{(X,Y) \u2208 D^{(c)}} log P_{fr}(Y | X)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Current Work",
"sec_num": "3"
},
{
"text": "The first term of the function is the negative log likelihood of observing D^{(c)}, and the second term that of D^{(pr)}. D^{(pr)} consists of pairs in which a summary is related to a profile key and its response matches the summary, whereas D^{(c)} contains only general text-summary pairs. <k_i, v_i> is the personality representation. The decoder P_{fr} does not have shared parameters. A simple epoch-based training strategy using gradient descent is performed.",
"cite_spans": [
{
"start": 107,
"end": 111,
"text": "(pr)",
"ref_id": null
},
{
"start": 116,
"end": 120,
"text": "(pr)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Current Work",
"sec_num": "3"
},
{
"text": "We envisaged two experiments on neural text summarization based on the model described in the previous section. The first experiment aims to assess whether a general or a dot product attention mechanism is more suitable to the task. The second experiment focuses on our main research question, that is, on whether the use of personality information does improve summarization results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "As in many (or most) sequence-to-sequence approaches to text generation, our work focuses on the selection of text segments to compose an abstract summary, but it does not address the actual rendering of the final output text, which would normally require additional post-processing. Each of the two experiments is discussed in turn in the following sections, but first we describe the dataset taken as their basis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "We make use of the text and caption portions of the b5 corpus in Ramos et al. (2018), called b5-text and b5-caption. The corpus conveys 1,510 multi- and single-sentence image description pairs, all of which are labelled with Big Five personality information about their authors. Table 1 summarizes the corpus descriptive statistics.",
"cite_spans": [
{
"start": 65,
"end": 84,
"text": "Ramos et al. (2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 273,
"end": 280,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "The corpus was elicited from a crowdsourcing task in which participants were requested to provide both long and short descriptions for 10 stimulus images taken from GAPED, a database of images classified by valence and normative significance, designed to elicit various reactions (Dan-Glauser and Scherer, 2011). From a set of 10 selected images with valence degrees in the 3 to 54 range, participants were first instructed to describe everything that they could see in the scene (e.g., as if helping a visually-impaired person to understand the picture) and, subsequently, were requested to summarize it in a single sentence (similar to a picture caption). An example of a stimulus image is illustrated in Figure 2. Note, however, that in the present work we only consider the text elicited from these images, and not the images themselves.",
"cite_spans": [],
"ref_spans": [
{
"start": 706,
"end": 714,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "Based on scenes as in Figure 2, the following is a possible long description (translated from the original Portuguese text) of the kind found in the corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 22,
"end": 30,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "'A black baby, about one year old. He's in a cradle. He is dressed in a dirty blue blouse, on a pink sheet, without a pillow. A blue blanket is next to the baby. It seems that he has not taken a shower for a while.'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "A single-sentence summary for the same scene (and which would have been written by the same participant in the data collection task) may be represented as the following example.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "'A sad-looking baby.'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "In the experiments described in the next sections, texts were pre-processed by removing punctuation and numerical symbols. In addition, the first data split performed for the purpose of cross-validation is shown in Table 2.",
"cite_spans": [],
"ref_spans": [
{
"start": 223,
"end": 230,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "In Encoder-Decoder Recurrent Neural Networks, the global attention mechanism may be seen as a model-inferred context vector computed as a weighted average of all inputs by making use of a score function. The choice of score function may have a great impact on the overall performance of the model, and for that reason, in what follows, we examine two alternatives: using the dot product over the context vectors of the source, and using a learned representation over the context states. To this end, our first experiment evaluates our basic summarization model (cf. the previous section) in two versions, namely, using general and dot product attention mechanisms. Both of these models, hereafter called sDot and sGen, make use of randomized encoder/decoder word embeddings of size 300, and two encoder/decoder hidden units of size 600.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 1: Basic Neural Summarization with Attention Mechanism",
"sec_num": "4.2"
},
{
"text": "Both models were trained using Adam optimization with mini-batches of size 128. The initial learning rate was set to 0.0001, with gradient clipping based on the norm of the values. We also applied a different learning rate for the decoder module, set to five times the learning rate of the encoder. In order to reduce over-fitting, drop-out regularization of 0.5 was applied to both embedding layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 1: Basic Neural Summarization with Attention Mechanism",
"sec_num": "4.2"
},
{
"text": "Model optimization was performed by using gradient descent with masked loss, and by applying early stopping when the BLEU scores over the evaluation dataset did not increase for 20 epochs. Except for the embedding layer, all other parameters were initialized by sampling from a uniform distribution U(-sqrt(3/n), sqrt(3/n)), where n is the parameter dimension. We performed 10-fold cross-validation over our corpus data, and we compared the output summaries produced by both models using BLEU. Results are presented in Table 3.",
"cite_spans": [],
"ref_spans": [
{
"start": 234,
"end": 241,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment 1: Basic Neural Summarization with Attention Mechanism",
"sec_num": "4.2"
},
{
"text": "Table 3: 10-fold cross validation BLEU scores for text summarization using dot product (sDot) and general (sGen) attention. The best result is highlighted. Model / BLEU: sGen 13.88; sDot 13.63.",
"cite_spans": [],
"ref_spans": [
{
"start": 325,
"end": 332,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment 1: Basic Neural Summarization with Attention Mechanism",
"sec_num": "4.2"
},
{
"text": "From these results, we notice that the attention mechanism based on the general function in sGen outperforms the use of dot function in sDot. Although the difference is small, the use of a generalized network to learn how to align the contextual information is superior to simply concatenating contextual information obtained from the global weights. Based on these results, the general attention strategy will be our choice for the next experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 1: Basic Neural Summarization with Attention Mechanism",
"sec_num": "4.2"
},
{
"text": "Our second and main experiment assesses the use of personality information in text summarization. To this end, two models are considered: the full personality-aware model presented in Section 3, hereafter called sPers, and a simplified baseline version of the same architecture without access to personality information, hereafter called sBase. In doing so, our goal is to show that summaries produced by sPers resemble the human-made texts (as seen in the corpus) more closely than those produced by sBase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 2: Personality-dependent Summarization",
"sec_num": "4.3"
},
{
"text": "Both sPers and sBase make use of pre-trained skip-gram word embeddings of size 300 for the Brazilian Portuguese language, taken from Hartmann et al. (2017). Both models also make use of randomized encoder/decoder word embeddings of size 300, and two encoder/decoder hidden units of size 600 with general attention. Table 4 : 10-fold cross validation BLEU scores for text summarization with (sPers) and without (sBase) personality information. The best result is highlighted.",
"cite_spans": [
{
"start": 124,
"end": 146,
"text": "Hartmann et al. (2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 306,
"end": 313,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment 2: Personality-dependent Summarization",
"sec_num": "4.3"
},
{
"text": "Model / BLEU: sBase 14.21; sPers 14.58",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 2: Personality-dependent Summarization",
"sec_num": "4.3"
},
{
"text": "All optimization, training and other basic procedures are the same as in the previous experiment. Results are presented in Table 4.",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 130,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment 2: Personality-dependent Summarization",
"sec_num": "4.3"
},
{
"text": "We notice that personality-dependent summarization as provided by sPers outperforms standard summarization (i.e., with no access to personality information) as provided by sBase. Although the difference is once again small (which may be explained by the limited size of our dataset), this outcome offers support to our main research hypothesis by illustrating that the use of author personality information may improve summarization accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 2: Personality-dependent Summarization",
"sec_num": "4.3"
},
{
"text": "As a means to illustrate the kinds of output that may be produced by our models, Table 5 presents a number of examples taken from the original corpus summaries, alongside the corresponding summaries obtained from the same input by the sBase baseline and the personality-dependent sPers models. For ease of illustration, the examples are informally grouped into three error categories (small, moderate and large) according to the distance between the corpus summaries and their sPers counterparts, and are presented in both original (Portuguese) and translated (English) forms.",
"cite_spans": [],
"ref_spans": [
{
"start": 81,
"end": 88,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Selected Examples",
"sec_num": "4.4"
},
{
"text": "This paper addressed the use of Big Five personality information about the target author to generate personalized summaries in neural sequence-tosequence text summarization. The model -consisting of two bidirectional GRUs, word embeddings and attention mechanism -was evaluated in two versions, namely, with and without an additional personality embedding layer. Initial results suggest that having access to personality information does lead to more accurate (or human-like) text summaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Remarks",
"sec_num": "5"
},
{
"text": "The use of personality information is of course only one among many possible personalization Table 5 : Selected examples taken from the corpus, baseline (sBase) and personality-dependent (sPers) summarization models, grouped by distance (small, moderate or large) between sPers and the expected (corpus) summary in original Portuguese (Pt) and translated English (En).",
"cite_spans": [],
"ref_spans": [
{
"start": 93,
"end": 100,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Final Remarks",
"sec_num": "5"
},
{
"text": "Error Model Summary (Pt) Summary (En) corpus homem na cerca man by fence small sBase homem idoso elderly man sPers homem na cerca man by fence corpus pessoas pedindo ajuda people asking for help moderate sBase pessoas esperando people waiting sPers pessoas aguardam atendimento people waiting for help corpus menino com um balde de terra boy with a bucket full of soil large sBase crianca com balde child with bucket sPers crianca com balde de terra child with bucket full of soil strategies for text summarization. In particular, we notice that the increasing availability of text corpora labelled with author demographics in general (e.g., gender, age, education information etc.) may in principle support a broad range of speakerdependent summarization models. Thus, as future work we intend to extend the current approach along these lines, and provide additional summarization strategies that may represent more significant gains over the standard, fixed-output summarization approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Remarks",
"sec_num": "5"
},
{
"text": "Sentences are assumed to start with a special 'start-ofsentence' token < bos > and end with an 'end-of-sequence' token < eos >.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We are aware that, although popular in machine translation and text generation, BLEU may not be the ideal metrics for the present task(Liu et al., 2011;Song et al., 2013), and that it may not co-relate well with, e.g., human judgments(Reiter and Belz, 2009).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors acknowledge support by FAPESP grant 2016/14223-0 and from the University of S\u00e3o Paulo.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning long-term dependencies with gradient descent is difficult",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Patrice",
"middle": [],
"last": "Simard",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Frasconi",
"suffix": ""
}
],
"year": 1994,
"venue": "IEEE transactions on neural networks",
"volume": "5",
"issue": "2",
"pages": "157--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradi- ent descent is difficult. IEEE transactions on neural networks 5(2):157-166.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning summary prior representation for extractive summarization",
"authors": [
{
"first": "Ziqiang",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Houfeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "829--833",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziqiang Cao, Furu Wei, Sujian Li, Wenjie Li, Ming Zhou, and Houfeng Wang. 2015. Learning sum- mary prior representation for extractive summariza- tion. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). vol- ume 2, pages 829-833.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Deep communicating agents for abstractive summarization",
"authors": [
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bosselut",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1662--1675",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, and Yejin Choi. 2018. Deep communicating agents for abstractive summarization. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long Pa- pers). pages 1662-1675.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "On the properties of neural machine translation: Encoder-decoder approaches",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "103--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder ap- proaches. In Proceedings of SSST-8, Eighth Work- shop on Syntax, Semantics and Structure in Statisti- cal Translation. Association for Computational Lin- guistics, Doha, Qatar, pages 103-111.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The Geneva affective picture database (GAPED): a new 730-picture database focusing on valence and normative significance",
"authors": [
{
"first": "Elise",
"middle": [
"S"
],
"last": "Dan-Glauser",
"suffix": ""
},
{
"first": "Klaus",
"middle": [
"R"
],
"last": "Scherer",
"suffix": ""
}
],
"year": 2011,
"venue": "Behavior Research Methods",
"volume": "43",
"issue": "2",
"pages": "468--477",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elise S. Dan-Glauser and Klaus R. Scherer. 2011. The Geneva affective picture database (GAPED): a new 730-picture database focusing on valence and nor- mative significance. Behavior Research Methods 43(2):468-477.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An alternative description of personality: The Big-Five factor structure",
"authors": [
{
"first": "R",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of Personality and Social Psychology",
"volume": "59",
"issue": "",
"pages": "1216--1229",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lewis R. Goldberg. 1990. An alternative description of personality: The Big-Five factor structure. Jour- nal of Personality and Social Psychology 59:1216- 1229.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Deep Learning",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. MIT Press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Portuguese word embeddings: Evaluating on word analogies and natural language tasks",
"authors": [
{
"first": "Nathan",
"middle": [],
"last": "Hartmann",
"suffix": ""
},
{
"first": "Erick",
"middle": [],
"last": "Fonseca",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Shulby",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Treviso",
"suffix": ""
},
{
"first": "J\u00e9ssica",
"middle": [],
"last": "Rodrigues",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "Alu\u00edsio",
"suffix": ""
}
],
"year": 2017,
"venue": "11th Brazilian Symposium in Information and Human Language Technology -STIL",
"volume": "",
"issue": "",
"pages": "122--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathan Hartmann, Erick Fonseca, Christopher Shulby, Marcos Treviso, J\u00e9ssica Rodrigues, and Sandra Alu\u00edsio. 2017. Portuguese word embeddings: Eval- uating on word analogies and natural language tasks. In 11th Brazilian Symposium in Information and Human Language Technology -STIL. Uberl\u00e2ndia, Brazil, pages 122-131.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The vanishing gradient problem during learning recurrent neural nets and problem solutions",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
}
],
"year": 1998,
"venue": "International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems",
"volume": "6",
"issue": "02",
"pages": "107--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter. 1998. The vanishing gradient prob- lem during learning recurrent neural nets and prob- lem solutions. International Journal of Uncer- tainty, Fuzziness and Knowledge-Based Systems 6(02):107-116.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735-1780.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Automatic evaluation of summaries using n-gram co-occurrence statistics",
"authors": [
{
"first": "Eduard",
"middle": [],
"last": "Chin-Ye Lin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of HLT-NAACL 2003. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "71--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Ye Lin and Eduard Hovy. 2003. Automatic eval- uation of summaries using n-gram co-occurrence statistics. In Proceedings of HLT-NAACL 2003. As- sociation for Computational Linguistics, Edmonton, Canada, pages 71-78.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Better evaluation metrics lead to better machine translation",
"authors": [
{
"first": "Chang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Dahlmeier",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "375--384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chang Liu, Daniel Dahlmeier, and Hwee Tou Ng. 2011. Better evaluation metrics lead to better ma- chine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Pro- cessing. Association for Computational Linguistics, pages 375-384.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1166"
]
},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natu- ral Language Processing. Association for Compu- tational Linguistics, Lisbon, Portugal, pages 1412- 1421. https://doi.org/10.18653/v1/D15-1166.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "PER-SONAGE: Personality generation for dialogue",
"authors": [
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Mairesse",
"suffix": ""
},
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
}
],
"year": 2007,
"venue": "45th Annual Meeting-Association For Computational Linguistics. Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "496--503",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fran\u00e7ois Mairesse and Marilyn Walker. 2007. PER- SONAGE: Personality generation for dialogue. In 45th Annual Meeting-Association For Computa- tional Linguistics. Association for Computational Linguistics (ACL), Sheffield, pages 496-503.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Linguistic regularities in continuous space word representations",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Wen-tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of NAACL-HLT-2013",
"volume": "",
"issue": "",
"pages": "746--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Scott Wen-tau, and Geoffrey Zweig. 2013. Linguistic regularities in continuous space word representations. In Proc. of NAACL-HLT- 2013. Association for Computational Linguistics, Atlanta, USA, pages 746-751.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Abstractive text summarization using sequence-tosequence RNNs and beyond",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Cicero Dos Santos",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Xiang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "280--290",
"other_ids": {
"DOI": [
"10.18653/v1/K16-1028"
]
},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Ab- stractive text summarization using sequence-to- sequence RNNs and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natu- ral Language Learning. Association for Computa- tional Linguistics, Berlin, Germany, pages 280-290. https://doi.org/10.18653/v1/K16-1028.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceddings of ACL-2002. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceddings of ACL-2002. Association for Computational Linguis- tics, Philadelphia, PA, USA, pages 311-318.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Building a corpus for personality-dependent natural language understanding and generation",
"authors": [
{
"first": "Ricelli Moreira Silva",
"middle": [],
"last": "Ramos",
"suffix": ""
},
{
"first": "Georges",
"middle": [],
"last": "Basile Stavracas",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Neto",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Barbosa Claudino",
"suffix": ""
},
{
"first": "Danielle",
"middle": [
"Sampaio"
],
"last": "Silva",
"suffix": ""
},
{
"first": "Ivandr\u00e9",
"middle": [],
"last": "Monteiro",
"suffix": ""
},
{
"first": "Rafael Felipe Sandroni",
"middle": [],
"last": "Paraboni",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dias",
"suffix": ""
}
],
"year": 2018,
"venue": "11th International Conference on Language Resources and Evaluation (LREC-2018)",
"volume": "",
"issue": "",
"pages": "1138--1145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ricelli Moreira Silva Ramos, Georges Basile Stavra- cas Neto, Barbara Barbosa Claudino Silva, Danielle Sampaio Monteiro, Ivandr\u00e9 Paraboni, and Rafael Felipe Sandroni Dias. 2018. Building a corpus for personality-dependent natural language understanding and generation. In 11th International Conference on Language Resources and Evaluation (LREC-2018). ELRA, Miyazaki, Japan, pages 1138-1145.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "An investigation into the validity of some metrics for automatically evaluating natural language generation systems",
"authors": [
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
},
{
"first": "Anja",
"middle": [],
"last": "Belz",
"suffix": ""
}
],
"year": 2009,
"venue": "Computational Linguistics",
"volume": "35",
"issue": "4",
"pages": "529--558",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ehud Reiter and Anja Belz. 2009. An investigation into the validity of some metrics for automatically evalu- ating natural language generation systems. Compu- tational Linguistics 35(4):529-558.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Leveraging contextual sentence relations for extractive summarization using a neural attention model",
"authors": [
{
"first": "Pengjie",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Zhumin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhaochun",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "De Rijke",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "95--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengjie Ren, Zhumin Chen, Zhaochun Ren, Furu Wei, Jun Ma, and Maarten de Rijke. 2017. Leveraging contextual sentence relations for extractive summa- rization using a neural attention model. In Proceed- ings of the 40th International ACM SIGIR Confer- ence on Research and Development in Information Retrieval. ACM, pages 95-104.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Learning representations by back propagating errors",
"authors": [
{
"first": "David",
"middle": [
"E"
],
"last": "Rumelhart",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Ronald",
"middle": [
"J"
],
"last": "Williams",
"suffix": ""
}
],
"year": 1986,
"venue": "Nature",
"volume": "323",
"issue": "",
"pages": "533--536",
"other_ids": {
"DOI": [
"10.1038/323533a0"
]
},
"num": null,
"urls": [],
"raw_text": "David E. Rumelhart, Geoffrey Hinton, and Ronald J. Williams. 1986. Learning representations by back propagating errors. Nature 323:533-536. https://doi.org/10.1038/323533a0.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A neural attention model for abstractive sentence summarization",
"authors": [
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "379--389",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1044"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander M. Rush, Sumit Chopra, and Jason We- ston. 2015. A neural attention model for abstrac- tive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natu- ral Language Processing. Association for Computa- tional Linguistics, Lisbon, Portugal, pages 379-389. https://doi.org/10.18653/v1/D15-1044.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Get to the point: Summarization with pointergenerator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1073--1083",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). pages 1073- 1083.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Bleu deconstructed: Designing a better mt evaluation metric",
"authors": [
{
"first": "Xingyi",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2013,
"venue": "International Journal of Computational Linguistics and Applications",
"volume": "4",
"issue": "2",
"pages": "29--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xingyi Song, Trevor Cohn, and Lucia Specia. 2013. Bleu deconstructed: Designing a better mt evalua- tion metric. International Journal of Computational Linguistics and Applications 4(2):29-44.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Lstm neural networks for language modeling",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Sundermeyer",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Schl\u00fcter",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2012,
"venue": "Thirteenth annual conference of the international speech communication association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Sundermeyer, Ralf Schl\u00fcter, and Hermann Ney. 2012. Lstm neural networks for language modeling. In Thirteenth annual conference of the international speech communication association.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014a. Sequence to sequence learning with neural net- works. In Advances in neural information process- ing systems. pages 3104-3112.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014b. Sequence to sequence learning with neural net- works. In Advances in neural information process- ing systems. pages 3104-3112.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Improved affinity graph based multi-document summarization",
"authors": [
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Jianwu",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the human language technology conference of the NAACL, Companion volume: Short papers",
"volume": "",
"issue": "",
"pages": "181--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaojun Wan and Jianwu Yang. 2006. Improved affin- ity graph based multi-document summarization. In Proceedings of the human language technology con- ference of the NAACL, Companion volume: Short papers. Association for Computational Linguistics, pages 181-184.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "(A) a bidirectional GRU that maps words to personality types (B) a word embedding layer (C) a personality embedding layer (D) an attention mechanism (E) a bidirectional GRU that outputs word encodings",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "Model architecture and pre-trained word embeddings. The latter are Skip-gram 300 word embeddings taken from Hartmann et al. (2017).",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "Stimulus image from GAPED (Dan-Glauser and Scherer, 2011).",
"uris": null
},
"TABREF0": {
"content": "<table><tr><td>Data</td><td colspan=\"4\">Words Average Types Average</td></tr><tr><td>text</td><td>84463</td><td>559.4</td><td>37210</td><td>246.4</td></tr><tr><td colspan=\"2\">caption 4896</td><td>32.4</td><td>4121</td><td>27.3</td></tr></table>",
"num": null,
"text": "Corpus descriptive statistics.",
"html": null,
"type_str": "table"
},
"TABREF1": {
"content": "<table><tr><td colspan=\"2\">: Data split</td></tr><tr><td>Split</td><td>Samples</td></tr><tr><td>Train</td><td>1358</td></tr><tr><td>Validation</td><td>152</td></tr><tr><td>Total</td><td>1510</td></tr></table>",
"num": null,
"text": "",
"html": null,
"type_str": "table"
}
}
}
}