| { |
| "paper_id": "P19-1043", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:27:06.655679Z" |
| }, |
| "title": "This Email Could Save Your Life: Introducing the Task of Email Subject Line Generation", |
| "authors": [ |
| { |
| "first": "Rui", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Yale University", |
| "location": {} |
| }, |
| "email": "r.zhang@yale.edu" |
| }, |
| { |
| "first": "Joel", |
| "middle": [], |
| "last": "Tetreault", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Yale University", |
| "location": {} |
| }, |
| "email": "joel.tetreault@grammarly.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Given the overwhelming number of emails, an effective subject line becomes essential to better inform the recipient of the email's content. In this paper, we propose and study the task of email subject line generation: automatically generating an email subject line from the email body. We create the first dataset for this task and find that email subject line generation favors extremely abstractive summaries, which differentiates it from news headline generation or news single-document summarization. We then develop a novel deep learning method and compare it to several baselines as well as recent state-of-the-art text summarization systems. We also investigate the efficacy of several automatic metrics based on correlations with human judgments and propose a new automatic evaluation metric. Our system outperforms competitive baselines under both automatic and human evaluation. To our knowledge, this is the first work to tackle the problem of effective email subject line generation.", |
| "pdf_parse": { |
| "paper_id": "P19-1043", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Given the overwhelming number of emails, an effective subject line becomes essential to better inform the recipient of the email's content. In this paper, we propose and study the task of email subject line generation: automatically generating an email subject line from the email body. We create the first dataset for this task and find that email subject line generation favors extremely abstractive summaries, which differentiates it from news headline generation or news single-document summarization. We then develop a novel deep learning method and compare it to several baselines as well as recent state-of-the-art text summarization systems. We also investigate the efficacy of several automatic metrics based on correlations with human judgments and propose a new automatic evaluation metric. Our system outperforms competitive baselines under both automatic and human evaluation. To our knowledge, this is the first work to tackle the problem of effective email subject line generation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Email is a ubiquitous form of online communication. An email message consists of two basic elements: an email subject line and an email body. The subject line, which is displayed to the recipient in the list of inbox messages, should tell what the email body is about and what the sender wants to convey. An effective email subject line becomes essential since it can help people manage a large number of emails. Table 1 shows an email body with three possible subject lines.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 413, |
| "end": 420, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "There have been several research tracks around email usage. While much effort has been focused on email summarization (Muresan et al., 2001; Nenkova and Bagga, 2003; Rambow et al., 2004), email keyword extraction and action detection (Turney, 2000; Lahiri et al., 2017; 2018), and email classification (Alkhereyf and Rambow, 2017), to our knowledge there is no previous work on generating email subjects. In this paper, we propose the task of Subject Line Generation (SLG): automatically producing email subjects given the email body. While this is similar to email summarization, the two tasks serve different purposes in the process of email composition and consumption. A subject line is required when the sender writes the email, while a summary is more useful for long emails to benefit the recipient. An automatically generated email subject can also be used for downstream applications such as email triaging to help people manage emails more efficiently. Furthermore, while being similar to news headline generation or news single-document summarization, email subjects are generally much shorter, which means a system must have the ability to summarize with a high compression ratio (Table 2). Therefore, we believe this task can also benefit other highly abstractive summarization tasks, such as generating section titles for long documents to improve reading comprehension speed and accuracy. [Footnote: Work done during the internship at Grammarly.] [Table 1. Email Body: Hi All, I would be grateful if you could get to me today via email a job description for your current role. I would like to get this to the immigration attorneys so that they can finalise the paperwork in preparation for INS filing once the UBS deal is signed. Kind regards, | Subject 1: Current Job Description Needed (COMMENT: This is good because it is both informative and succinct.) | Subject 2: Job Description (COMMENT: This is okay but not informative enough.) | Subject 3: Request (COMMENT: This is bad because it does not contain any specific information about the request.)]", |
| "cite_spans": [ |
| { |
| "start": 118, |
| "end": 140, |
| "text": "(Muresan et al., 2001;", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 141, |
| "end": 165, |
| "text": "Nenkova and Bagga, 2003;", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 166, |
| "end": 186, |
| "text": "Rambow et al., 2004)", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 235, |
| "end": 249, |
| "text": "(Turney, 2000;", |
| "ref_id": "BIBREF54" |
| }, |
| { |
| "start": 250, |
| "end": 270, |
| "text": "Lahiri et al., 2017;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 940, |
| "end": 967, |
| "text": "Alkhereyf and Rambow, 2017)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1830, |
| "end": 1840, |
| "text": "(Table 2)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To introduce the task, we build the first dataset, Annotated Enron Subject Line Corpus (AESLC), by leveraging the Enron Corpus (Klimt and Yang, 2004) and crowdsourcing. [Table 2: Annotated Enron Subject Line Corpus compared with other datasets. Columns: dataset, domain, docs (train/val/test), avg doc words, avg summary words. CNN (Cheng and Lapata, 2016): News, 90,266/1,220/1,093, 760, 46. XSum (Narayan et al., 2018a): News, 204,045/11,332/11,334, 431, 23. Gigaword News Headline (Rush et al., 2015): News, 3,799,588/394,622/381,197, 31, 8. Annotated Enron Subject Line Corpus: Business/Personal, 14,436/1,960/1,906, 75, 4.] Furthermore, in order", |
| "cite_spans": [ |
| { |
| "start": 127, |
| "end": 149, |
| "text": "(Klimt and Yang, 2004)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 264, |
| "end": 288, |
| "text": "(Cheng and Lapata, 2016)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 325, |
| "end": 348, |
| "text": "(Narayan et al., 2018a)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 406, |
| "end": 425, |
| "text": "(Rush et al., 2015)", |
| "ref_id": "BIBREF44" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 431, |
| "end": 556, |
| "text": "3,799,588/394,622/381,197 31 8 Annotated Enron Subject Line Corpus Business/Personal 14,436/1,960/1,906 75 4 Table 2", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "to properly evaluate the subject, we use a combination of automatic metrics from the text summarization and machine translation fields, in addition to building our own regression-based Email Subject Quality Estimator (ESQE). Third, to generate effective email subjects, we propose a method which combines extractive and abstractive summarization in a two-stage process of Multi-Sentence Selection and Rewriting with an Email Subject Quality Estimation Reward. The multi-sentence extractor first selects multiple sentences from the input email body. Extracted sentences capture salient information for writing a subject, such as named entities and dates. Thereafter, the multi-sentence abstractor rewrites the selected sentences into a succinct subject line while preserving key information. For training the network, we use a multi-stage training strategy incorporating both supervised cross-entropy training and reinforcement learning (RL) by optimizing the reward provided by the ESQE model. Our contributions are threefold: (1) We introduce the task of email subject line generation (SLG) and build a benchmark dataset, AESLC. (2) We investigate possible automatic metrics for SLG and study their correlations with human judgments. We also introduce a new email subject quality estimation metric (ESQE). (3) We propose a novel model to generate email subjects. Our automatic and human evaluations demonstrate that our model outperforms competitive baselines and approaches human-level quality.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To prepare our email subject line dataset, we use the Enron dataset (Klimt and Yang, 2004) which is a collection of email messages of employees in the Enron Corporation. We use Enron because it can be released to the public and it contains business and personal emails for which the subject line is already well-defined and useful. As shown in Table 2 , email subjects are typically much shorter than summaries generated in previous news datasets. While being similar to news headline generation (Rush et al., 2015) , email subject generation is also more challenging in the sense that it deals with different types of email subjects, while the first sentence of a news article is often already a good headline and summary.", |
| "cite_spans": [ |
| { |
| "start": 68, |
| "end": 90, |
| "text": "(Klimt and Yang, 2004)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 502, |
| "end": 521, |
| "text": "(Rush et al., 2015)", |
| "ref_id": "BIBREF44" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 349, |
| "end": 356, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Annotated Enron Subject Line Corpus", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The original Enron dataset contains 517,401 email messages from 150 user mailboxes. To extract body and subject pairs from the dataset, we take all messages from the inbox and sent folders of all mailboxes. We then perform email body cleaning, email filtering, and email de-duplication.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Preprocessing", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "We first remove any content from the email body that was not written by the author of the email. This includes automatically appended boilerplate material such as advertisements, attachments, and legal disclaimers. Since we are interested in emails with enough information to generate meaningful subjects, we only keep emails with at least 3 sentences and 25 words in the email body. Furthermore, to ensure that the email subject truly corresponds to the content in the email body, we only take the first email of a thread and exclude replies or forwarded emails. We therefore filter out follow-up messages, which contain an \"Original Message\" section in the email body or have subject lines starting with \"RE:\" (reply-to messages) or \"FW:\" (forwarded messages). Finally, we observe that the same message can be sent to multiple recipients, so we remove duplicate emails to make sure there is no overlap between the train and test sets. We keep only the subject and body; other information such as the sender/recipient identity can be incorporated in future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Preprocessing", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "We noted that using only the original subject lines as references may be problematic for automatic evaluation purposes. First, there can be many different valid, effective subject lines for the same email, yet the original email subject is only one of them. This is similar to why automatic machine translation evaluation often relies on multiple references. Second, the email subject may be too general or too vague when the sender does not put much effort into writing it. Third, the sender may assume some shared knowledge with the recipient, so that the email subject contains information that cannot be found in the email body.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Subject Annotation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "To address the issues above, we ask workers on Amazon Mechanical Turk to read Enron emails in our dev and test sets and write an appropriate subject line. Each email is annotated with 3 subject lines from 3 different annotators. For quality control, we manually review and reject improper email subjects such as empty subject lines, subject lines with typos, and subject lines that are too general or too vague, e.g., \"Update\", \"Schedule\", \"Attention to Detail\", because they contain no body-specific information and can be applied generically to many emails. We found that while the three annotations differ, they often contain common keywords. To further quantify the variation among human annotations, we compute ROUGE-L F1 scores for each pair of annotations: 34.04, 33.38, 34.26.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Subject Annotation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Our model is illustrated in Figure 1 . Based on recent progress in news summarization (Chen and Bansal, 2018) , our model generates email subjects in two stages: (1) The extractor selects multiple sentences containing salient information for writing a subject ( \u00a73.1). (2) The abstractor rewrites multiple selected sentences into a succinct subject line while preserving key information ( \u00a73.2).", |
| "cite_spans": [ |
| { |
| "start": 86, |
| "end": 109, |
| "text": "(Chen and Bansal, 2018)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 28, |
| "end": 36, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Our Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We employ a multi-stage training strategy ( \u00a73.4) including a Reinforcement Learning (RL) phase because of its usefulness for text generation tasks (Ranzato et al., 2016; Bahdanau et al., 2017) in optimizing non-differentiable metrics such as ROUGE and METEOR. However, unlike ROUGE for summarization or METEOR for machine translation, there is no available automatic metric designed for email subject generation. Motivated by recent work on regression-based metrics for machine translation (Shimanaka et al., 2018) and dialog response generation, we build a neural network (ESQE) to estimate the quality of an email subject given the email body ( \u00a73.3). The estimator is pretrained and fixed during the RL training phase to provide rewards for the extractor agent.", |
| "cite_spans": [ |
| { |
| "start": 148, |
| "end": 170, |
| "text": "(Ranzato et al., 2016;", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 171, |
| "end": 193, |
| "text": "Bahdanau et al., 2017)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 493, |
| "end": 517, |
| "text": "(Shimanaka et al., 2018)", |
| "ref_id": "BIBREF50" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "While our model is based on Chen and Bansal (2018) , they assume that there is a one-to-one relationship between the summary sentence and the document sentence: every summary sentence can be rewritten from exactly one sentence in the document. They also use ROUGE to make extraction labels and to provide rewards in their RL training phase. In contrast, our model extracts multiple sentences and rewrites them together into a single subject line. We also use word overlap to make extraction labels and use our novel ESQE as a reward function.", |
| "cite_spans": [ |
| { |
| "start": 28, |
| "end": 50, |
| "text": "Chen and Bansal (2018)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "For the first stage, we need to select multiple sentences from the email body which contain the necessary information for writing a subject. This task can be formulated as a sequence-to-sequence learning problem where the output sequence corresponds to the positions of \"positive\" sentences in the input email body. Therefore, we use a pointer network (Vinyals et al., 2015) to first build hierarchical sentence representations during encoding and then extract \"positive\" sentences during decoding. Suppose our input is an email body D which consists of |D| sentences:", |
| "cite_spans": [ |
| { |
| "start": 351, |
| "end": 373, |
| "text": "(Vinyals et al., 2015)", |
| "ref_id": "BIBREF56" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-sentence Extractor", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "D = [d_1, d_2, ..., d_j, ..., d_{|D|}]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-sentence Extractor", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We first use a temporal CNN (Kim, 2014) to build individual sentence representations. For each sentence, we feed the sequence of its word vectors into 1-D convolutional filters with various window sizes. We then apply a ReLU activation followed by max-over-time pooling. The sentence representation is a concatenation of the activations from all filters:", |
| "cite_spans": [ |
| { |
| "start": 28, |
| "end": 39, |
| "text": "(Kim, 2014)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-sentence Extractor", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "c_j = \\mathrm{CNN}(d_j), \\quad j = 1, \\ldots, |D|", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Multi-sentence Extractor", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Then we use a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) to capture document-level inter-sentence information over CNN outputs:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-sentence Extractor", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\\overrightarrow{d_j} = \\mathrm{LSTM}_{\\text{forward}}(\\overrightarrow{d_{j-1}}, c_j), \\quad \\overleftarrow{d_j} = \\mathrm{LSTM}_{\\text{backward}}(\\overleftarrow{d_{j+1}}, c_j), \\quad d_j = [\\overrightarrow{d_j}, \\overleftarrow{d_j}]", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Multi-sentence Extractor", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "For sentence extraction, another LSTM as the decoder outputs one \"positive\" sentence at each time step t. [Figure 1: Our model architecture. In this example, the input email body consists of four sentences from which the extractor selects the second and the third. The abstractor generates an email subject from the selected sentences. The quality estimator provides rewards by scoring the subject against the email body.] Denoting the decoder hidden state as h_t,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-sentence Extractor", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "we choose a \"positive\" sentence via a 2-hop attention process. First, we build a context vector e_t by attending over all d_j:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-sentence Extractor", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\\hat{\\alpha}^t_j = v_e^{\\top} \\tanh(W_e d_j + U_e h_t), \\quad \\alpha^t = \\mathrm{softmax}(\\hat{\\alpha}^t), \\quad e_t = \\sum_j \\alpha^t_j W_e d_j", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Multi-sentence Extractor", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Then, we get an extraction probability distribution o_t over input sentences:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-sentence Extractor", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\\hat{o}^t_j = v_o^{\\top} \\tanh(W_o d_j + U_o e_t), \\quad P(o_t | o_1, o_2, \\ldots, o_{t-1}) = \\mathrm{softmax}(\\hat{o}^t)", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Multi-sentence Extractor", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "where {v, W, U} are trainable parameters. We also add a trainable \"stop\" vector with the same dimension as the sentence representation. The decoder can choose to stop by pointing to this \"stop\" sentence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-sentence Extractor", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In the second stage, the abstractor takes the selected sentences from the extractor and rewrites them into an email subject. We implement the abstractor as a sequence-to-sequence encoder-decoder model with the bilinear multiplicative attention (Luong et al., 2015) and copy mechanism (See et al., 2017) . The copy mechanism enables the decoder to copy words directly from the input document, which is helpful to generate accurate information verbatim even for out-of-vocabulary words.", |
| "cite_spans": [ |
| { |
| "start": 243, |
| "end": 263, |
| "text": "(Luong et al., 2015)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 283, |
| "end": 301, |
| "text": "(See et al., 2017)", |
| "ref_id": "BIBREF47" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-sentence Abstractor", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Since there is no established automatic metric for SLG, we build our own Email Subject Quality Estimator (ESQE). Given an email body D and a potential subject s, our quality estimator outputs a real-valued Subject Quality score SQ(D, s). The email subject and the email body are each fed to a temporal CNN.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Email Subject Quality Estimator", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "D = CNN(D), s = CNN(s)", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Email Subject Quality Estimator", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "We concatenate the outputs of the CNNs as the representation of the email body and subject pair. Then, a single-layer feed-forward neural network predicts the quality score from this representation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Email Subject Quality Estimator", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "SQ(D, s) = FFNN([D, s])", |
| "eq_num": "(6)" |
| } |
| ], |
| "section": "Email Subject Quality Estimator", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "To train the estimator, we collect human evaluations on 3,490 email subjects. In order to expose the estimator to both good and bad examples, 2,278 of the 3,490 are the original subjects and the remaining 1,212 subjects are generated by an existing summarization system. Each subject has 3 human evaluation scores (the same human evaluation as explained in \u00a74.1) and we train our estimator to regress the average.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Email Subject Quality Estimator", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The inter-annotator agreement is 0.64 by Pearson's r. Even though there is no restriction on the value range of the estimator output, we found that the scores returned by our ESQE after training are bounded between 0.0 and 4.0.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Email Subject Quality Estimator", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Supervised Pretraining. We pretrain the extractor and the abstractor separately using supervised learning. To this end, we first create \"proxy\" sentence labels by checking word overlap between the subject and each body sentence: we label a sentence as \"positive\" if it has some non-stopword token overlap with the subject, and \"negative\" otherwise. The multi-sentence extractor is trained to predict \"positive\" sentences by minimizing the cross-entropy loss. For the multi-sentence abstractor, we create training examples by pairing the \"positive\" sentences and the original subject in the training set. The abstractor is then trained to generate the subject by maximizing the log-likelihood. RL Training for the Extractor. To formulate the RL task at this stage, we treat the extractor as an agent, while the abstractor is pretrained and fixed. The ESQE provides the reward by judging the output subject. At each time step t, the extractor observes a state s_t = (D, d_{o_{t-1}}) and samples an action a_t to pick a sentence from the distribution in Equation 4:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-Stage Training", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "a_t \\sim \\pi_\\theta(s_t, a_t = j) = P(o_t = j)", |
| "eq_num": "(7)" |
| } |
| ], |
| "section": "Multi-Stage Training", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "where \u03c0_\u03b8 denotes the policy network described in Section 3.1 with a set of trainable parameters \u03b8.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-Stage Training", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "The episode finishes after T actions, when the extractor picks the \"end-of-extraction\" signal. Then, the abstractor generates a subject from the extracted sentences and the quality estimator computes its score, which is the reward received by the extractor:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-Stage Training", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "r(a_{1:T}) = SQ(D, s)", |
| "eq_num": "(8)" |
| } |
| ], |
| "section": "Multi-Stage Training", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "For training, we maximize the expected reward:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-Stage Training", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "L(\\theta) = \\mathbb{E}_{a_{1:T} \\sim \\pi_\\theta}[r(a_{1:T})]", |
| "eq_num": "(9)" |
| } |
| ], |
| "section": "Multi-Stage Training", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "with the following gradient given by the REINFORCE algorithm (Williams, 1992):", |
| "cite_spans": [ |
| { |
| "start": 41, |
| "end": 78, |
| "text": "REINFORCE algorithm (Williams, 1992)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-Stage Training", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\\nabla_\\theta L(\\theta) = \\mathbb{E}_{\\pi_\\theta}[\\nabla_\\theta \\log \\pi_\\theta \\, (r - b)] \\approx \\sum_{t=1}^{T} \\nabla_\\theta \\log \\pi_\\theta(s_t, a_t)(r(a_{1:T}) - b_t)", |
| "eq_num": "(10)" |
| } |
| ], |
| "section": "Multi-Stage Training", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Here b_t is the baseline reward introduced to reduce the high variance of the gradients. The baseline network has the same architecture as the decoder of the extractor, but it has another set of trainable parameters \u03b8_b and predicts the reward by minimizing the following mean squared error:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-Stage Training", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "L(\\theta_b) = (b_t - r)^2", |
| "eq_num": "(11)" |
| } |
| ], |
| "section": "Multi-Stage Training", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "4 Experimental Setup", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-Stage Training", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Automatic Evaluation. Since SLG is a new task, we analyze the usefulness of automatic metrics from sister tasks, and also use human evaluation. We first use automatic metrics from text summarization and machine translation: (1) ROUGE (Lin, 2004) , including F1 scores of ROUGE-1, ROUGE-2, and ROUGE-L; (2) METEOR (Denkowski and Lavie, 2014) . These all rely on one or more references and measure the similarity between the output and the reference. In addition, we include ESQE, which is a reference-less metric. Human Evaluation. While those automatic scores are quick and inexpensive to calculate, only our quality estimator is designed for evaluating subject line generation. Therefore, we also conduct an extensive human evaluation on the overall score and two aspects of email quality: informativeness and fluency. An email subject is informative if it contains accurate details consistent with the body, and it is fluent if it is free of grammar errors. We show the email body along with different system outputs as potential subjects (the models are anonymized). For each subject and each aspect, the human judge chooses a rating of 1 (Poor), 2 (Fair), 3 (Good), or 4 (Great). We randomly select 500 samples and have each rated by 3 human judges.", |
| "cite_spans": [ |
| { |
| "start": 234, |
| "end": 245, |
| "text": "(Lin, 2004)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 311, |
| "end": 338, |
| "text": "(Denkowski and Lavie, 2014)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "To benchmark our method, we compare against several systems from the summarization field, including some recent state-of-the-art systems, because an email subject line can be viewed as a short summary of the email content. They can be clustered into two groups.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "(1) Unsupervised extractive or/and abstractive summarization. LEAD-2 directly uses the first two sentences as the subject line. We choose lead-2 to include both the greeting and the first sentence of main content. TextRank (Mihalcea and Tarau, 2004) and LexRank (Erkan and Radev, 2004) are two graph-based ranking models to extract the most salient sentence as the subject line. Shang et al. (2018) use a graph-based framework to extract topics and then generate a single abstractive sentence for each topic under a budget constraint.", |
| "cite_spans": [ |
| { |
| "start": 223, |
| "end": 249, |
| "text": "(Mihalcea and Tarau, 2004)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 262, |
| "end": 285, |
| "text": "(Erkan and Radev, 2004)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "4.2" |
| }, |
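The graph-based ranking idea behind TextRank and LexRank can be sketched in a few lines of pure Python. The unigram-overlap similarity and the example sentences below are simplified assumptions for illustration, not the cited systems' exact formulation:

```python
def overlap_similarity(s1, s2):
    """Unigram-overlap similarity between two sentences (simplified)."""
    w1, w2 = set(s1.lower().split()), set(s2.lower().split())
    if not w1 or not w2:
        return 0.0
    return len(w1 & w2) / (len(w1) + len(w2))

def textrank_top_sentence(sentences, damping=0.85, iters=50):
    """Rank sentences by PageRank over their similarity graph; return the top index."""
    n = len(sentences)
    sim = [[overlap_similarity(a, b) if i != j else 0.0
            for j, b in enumerate(sentences)] for i, a in enumerate(sentences)]
    scores = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            rank = 0.0
            for j in range(n):
                out = sum(sim[j])  # total outgoing edge weight of node j
                if sim[j][i] > 0 and out > 0:
                    rank += sim[j][i] / out * scores[j]
            new.append((1 - damping) / n + damping * rank)
        scores = new
    return max(range(n), key=lambda i: scores[i])

# Hypothetical email body sentences:
body = ["Hi team,",
        "The FERC filings for Western Power Trading are attached.",
        "Both filings were made at FERC on Monday.",
        "Thanks for your comments on the drafts."]
print(body[textrank_top_sentence(body)])
```

Sentences that share vocabulary with many others accumulate rank, which is why such extractors favor central, content-bearing sentences over greetings.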
| { |
| "text": "(2) Neural summarization using encoderdecoder networks with attention mechanisms. (Sutskever et al., 2014; Bahdanau et al., 2015) . The Pointer-Generator Network from See et al. (2017) augments the standard encoder-decoder network by adding the ability to copy words from the source text and using the coverage loss to avoid repetitive generation. Paulus et al. (2018) propose neural intra-attention models with a mixed objec- (Narayan et al., 2018a) . It is unclear how they perform to generate email subject lines of extremely abstractive summarization. We train these models on our dataset.", |
| "cite_spans": [ |
| { |
| "start": 82, |
| "end": 106, |
| "text": "(Sutskever et al., 2014;", |
| "ref_id": "BIBREF51" |
| }, |
| { |
| "start": 107, |
| "end": 129, |
| "text": "Bahdanau et al., 2015)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 167, |
| "end": 184, |
| "text": "See et al. (2017)", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 348, |
| "end": 368, |
| "text": "Paulus et al. (2018)", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 427, |
| "end": 450, |
| "text": "(Narayan et al., 2018a)", |
| "ref_id": "BIBREF35" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "4.2" |
| }, |
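The copy mechanism described above can be illustrated by how the Pointer-Generator Network forms its final word distribution. This is a sketch with made-up toy numbers; in the real model, p_gen, the vocabulary distribution, and the attention weights are all computed from learned parameters:

```python
def pointer_generator_distribution(p_gen, vocab_dist, attention, source_tokens):
    """Final distribution: P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum_i a_i [w_i = w]."""
    final = {w: p_gen * p for w, p in vocab_dist.items()}
    for a, tok in zip(attention, source_tokens):
        # Copying routes attention mass to source words, including OOV ones.
        final[tok] = final.get(tok, 0.0) + (1 - p_gen) * a
    return final

# Hypothetical toy numbers: "enron" is out of vocabulary but receives copy mass.
vocab_dist = {"meeting": 0.6, "update": 0.4}
attention = [0.7, 0.2, 0.1]          # over the source tokens below
source = ["enron", "meeting", "update"]
dist = pointer_generator_distribution(0.5, vocab_dist, attention, source)
print(dist["enron"])  # copy-only probability for the OOV token
```

Because both components are proper distributions, the mixture still sums to one, and out-of-vocabulary source words (names, dates) get nonzero probability, which matters for email subjects.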
| { |
| "text": "Dev Test R-1 R-2 R-L METEOR R-1 R-2 R-L METEOR LEAD-", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Our Model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation Details", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We pretrain 128-dimensional word2vec on our corpus as initialization and update word embeddings during training. We use single layer bidirectional LSTMs with 256 hidden units in all models. The convolutional sentence encoders have filters with window sizes (3,4,5) and there are 100 filters for each size. The batch size is 16 for all training phases. We use the Adam optimizer (Kingma and Ba, 2015) with learning rates of 0.001 for supervised pretraining and 0.0001 for RL. We apply gradient clipping (Pascanu et al., 2013) with L2-norm of 2.0. The training is stopped early if the validation performance is not improved for 3 consecutive epochs. All experiments are performed on a Tesla K80 GPU. All submodels can converge within 1-2 hours and 10 epochs so the whole training takes about 4 hours.", |
| "cite_spans": [ |
| { |
| "start": 502, |
| "end": 524, |
| "text": "(Pascanu et al., 2013)", |
| "ref_id": "BIBREF38" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation Details", |
| "sec_num": "4.3" |
| }, |
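The early-stopping schedule above (halt once the validation score fails to improve for 3 consecutive epochs) amounts to a simple patience counter. A sketch with a hypothetical validation-score sequence:

```python
def train_with_early_stopping(val_scores, patience=3):
    """Return how many epochs actually run before early stopping.

    `val_scores` is a hypothetical per-epoch validation score sequence;
    training stops once `patience` consecutive epochs show no improvement.
    """
    best, stale, epochs = float("-inf"), 0, 0
    for score in val_scores:
        epochs += 1
        if score > best:
            best, stale = score, 0
        else:
            stale += 1
            if stale >= patience:
                break
    return epochs

# Improvement stalls after epoch 3; training halts 3 stale epochs later.
print(train_with_early_stopping([0.31, 0.35, 0.38, 0.37, 0.38, 0.36, 0.39]))  # → 6
```

Note that with patience 3 the best checkpoint is the one saved when `stale` was reset, not the final epoch's weights.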
| { |
| "text": "Baselines. For TextRank and LexRank, we use the sumy 2 implementation which uses the snowball stemmer, the sentence and word tokenizer from NLTK 3 . For Shang et al. 2018, we use their extension of the Multi-Sentence Compression Graph (MSCG) of Filippova (2010) and a budget of 10 words in the submodular maximization. We choose the number of communities from [1,2,3,4,5] based on the dev set and we find that 1 works best. For the Pointer-Generator Network from See et al. 2017, we follow their implementation 4 and use a batch size 16. For Paulus et al. 2018, we use an implementation from Keneshloo et al. (2018) 5 . We did not include the intra-temporal attention and the intra-decoder attention because they hurt the performance. For Hsu et al. 2018, we follow their code 6 with a batch size 16. All training is early stopped based on the dev set performance.", |
| "cite_spans": [ |
| { |
| "start": 245, |
| "end": 261, |
| "text": "Filippova (2010)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation Details", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We report the automatic metric scores against the original subject and the subjects generated by Turkers (human annotations) as references in Tables 3a and 3b respectively. Table 4 also shows the ESQE scores. Overall, our method outperforms the other baselines in all metrics except METEOR. Other systems can achieve higher ME-TEOR scores because METEOR emphasizes recall (recall weighted 9 times more than precision) and other extractive systems such as LexRank can generate longer sentences as subject lines.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Automatic Metric Evaluation", |
| "sec_num": "5.1" |
| }, |
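METEOR's recall emphasis noted above comes from its weighted harmonic mean of precision and recall. A sketch of just that final scoring step (the alignment stage with stemming and paraphrase matching is omitted):

```python
def meteor_fmean(precision, recall, alpha=0.9):
    """METEOR's parameterized harmonic mean.

    At the default alpha = 0.9 this equals 10*P*R / (9*P + R),
    i.e. recall counts nine times as much as precision.
    """
    if precision == 0.0 or recall == 0.0:
        return 0.0
    return precision * recall / (alpha * precision + (1 - alpha) * recall)

# A long extractive subject: low precision but high recall still scores well.
print(round(meteor_fmean(precision=0.3, recall=0.9), 3))  # → 0.75
print(round(meteor_fmean(precision=0.9, recall=0.3), 3))  # → 0.321
```

The asymmetry shows why systems that copy long sentences as subjects (high recall, low precision) are favored by METEOR relative to the ROUGE F1 scores.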
| { |
| "text": "In Table 3a , where the original subject is the singular reference, the score of our system is rated close to and even higher than the human annotation on both sets. This is because our system is trained on the original subject and is likely a better domain fit. In Table 3b , all systems use two human annotations as the reference to have a fair comparison to the human-to-human agreement in the last row. Our system output is actually rated a bit higher than the original subject. This is because the original subject can differ from the human annotation when the sender and the recipient share some background knowledge hidden from the email content. Furthermore, in the last row, the human-to-human agreement is much higher than all the system outputs and the original subject. This indicates that different annotators write Dev Test LEAD-2 1.56 1.55 TextRank (Mihalcea and Tarau, 2004) 1.59 1.59 LexRank (Erkan and Radev, 2004) 1.57 1.56 Shang et al. (2018) 2.10 2.09 See et al. (2017) 2.22 2.19 Paulus et al. (2018) 2.30 2.30 Hsu et al. (2018) 1.44 1.46 Narayan et al. (2018a) 1 Table 6 : Correlation analysis between the automatic scores and the human evaluation.", |
| "cite_spans": [ |
| { |
| "start": 864, |
| "end": 890, |
| "text": "(Mihalcea and Tarau, 2004)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 909, |
| "end": 932, |
| "text": "(Erkan and Radev, 2004)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 943, |
| "end": 962, |
| "text": "Shang et al. (2018)", |
| "ref_id": "BIBREF48" |
| }, |
| { |
| "start": 973, |
| "end": 990, |
| "text": "See et al. (2017)", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 1001, |
| "end": 1021, |
| "text": "Paulus et al. (2018)", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 1032, |
| "end": 1049, |
| "text": "Hsu et al. (2018)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 1060, |
| "end": 1082, |
| "text": "Narayan et al. (2018a)", |
| "ref_id": "BIBREF35" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 11, |
| "text": "Table 3a", |
| "ref_id": "TABREF3" |
| }, |
| { |
| "start": 266, |
| "end": 274, |
| "text": "Table 3b", |
| "ref_id": "TABREF3" |
| }, |
| { |
| "start": 1085, |
| "end": 1092, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Automatic Metric Evaluation", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "subjects with a similar choice of words. In Table 4 , ESQE still considers our system better than other baselines, while the human annotation has the best quality score. Evaluation of sub-components. Our extractor captures salient information by selecting multiple sentences from the email body. We measure its performance as a classification problem against the \"proxy\" sentence labels as explained in Section 3.4. The overall precision and recall on the test set is 74% and 42%, respectively. Out of 1906 test examples, 691 examples have more than one sentence selected, and 1626 first sentences and 973 non-first sentences are extracted. Furthermore, during RL training phase, the dev ESQE score increases from 2.30 to 2.40. Table 5 shows that our system is rated higher than the baselines on overall, informative, and fluent aspects. For overall scores, the baselines are all between 1.5 and 2.0, indicating the subjects are usually considered as poor or fair (recall that the scale is 1-4, with 4 being the highest). Our system is 2.28, while the original subject and human annotation are between 2.5 and 3.0. This means more than half of our system outputs are at least fair, and the original subject and human annotation are often good or great. We also find that in 89 out of 500 emails, our system outputs have ratings higher than or equal to the original and human annotated subjects. Furthermore, the raters prefer the human annotated subject to the original subject.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 44, |
| "end": 51, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 728, |
| "end": 735, |
| "text": "Table 5", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Automatic Metric Evaluation", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "It is important to check if the automatic metric scores can truly reflect the generation quality and serve as valid metrics for subject line generation. Therefore, in Table 6 , we investigate their correlations with the human evaluation. To this end, we take the average of three human ratings and then calculate Pearson's r and Spearman's \u03c1 between different automatic scores and the average human rating. We also report the inter-rater agreement in the last row by checking the correlation between the third human rating and the average of the other two. We find that the inter-rater agreement is moderate with 0.64 for Pearson's r and 0.58 for Spearman's \u03c1. We would recommend ESQE because it has the highest correlations while being referenceless. Table 7 shows examples of our model outputs. Our model works well by first picking multiple sentences containing information such as named entities and dates and then rewriting them into a succinct subject line preserving the key information. In Example 7a, our model extracts sentences with the name of the company and position \"KWI President of the Americas\". It also captures the importance of the opportunity for this position. Similarly, in Example 7b, our model identifies \"Western Power Trading\" for \"filings\". In Example 7c, our model identifies the date of degree \"December 2011\" and action item \"application\". However, we also found our model can fail on emails about novel topics, as in Example 7d where the topic is scheduling farewell drinks. Our model only captures the name of the restaurant but not the purpose and topic since it has not seen this kind of email in training.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 167, |
| "end": 174, |
| "text": "Table 6", |
| "ref_id": null |
| }, |
| { |
| "start": 752, |
| "end": 759, |
| "text": "Table 7", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Metric Correlation Analysis", |
| "sec_num": "5.3" |
| }, |
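The correlation analysis above uses standard formulas, which can be sketched in pure Python on hypothetical score pairs. A real analysis would use scipy.stats; here Spearman's ρ is computed as the Pearson correlation of average ranks (a common tie-handling simplification):

```python
def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def average_ranks(xs):
    """Rank values, assigning tied values their average (1-based) rank."""
    ranks = [0.0] * len(xs)
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(xs, ys):
    """Spearman's rho: Pearson correlation of the rank sequences."""
    return pearson_r(average_ranks(xs), average_ranks(ys))

# Hypothetical metric scores vs. averaged human ratings for five subjects:
metric = [0.12, 0.40, 0.33, 0.05, 0.25]
human = [1.5, 3.0, 2.8, 1.2, 2.0]
print(round(pearson_r(metric, human), 3), round(spearman_rho(metric, human), 3))
```

Because Spearman's ρ depends only on rank order, a metric that orders systems correctly can score ρ = 1 even when its raw values are on a different scale than the 1-4 human ratings.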
| { |
| "text": "Past NLP email research has focused on summarization (Muresan et al., 2001; Nenkova and Bagga, 2003; Rambow et al., 2004; Corston-Oliver et al., 2004; Wan and McKeown, 2004; Carenini et al., 2007; Zajic et al., 2008; Carenini et al., 2008; Ulrich et al., 2009) , keyword extraction and action detection (Turney, 2000; Bennett and Carbonell, 2005; Dredze et al., 2008; Scerri et al., 2010; Loza et al., 2014; Lahiri et al., 2017; , and classification Alkhereyf and Rambow, 2017) . However, we could not find any previous work on email subject line generation. The very first study on email summarization is Muresan et al. (2001) who reduce the problem to extracting salient phrases. Later, Nenkova and Bagga (2003) , Rambow et al. (2004) , Wan and McKeown (2004) deal with the problem of email thread summarization by the sentence extraction approach.", |
| "cite_spans": [ |
| { |
| "start": 53, |
| "end": 75, |
| "text": "(Muresan et al., 2001;", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 76, |
| "end": 100, |
| "text": "Nenkova and Bagga, 2003;", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 101, |
| "end": 121, |
| "text": "Rambow et al., 2004;", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 122, |
| "end": 150, |
| "text": "Corston-Oliver et al., 2004;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 151, |
| "end": 173, |
| "text": "Wan and McKeown, 2004;", |
| "ref_id": "BIBREF57" |
| }, |
| { |
| "start": 174, |
| "end": 196, |
| "text": "Carenini et al., 2007;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 197, |
| "end": 216, |
| "text": "Zajic et al., 2008;", |
| "ref_id": "BIBREF59" |
| }, |
| { |
| "start": 217, |
| "end": 239, |
| "text": "Carenini et al., 2008;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 240, |
| "end": 260, |
| "text": "Ulrich et al., 2009)", |
| "ref_id": "BIBREF55" |
| }, |
| { |
| "start": 303, |
| "end": 317, |
| "text": "(Turney, 2000;", |
| "ref_id": "BIBREF54" |
| }, |
| { |
| "start": 318, |
| "end": 346, |
| "text": "Bennett and Carbonell, 2005;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 347, |
| "end": 367, |
| "text": "Dredze et al., 2008;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 368, |
| "end": 388, |
| "text": "Scerri et al., 2010;", |
| "ref_id": "BIBREF46" |
| }, |
| { |
| "start": 389, |
| "end": 407, |
| "text": "Loza et al., 2014;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 408, |
| "end": 428, |
| "text": "Lahiri et al., 2017;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 450, |
| "end": 477, |
| "text": "Alkhereyf and Rambow, 2017)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 606, |
| "end": 627, |
| "text": "Muresan et al. (2001)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 689, |
| "end": 713, |
| "text": "Nenkova and Bagga (2003)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 716, |
| "end": 736, |
| "text": "Rambow et al. (2004)", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 739, |
| "end": 761, |
| "text": "Wan and McKeown (2004)", |
| "ref_id": "BIBREF57" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Another related line of research is natural language generation. Our task is most similar to single document summarization because the email subject line can be viewed as a short summary of the email content. Therefore, we use different summarization models as baselines with techniques such as graph-based extraction and compression, sequence-to-sequence neural abstractive summarization with the hierarchical attention, copy, and coverage mechanisms. In addition, RL has become increasingly popular for text generation to optimize the non-differentiable metrics and to reduce the exposure bias in the traditional \"teaching forcing\" supervised training (Ranzato et al., 2016; Bahdanau et al., 2017; Zhang and Lapata, 2017; Sakaguchi et al., 2017) . For example, Narayan et al. (2018b) use RL for ranking sentences in pure extractive summarization.", |
| "cite_spans": [ |
| { |
| "start": 654, |
| "end": 676, |
| "text": "(Ranzato et al., 2016;", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 677, |
| "end": 699, |
| "text": "Bahdanau et al., 2017;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 700, |
| "end": 723, |
| "text": "Zhang and Lapata, 2017;", |
| "ref_id": "BIBREF60" |
| }, |
| { |
| "start": 724, |
| "end": 747, |
| "text": "Sakaguchi et al., 2017)", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 763, |
| "end": 785, |
| "text": "Narayan et al. (2018b)", |
| "ref_id": "BIBREF36" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Furthermore, current methods on news headline generation (Lopyrev, 2015; Tilk and Alum\u00e4e, 2017; Kiyono et al., 2017; Tan et al., 2017; Shen et al., 2017) most follow the encoder-decoder model, while our model uses a multi-sentence selection and rewriting framework.", |
| "cite_spans": [ |
| { |
| "start": 57, |
| "end": 72, |
| "text": "(Lopyrev, 2015;", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 73, |
| "end": 95, |
| "text": "Tilk and Alum\u00e4e, 2017;", |
| "ref_id": "BIBREF53" |
| }, |
| { |
| "start": 96, |
| "end": 116, |
| "text": "Kiyono et al., 2017;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 117, |
| "end": 134, |
| "text": "Tan et al., 2017;", |
| "ref_id": "BIBREF52" |
| }, |
| { |
| "start": 135, |
| "end": 153, |
| "text": "Shen et al., 2017)", |
| "ref_id": "BIBREF49" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In this paper, we introduce the task of email subject line generation. We build a benchmark dataset (AESLC) with crowdsourced human annotations on the Enron corpus and evaluate automatic metrics for this task. We propose our model of subject generation by Multi-Sentence Selection and Rewriting with Email Subject Quality Estimation Reward. Our model outperforms several competitive baselines and approaches human-level performance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "In the future, we would like to generalize it to multiple domains and datasets. We are also interested in generating more effective and appropriate subjects by incorporating prior email conversations, social context, the goal and style of emails, personality, among others.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "dataset available at https://github.com/ ryanzhumich/AESLC", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/miso-belica/sumy 3 https://www.nltk.org/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/abisee/ pointer-generator 5 https://github.com/yaserkl/RLSeq2Seq 6 https://github.com/HsuWanTing/ unified-summarization", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to thank Jimmy Nguyen and Vipul Raheja for their help in the data creation process. We also thank Dragomir Radev, Courtney Napoles, Dimitrios Alikaniotis, Claudia Leacock, Junchao Zheng, Maria Nadejde, Adam Faulkner, and three anonymous reviewers for their helpful discussion and feedback.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Here is the position description for the KWI President of the Americas Opportunity. I feel that this is a tremendous opportunity to be an integral player with a very exciting relatively early stage Applications Software company, in the very exciting and hot Energy Commodities Sector; They are already profitable, pre-IPO. This position has a great compensation package. Please get back to me if you have an interest or if you know someone who might be intrigued by this opportunity. Thanks, Dal Coger Original Subject: KWI President of the Americas Human Annotation: KWI President of the Americas Opportunity See", |
| "authors": [], |
| "year": null, |
| "venue": "ACL 2017: Position Description -the Americas Sector Opportunity Our System: KWI President of the Americas Position", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Email Body: Dear Rick, Thanks for speaking with me today. Here is the position description for the KWI President of the Americas Opportunity. I feel that this is a tremendous opportunity to be an integral player with a very exciting relatively early stage Applications Software company, in the very exciting and hot Energy Commodities Sector; They are already profitable, pre-IPO. This position has a great compensation package. Please get back to me if you have an interest or if you know someone who might be intrigued by this opportunity. Thanks, Dal Coger Original Subject: KWI President of the Americas Human Annotation: KWI President of the Americas Opportunity See et al., ACL 2017: Position Description -the Americas Sector Opportunity Our System: KWI President of the Americas Position (a) Email ID: buy-r inbox 321", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Motion to Intervene and Protest of the Western Power Trading Forum. This was filed in connection witht the ISO status report filing dealing with creditworthiness issues. 2.. Motion to Intervene and Comments of the Western Power Trading Forum. This was filed in connection with the Reliant and Mirant filing of a joint Section 206 complaint on", |
| "authors": [ |
| { |
| "first": "Email", |
| "middle": [], |
| "last": "Body", |
| "suffix": "" |
| }, |
| { |
| "first": ";", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Attached for your information are the following two filings made at FERC on Monday on behalf of WPTF: 1", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Email Body: Attached for your information are the following two filings made at FERC on Monday on behalf of WPTF: 1.. Motion to Intervene and Protest of the Western Power Trading Forum. This was filed in connection witht the ISO status report filing dealing with creditworthiness issues. 2.. Motion to Intervene and Comments of the Western Power Trading Forum. This was filed in connection with the Reliant and Mirant filing of a joint Section 206 complaint on October 18, 2001. My thanks to those who responded to the drafts with comments and suggestions. Dan Original Subject: Monday's FERC Filings Human Annotation: Two Filings Made at FERC See et al., ACL 2017: FERC filings -FERC power and at monday was filing Our System: Western Power Trading Filings (b) Email ID: dasovich-j inbox 1473", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "degree, will you please come by the Evening MBA office soon (by Tuesday, September 25 at the latest) and fill out an Application for Candidacy form? We have your fall transcript to assist you in filling out the form. Since we need your original signature, an office visit is best. Thanks, congratulations, and see you! Original Subject: Planning to graduate this semester? Human Annotation", |
| "authors": [], |
| "year": 2001, |
| "venue": "Email Body: Hi Evening MBA students, If you plan to graduate this semester for a", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Email Body: Hi Evening MBA students, If you plan to graduate this semester for a December 2001 degree, will you please come by the Evening MBA office soon (by Tuesday, September 25 at the latest) and fill out an Application for Candidacy form? We have your fall transcript to assist you in filling out the form. Since we need your original signature, an office visit is best. Thanks, congratulations, and see you! Original Subject: Planning to graduate this semester? Human Annotation: December 2001 degree See et al., ACL 2017: December application(graduate) -September 25", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "degree application (c) Email ID: dasovich-j inbox 123", |
| "authors": [ |
| { |
| "first": "Our", |
| "middle": [], |
| "last": "System", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Our System: December 2001 degree application (c) Email ID: dasovich-j inbox 123", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "we would love to toast the good times and special memories that we have shared with you over the past five years. Please join us at Teala's (W. Dallas) on Thursday", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Al", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "ACL 2017: Friday 30th and day, W. Dallas -November Our System: Teala's (d) Email", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Email Body: As our last day is Friday, November 30th, we would love to toast the good times and special memories that we have shared with you over the past five years. Please join us at Teala's (W. Dallas) on Thursday, November 29th, beginning at 5pm. Looking forward to being with you, Lara and Janel Lara Leibman Original Subject: Farewell Drinks Human Annotation: Our last day See et al., ACL 2017: Friday 30th and day, W. Dallas -November Our System: Teala's (d) Email ID: arnold-j inbox 153", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Work hard, play hard: Email classification on the avocado and enron corpora", |
| "authors": [ |
| { |
| "first": "References", |
| "middle": [], |
| "last": "Sakhar", |
| "suffix": "" |
| }, |
| { |
| "first": "Alkhereyf", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Owen", |
| "middle": [], |
| "last": "Rambow", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of TextGraphs-11: the Workshop on Graph-based Methods for Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "References Sakhar Alkhereyf and Owen Rambow. 2017. Work hard, play hard: Email classification on the avocado and enron corpora. In Proceedings of TextGraphs- 11: the Workshop on Graph-based Methods for Nat- ural Language Processing.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "An actor-critic algorithm for sequence prediction", |
| "authors": [ |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Philemon", |
| "middle": [], |
| "last": "Brakel", |
| "suffix": "" |
| }, |
| { |
| "first": "Kelvin", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Anirudh", |
| "middle": [], |
| "last": "Goyal", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Lowe", |
| "suffix": "" |
| }, |
| { |
| "first": "Joelle", |
| "middle": [], |
| "last": "Pineau", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Courville", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. An actor-critic algorithm for sequence prediction. In ICLR.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Neural machine translation by jointly learning to align and translate", |
| "authors": [ |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Detecting action-items in e-mail", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Paul", |
| "suffix": "" |
| }, |
| { |
| "first": "Jaime", |
| "middle": [], |
| "last": "Bennett", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Carbonell", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "SIGIR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paul N Bennett and Jaime Carbonell. 2005. Detecting action-items in e-mail. In SIGIR.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Summarizing email conversations with clue words", |
| "authors": [ |
| { |
| "first": "Giuseppe", |
| "middle": [], |
| "last": "Carenini", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Raymond", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaodong", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Giuseppe Carenini, Raymond T Ng, and Xiaodong Zhou. 2007. Summarizing email conversations with clue words. In WWW.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Summarizing emails with conversational cohesion and subjectivity", |
| "authors": [ |
| { |
| "first": "Giuseppe", |
| "middle": [], |
| "last": "Carenini", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Raymond", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaodong", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Giuseppe Carenini, Raymond T Ng, and Xiaodong Zhou. 2008. Summarizing emails with conversa- tional cohesion and subjectivity. ACL.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Fast abstractive summarization with reinforce-selected sentence rewriting", |
| "authors": [ |
| { |
| "first": "Yen-Chun", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohit", |
| "middle": [], |
| "last": "Bansal", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yen-Chun Chen and Mohit Bansal. 2018. Fast abstrac- tive summarization with reinforce-selected sentence rewriting. In ACL.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Neural summarization by extracting sentences and words", |
| "authors": [ |
| { |
| "first": "Jianpeng", |
| "middle": [], |
| "last": "Cheng", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In ACL.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Task-focused summarization of email. Text Summarization Branches Out", |
| "authors": [ |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Corston-Oliver", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Ringger", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Gamon", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Campbell", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Simon Corston-Oliver, Eric Ringger, Michael Gamon, and Richard Campbell. 2004. Task-focused sum- marization of email. Text Summarization Branches Out.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Meteor universal: Language specific translation evaluation for any target language", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Denkowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Alon", |
| "middle": [], |
| "last": "Lavie", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the ninth workshop on statistical machine translation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the ninth workshop on statistical machine translation.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Generating summary keywords for emails using topics", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Dredze", |
| "suffix": "" |
| }, |
| { |
| "first": "Hanna", |
| "middle": [ |
| "M" |
| ], |
| "last": "Wallach", |
| "suffix": "" |
| }, |
| { |
| "first": "Danny", |
| "middle": [], |
| "last": "Puller", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 13th international conference on Intelligent user interfaces", |
| "volume": "", |
| "issue": "", |
| "pages": "199--206", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Dredze, Hanna M Wallach, Danny Puller, and Fernando Pereira. 2008. Generating summary key- words for emails using topics. In Proceedings of the 13th international conference on Intelligent user in- terfaces, pages 199-206. ACM.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Lexrank: Graph-based lexical centrality as salience in text summarization", |
| "authors": [ |
| { |
| "first": "G\u00fcnes", |
| "middle": [], |
| "last": "Erkan", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Dragomir R Radev", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "journal of artificial intelligence research", |
| "volume": "22", |
| "issue": "", |
| "pages": "457--479", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G\u00fcnes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. journal of artificial intelligence re- search, 22:457-479.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Multi-sentence compression: Finding shortest paths in word graphs", |
| "authors": [ |
| { |
| "first": "Katja", |
| "middle": [], |
| "last": "Filippova", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Katja Filippova. 2010. Multi-sentence compression: Finding shortest paths in word graphs. In COLING.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "A unified model for extractive and abstractive summarization using inconsistency loss", |
| "authors": [ |
| { |
| "first": "Wan-Ting", |
| "middle": [], |
| "last": "Hsu", |
| "suffix": "" |
| }, |
| { |
| "first": "Chieh-Kai", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Ying", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kerui", |
| "middle": [], |
| "last": "Min", |
| "suffix": "" |
| }, |
| { |
| "first": "Jing", |
| "middle": [], |
| "last": "Tang", |
| "suffix": "" |
| }, |
| { |
| "first": "Min", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wan-Ting Hsu, Chieh-Kai Lin, Ming-Ying Lee, Kerui Min, Jing Tang, and Min Sun. 2018. A unified model for extractive and abstractive summarization using inconsistency loss. In ACL.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Deep reinforcement learning for sequence to sequence models", |
| "authors": [ |
| { |
| "first": "Yaser", |
| "middle": [], |
| "last": "Keneshloo", |
| "suffix": "" |
| }, |
| { |
| "first": "Tian", |
| "middle": [], |
| "last": "Shi", |
| "suffix": "" |
| }, |
| { |
| "first": "Chandan", |
| "middle": [ |
| "K" |
| ], |
| "last": "Reddy", |
| "suffix": "" |
| }, |
| { |
| "first": "Naren", |
| "middle": [], |
| "last": "Ramakrishnan", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1805.09461" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yaser Keneshloo, Tian Shi, Chandan K Reddy, and Naren Ramakrishnan. 2018. Deep reinforcement learning for sequence to sequence models. arXiv preprint arXiv:1805.09461.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Convolutional neural networks for sentence classification", |
| "authors": [ |
| { |
| "first": "Yoon", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Adam: A method for stochastic optimization", |
| "authors": [ |
| { |
| "first": "Diederik", |
| "middle": [ |
| "P" |
| ], |
| "last": "Kingma", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Source-side prediction for neural headline generation", |
| "authors": [ |
| { |
| "first": "Shun", |
| "middle": [], |
| "last": "Kiyono", |
| "suffix": "" |
| }, |
| { |
| "first": "Sho", |
| "middle": [], |
| "last": "Takase", |
| "suffix": "" |
| }, |
| { |
| "first": "Jun", |
| "middle": [], |
| "last": "Suzuki", |
| "suffix": "" |
| }, |
| { |
| "first": "Naoaki", |
| "middle": [], |
| "last": "Okazaki", |
| "suffix": "" |
| }, |
| { |
| "first": "Kentaro", |
| "middle": [], |
| "last": "Inui", |
| "suffix": "" |
| }, |
| { |
| "first": "Masaaki", |
| "middle": [], |
| "last": "Nagata", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1712.08302" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shun Kiyono, Sho Takase, Jun Suzuki, Naoaki Okazaki, Kentaro Inui, and Masaaki Nagata. 2017. Source-side prediction for neural headline genera- tion. arXiv preprint arXiv:1712.08302.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "The enron corpus: A new dataset for email classification research", |
| "authors": [ |
| { |
| "first": "Bryan", |
| "middle": [], |
| "last": "Klimt", |
| "suffix": "" |
| }, |
| { |
| "first": "Yiming", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "ECML", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bryan Klimt and Yiming Yang. 2004. The enron cor- pus: A new dataset for email classification research. In ECML.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Keyword extraction from emails", |
| "authors": [ |
| { |
| "first": "Shibamouli", |
| "middle": [], |
| "last": "Lahiri", |
| "suffix": "" |
| }, |
| { |
| "first": "Rada", |
| "middle": [], |
| "last": "Mihalcea", |
| "suffix": "" |
| }, |
| { |
| "first": "P-H", |
| "middle": [], |
| "last": "Lai", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Natural Language Engineering", |
| "volume": "23", |
| "issue": "2", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shibamouli Lahiri, Rada Mihalcea, and P-H Lai. 2017. Keyword extraction from emails. Natural Language Engineering, 23(2).", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out", |
| "authors": [ |
| { |
| "first": "Chin-Yew", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chin-Yew Lin. 2004. Rouge: A package for auto- matic evaluation of summaries. Text Summarization Branches Out.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Actionable email intent modeling with reparametrized rnns", |
| "authors": [ |
| { |
| "first": "Chu-Cheng", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Dongyeop", |
| "middle": [], |
| "last": "Kang", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Gamon", |
| "suffix": "" |
| }, |
| { |
| "first": "Madian", |
| "middle": [], |
| "last": "Khabsa", |
| "suffix": "" |
| }, |
| { |
| "first": "Ahmed", |
| "middle": [], |
| "last": "Hassan Awadallah", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chu-Cheng Lin, Dongyeop Kang, Michael Gamon, Madian Khabsa, Ahmed Hassan Awadallah, and Patrick Pantel. 2018. Actionable email intent mod- eling with reparametrized rnns. In AAAI.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Generating news headlines with recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Konstantin", |
| "middle": [], |
| "last": "Lopyrev", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1512.01712" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Konstantin Lopyrev. 2015. Generating news head- lines with recurrent neural networks. arXiv preprint arXiv:1512.01712.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Towards an automatic turing test: Learning to evaluate dialogue responses", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Lowe", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Noseworthy", |
| "suffix": "" |
| }, |
| { |
| "first": "Iulian", |
| "middle": [], |
| "last": "Vlad Serban", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicolas", |
| "middle": [], |
| "last": "Angelard-Gontier", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| }, |
| { |
| "first": "Joelle", |
| "middle": [], |
| "last": "Pineau", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan Lowe, Michael Noseworthy, Iulian Vlad Ser- ban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an automatic turing test: Learning to evaluate dialogue responses. In ACL.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Building a dataset for summarization and keyword extraction from emails", |
| "authors": [ |
| { |
| "first": "Vanessa", |
| "middle": [], |
| "last": "Loza", |
| "suffix": "" |
| }, |
| { |
| "first": "Shibamouli", |
| "middle": [], |
| "last": "Lahiri", |
| "suffix": "" |
| }, |
| { |
| "first": "Rada", |
| "middle": [], |
| "last": "Mihalcea", |
| "suffix": "" |
| }, |
| { |
| "first": "Po-Hsiang", |
| "middle": [], |
| "last": "Lai", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vanessa Loza, Shibamouli Lahiri, Rada Mihalcea, and Po-Hsiang Lai. 2014. Building a dataset for sum- marization and keyword extraction from emails. In LREC.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Effective approaches to attention-based neural machine translation", |
| "authors": [ |
| { |
| "first": "Thang", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| }, |
| { |
| "first": "Hieu", |
| "middle": [], |
| "last": "Pham", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based neural machine translation. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Textrank: Bringing order into text", |
| "authors": [ |
| { |
| "first": "Rada", |
| "middle": [], |
| "last": "Mihalcea", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Tarau", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rada Mihalcea and Paul Tarau. 2004. Textrank: Bring- ing order into text. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Distributed representations of words and phrases and their compositionality", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [ |
| "S" |
| ], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeff", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In NIPS.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Combining linguistic and machine learning techniques for email summarization", |
| "authors": [ |
| { |
| "first": "Smaranda", |
| "middle": [], |
| "last": "Muresan", |
| "suffix": "" |
| }, |
| { |
| "first": "Evelyne", |
| "middle": [], |
| "last": "Tzoukermann", |
| "suffix": "" |
| }, |
| { |
| "first": "Judith", |
| "middle": [ |
| "L" |
| ], |
| "last": "Klavans", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Smaranda Muresan, Evelyne Tzoukermann, and Ju- dith L Klavans. 2001. Combining linguistic and ma- chine learning techniques for email summarization. In CoNLL.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization", |
| "authors": [ |
| { |
| "first": "Shashi", |
| "middle": [], |
| "last": "Narayan", |
| "suffix": "" |
| }, |
| { |
| "first": "Shay", |
| "middle": [ |
| "B" |
| ], |
| "last": "Cohen", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018a. Don't give me the details, just the summary! Topic-aware convolutional neural networks for ex- treme summarization. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Ranking sentences for extractive summarization with reinforcement learning", |
| "authors": [ |
| { |
| "first": "Shashi", |
| "middle": [], |
| "last": "Narayan", |
| "suffix": "" |
| }, |
| { |
| "first": "Shay", |
| "middle": [ |
| "B" |
| ], |
| "last": "Cohen", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018b. Ranking sentences for extractive summa- rization with reinforcement learning. In NAACL.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Facilitating email thread access by extractive summary generation", |
| "authors": [ |
| { |
| "first": "Ani", |
| "middle": [], |
| "last": "Nenkova", |
| "suffix": "" |
| }, |
| { |
| "first": "Amit", |
| "middle": [], |
| "last": "Bagga", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ani Nenkova and Amit Bagga. 2003. Facilitating email thread access by extractive summary genera- tion. RANLP.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "On the difficulty of training recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Razvan", |
| "middle": [], |
| "last": "Pascanu", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In ICML.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "A deep reinforced model for abstractive summarization", |
| "authors": [ |
| { |
| "first": "Romain", |
| "middle": [], |
| "last": "Paulus", |
| "suffix": "" |
| }, |
| { |
| "first": "Caiming", |
| "middle": [], |
| "last": "Xiong", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive sum- marization. In ICLR.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Predicting power relations between participants in written dialog from a single thread", |
| "authors": [ |
| { |
| "first": "Vinodkumar", |
| "middle": [], |
| "last": "Prabhakaran", |
| "suffix": "" |
| }, |
| { |
| "first": "Owen", |
| "middle": [], |
| "last": "Rambow", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vinodkumar Prabhakaran and Owen Rambow. 2014. Predicting power relations between participants in written dialog from a single thread. In ACL.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Gender and power: How gender and gender environment affect manifestations of power", |
| "authors": [ |
| { |
| "first": "Vinodkumar", |
| "middle": [], |
| "last": "Prabhakaran", |
| "suffix": "" |
| }, |
| { |
| "first": "Emily", |
| "middle": [ |
| "E" |
| ], |
| "last": "Reid", |
| "suffix": "" |
| }, |
| { |
| "first": "Owen", |
| "middle": [], |
| "last": "Rambow", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vinodkumar Prabhakaran, Emily E Reid, and Owen Rambow. 2014. Gender and power: How gen- der and gender environment affect manifestations of power. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Summarizing email threads", |
| "authors": [ |
| { |
| "first": "Owen", |
| "middle": [], |
| "last": "Rambow", |
| "suffix": "" |
| }, |
| { |
| "first": "Lokesh", |
| "middle": [], |
| "last": "Shrestha", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Chirsty", |
| "middle": [], |
| "last": "Lauridsen", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Owen Rambow, Lokesh Shrestha, John Chen, and Chirsty Lauridsen. 2004. Summarizing email threads. In NAACL.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "Sequence level training with recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Marc'Aurelio", |
| "middle": [], |
| "last": "Ranzato", |
| "suffix": "" |
| }, |
| { |
| "first": "Sumit", |
| "middle": [], |
| "last": "Chopra", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Auli", |
| "suffix": "" |
| }, |
| { |
| "first": "Wojciech", |
| "middle": [], |
| "last": "Zaremba", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level train- ing with recurrent neural networks. In ICLR.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "A neural attention model for abstractive sentence summarization", |
| "authors": [ |
| { |
| "first": "Alexander", |
| "middle": [ |
| "M" |
| ], |
| "last": "Rush", |
| "suffix": "" |
| }, |
| { |
| "first": "Sumit", |
| "middle": [], |
| "last": "Chopra", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sen- tence summarization. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Grammatical error correction with neural reinforcement learning", |
| "authors": [ |
| { |
| "first": "Keisuke", |
| "middle": [], |
| "last": "Sakaguchi", |
| "suffix": "" |
| }, |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Post", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Van Durme", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "IJCNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Keisuke Sakaguchi, Matt Post, and Benjamin Van Durme. 2017. Grammatical error correction with neural reinforcement learning. In IJCNLP.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "Classifying action items for semantic email", |
| "authors": [ |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Scerri", |
| "suffix": "" |
| }, |
| { |
| "first": "Gerhard", |
| "middle": [], |
| "last": "Gossen", |
| "suffix": "" |
| }, |
| { |
| "first": "Brian", |
| "middle": [], |
| "last": "Davis", |
| "suffix": "" |
| }, |
| { |
| "first": "Siegfried", |
| "middle": [], |
| "last": "Handschuh", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Simon Scerri, Gerhard Gossen, Brian Davis, and Siegfried Handschuh. 2010. Classifying action items for semantic email. In LREC.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "Get to the point: Summarization with pointergenerator networks", |
| "authors": [ |
| { |
| "first": "Abigail", |
| "middle": [], |
| "last": "See", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [ |
| "J" |
| ], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer- generator networks. In ACL.", |
| "links": null |
| }, |
| "BIBREF48": { |
| "ref_id": "b48", |
| "title": "Unsupervised abstractive meeting summarization with multisentence compression and budgeted submodular maximization", |
| "authors": [ |
| { |
| "first": "Guokan", |
| "middle": [], |
| "last": "Shang", |
| "suffix": "" |
| }, |
| { |
| "first": "Wensi", |
| "middle": [], |
| "last": "Ding", |
| "suffix": "" |
| }, |
| { |
| "first": "Zekun", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Antoine", |
| "middle": [], |
| "last": "Tixier", |
| "suffix": "" |
| }, |
| { |
| "first": "Polykarpos", |
| "middle": [], |
| "last": "Meladianos", |
| "suffix": "" |
| }, |
| { |
| "first": "Michalis", |
| "middle": [], |
| "last": "Vazirgiannis", |
| "suffix": "" |
| }, |
| { |
| "first": "Jean-Pierre", |
| "middle": [], |
| "last": "Lorr\u00e9", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Guokan Shang, Wensi Ding, Zekun Zhang, Antoine Tixier, Polykarpos Meladianos, Michalis Vazir- giannis, and Jean-Pierre Lorr\u00e9. 2018. Unsuper- vised abstractive meeting summarization with multi- sentence compression and budgeted submodular maximization. In ACL.", |
| "links": null |
| }, |
| "BIBREF49": { |
| "ref_id": "b49", |
| "title": "Recent advances on neural headline generation", |
| "authors": [ |
| { |
| "first": "Shi-Qi", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "Yan-Kai", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Cun-Chao", |
| "middle": [], |
| "last": "Tu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yu", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhi-Yuan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Mao-Song", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Journal of Computer Science and Technology", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shi-Qi Shen, Yan-Kai Lin, Cun-Chao Tu, Yu Zhao, Zhi-Yuan Liu, Mao-Song Sun, et al. 2017. Recent advances on neural headline generation. Journal of Computer Science and Technology.", |
| "links": null |
| }, |
| "BIBREF50": { |
| "ref_id": "b50", |
| "title": "Metric for automatic machine translation evaluation based on universal sentence representations", |
| "authors": [ |
| { |
| "first": "Hiroki", |
| "middle": [], |
| "last": "Shimanaka", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomoyuki", |
| "middle": [], |
| "last": "Kajiwara", |
| "suffix": "" |
| }, |
| { |
| "first": "Mamoru", |
| "middle": [], |
| "last": "Komachi", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "NAACL: Student Research Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hiroki Shimanaka, Tomoyuki Kajiwara, and Mamoru Komachi. 2018. Metric for automatic machine translation evaluation based on universal sentence representations. In NAACL: Student Research Work- shop.", |
| "links": null |
| }, |
| "BIBREF51": { |
| "ref_id": "b51", |
| "title": "Sequence to sequence learning with neural networks", |
| "authors": [ |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc V", |
| "middle": [], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural net- works. In NIPS.", |
| "links": null |
| }, |
| "BIBREF52": { |
| "ref_id": "b52", |
| "title": "From neural sentence summarization to headline generation: a coarse-to-fine approach", |
| "authors": [ |
| { |
| "first": "Jiwei", |
| "middle": [], |
| "last": "Tan", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaojun", |
| "middle": [], |
| "last": "Wan", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianguo", |
| "middle": [], |
| "last": "Xiao", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "IJCAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2017. From neural sentence summarization to headline generation: a coarse-to-fine approach. In IJCAI.", |
| "links": null |
| }, |
| "BIBREF53": { |
| "ref_id": "b53", |
| "title": "Lowresource neural headline generation", |
| "authors": [ |
| { |
| "first": "Ottokar", |
| "middle": [], |
| "last": "Tilk", |
| "suffix": "" |
| }, |
| { |
| "first": "Tanel", |
| "middle": [], |
| "last": "Alum\u00e4e", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1707.09769" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ottokar Tilk and Tanel Alum\u00e4e. 2017. Low- resource neural headline generation. arXiv preprint arXiv:1707.09769.", |
| "links": null |
| }, |
| "BIBREF54": { |
| "ref_id": "b54", |
| "title": "Learning algorithms for keyphrase extraction", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Peter", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Turney", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Information retrieval", |
| "volume": "2", |
| "issue": "4", |
| "pages": "303--336", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter D Turney. 2000. Learning algorithms for keyphrase extraction. Information retrieval, 2(4):303-336.", |
| "links": null |
| }, |
| "BIBREF55": { |
| "ref_id": "b55", |
| "title": "Regression-based summarization of email conversations", |
| "authors": [ |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Ulrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Giuseppe", |
| "middle": [], |
| "last": "Carenini", |
| "suffix": "" |
| }, |
| { |
| "first": "Gabriel", |
| "middle": [], |
| "last": "Murray", |
| "suffix": "" |
| }, |
| { |
| "first": "Raymond T", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "ICWSM", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jan Ulrich, Giuseppe Carenini, Gabriel Murray, and Raymond T Ng. 2009. Regression-based summa- rization of email conversations. In ICWSM.", |
| "links": null |
| }, |
| "BIBREF56": { |
| "ref_id": "b56", |
| "title": "Pointer networks. In NIPS", |
| "authors": [ |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Meire", |
| "middle": [], |
| "last": "Fortunato", |
| "suffix": "" |
| }, |
| { |
| "first": "Navdeep", |
| "middle": [], |
| "last": "Jaitly", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In NIPS.", |
| "links": null |
| }, |
| "BIBREF57": { |
| "ref_id": "b57", |
| "title": "Generating overview summaries of ongoing email thread discussions", |
| "authors": [ |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Wan", |
| "suffix": "" |
| }, |
| { |
| "first": "Kathy", |
| "middle": [], |
| "last": "Mckeown", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephen Wan and Kathy McKeown. 2004. Generating overview summaries of ongoing email thread discus- sions. In COLING.", |
| "links": null |
| }, |
| "BIBREF58": { |
| "ref_id": "b58", |
| "title": "Simple statistical gradientfollowing algorithms for connectionist reinforcement learning", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Ronald", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Williams", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Machine learning", |
| "volume": "8", |
| "issue": "3-4", |
| "pages": "229--256", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ronald J Williams. 1992. Simple statistical gradient- following algorithms for connectionist reinforce- ment learning. Machine learning, 8(3-4):229-256.", |
| "links": null |
| }, |
| "BIBREF59": { |
| "ref_id": "b59", |
| "title": "Single-document and multi-document summarization techniques for email threads using sentence compression", |
| "authors": [ |
| { |
| "first": "Bonnie", |
| "middle": [ |
| "J" |
| ], |
| "last": "David M Zajic", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Dorr", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Information Processing & Management", |
| "volume": "44", |
| "issue": "4", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David M Zajic, Bonnie J Dorr, and Jimmy Lin. 2008. Single-document and multi-document summariza- tion techniques for email threads using sentence compression. Information Processing & Manage- ment, 44(4).", |
| "links": null |
| }, |
| "BIBREF60": { |
| "ref_id": "b60", |
| "title": "Sentence simplification with deep reinforcement learning", |
| "authors": [ |
| { |
| "first": "Xingxing", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xingxing Zhang and Mirella Lapata. 2017. Sentence simplification with deep reinforcement learning. In EMNLP.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF0": { |
| "html": null, |
| "content": "<table/>", |
| "type_str": "table", |
| "num": null, |
| "text": "An email with three possible subject lines." |
| }, |
| "TABREF3": { |
| "html": null, |
| "content": "<table><tr><td>tive of supervised training and policy learning.</td></tr><tr><td>Hsu et al. (2018) extend the pointer-generator net-</td></tr><tr><td>work by unifying the sentence-level attention and</td></tr><tr><td>the word-level attention. Narayan et al. (2018a)</td></tr><tr><td>use a topic-based convolutional neural network to</td></tr><tr><td>generate extreme summarization for news docu-</td></tr><tr><td>ments. While they are quite successful in sin-</td></tr><tr><td>gle document summarization, they are mostly ex-</td></tr><tr><td>tractive, exhibiting a small degree of abstraction</td></tr></table>", |
| "type_str": "table", |
| "num": null, |
| "text": "Automatic metric scores. bold: best. underlined: second best. * indicates there is no statistically significant difference from our system with p < 0.01 under a paired t-test." |
| }, |
| "TABREF5": { |
| "html": null, |
| "content": "<table><tr><td colspan=\"4\">: ESQE score. Compared with our system, all</td></tr><tr><td colspan=\"4\">other are statistically significant with p < 0.01 under a</td></tr><tr><td>paired t-test.</td><td/><td/><td/></tr><tr><td/><td colspan=\"3\">Overall Informative Fluent</td></tr><tr><td>Random</td><td>1.10 *</td><td>1.45</td><td>2.21</td></tr><tr><td>See et al. (2017)</td><td>1.45 *</td><td>1.98</td><td>1.61</td></tr><tr><td>Our System</td><td>2.28</td><td>2.38</td><td>2.89</td></tr><tr><td>Original Subject</td><td>2.56</td><td>2.66</td><td>3.11</td></tr><tr><td colspan=\"2\">Human Annotation 2.74 *</td><td>3.07</td><td>2.94</td></tr></table>", |
| "type_str": "table", |
| "num": null, |
| "text": "" |
| }, |
| "TABREF6": { |
| "html": null, |
| "content": "<table><tr><td>indicates the differ-</td></tr></table>", |
| "type_str": "table", |
| "num": null, |
| "text": "Human evaluation." |
| }, |
| "TABREF7": { |
| "html": null, |
| "content": "<table/>", |
| "type_str": "table", |
| "num": null, |
| "text": "Case study. The sentences extracted by our model are underlined. (a)(b)(c): Our model can generate effective subjects by extracting and rewriting multiple sentences containing salient information. (d): Our model fails to generate reasonable subjects for the novel topic of \"farewell\" which is not seen during training." |
| } |
| } |
| } |
| } |