{
"paper_id": "R19-1018",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:03:32.109955Z"
},
"title": "Learning Sentence Embeddings for Coherence Modelling and Beyond",
"authors": [
{
"first": "Tanner",
"middle": [],
"last": "Bohn",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Western University",
"location": {
"settlement": "London",
"region": "ON",
"country": "Canada"
}
},
"email": "tbohn@uwo.ca"
},
{
"first": "Yining",
"middle": [],
"last": "Hu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Western University",
"location": {
"settlement": "London",
"region": "ON",
"country": "Canada"
}
},
"email": ""
},
{
"first": "Jinhang",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Western University",
"location": {
"settlement": "London",
"region": "ON",
"country": "Canada"
}
},
"email": ""
},
{
"first": "Charles",
"middle": [
"X"
],
"last": "Ling",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Western University",
"location": {
"settlement": "London",
"region": "ON",
"country": "Canada"
}
},
"email": "charles.ling@uwo.ca"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a novel and effective technique for performing text coherence tasks while facilitating deeper insights into the data. Despite obtaining ever-increasing task performance, modern deep-learning approaches to NLP tasks often only provide users with the final network decision and no additional understanding of the data. In this work, we show that a new type of sentence embedding learned through self-supervision can be applied effectively to text coherence tasks while serving as a window through which deeper understanding of the data can be obtained. To produce these sentence embeddings, we train a recurrent neural network to take individual sentences and predict their location in a document in the form of a distribution over locations. We demonstrate that these embeddings, combined with simple visual heuristics, can be used to achieve performance competitive with state-of-the-art on multiple text coherence tasks, outperforming more complex and specialized approaches. Additionally, we demonstrate that these embeddings can provide insights useful to writers for improving writing quality and informing document structuring, and assisting readers in summarizing and locating information.",
"pdf_parse": {
"paper_id": "R19-1018",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a novel and effective technique for performing text coherence tasks while facilitating deeper insights into the data. Despite obtaining ever-increasing task performance, modern deep-learning approaches to NLP tasks often only provide users with the final network decision and no additional understanding of the data. In this work, we show that a new type of sentence embedding learned through self-supervision can be applied effectively to text coherence tasks while serving as a window through which deeper understanding of the data can be obtained. To produce these sentence embeddings, we train a recurrent neural network to take individual sentences and predict their location in a document in the form of a distribution over locations. We demonstrate that these embeddings, combined with simple visual heuristics, can be used to achieve performance competitive with state-of-the-art on multiple text coherence tasks, outperforming more complex and specialized approaches. Additionally, we demonstrate that these embeddings can provide insights useful to writers for improving writing quality and informing document structuring, and assisting readers in summarizing and locating information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A goal of much of NLP research is to create tools that not only assist in completing tasks, but help gain insights into the text being analyzed. This is especially true of text coherence tasks, as users are likely to wonder where efforts should be focused Figure 1 : This paper abstract is analyzed by our sentence position model trained on academic abstracts. The sentence encodings (predicted position distributions) are shown below each sentence, where white is low probability and red is high. Position quantiles are ordered from left to right. The first sentence, for example, is typical of the first sentence of abstracts as reflected in the high firstquantile value. For two text coherence tasks, we show the how the sentence encodings can easily be used to solve them. The black dots indicate the weighted average predicted position for each sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 256,
"end": 264,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "to improve writing or understand how text should be reorganized for improved coherence. By improving coherence, a text becomes easier to read and understand (Lapata and Barzilay, 2005) , and in this work we particularly focus on measuring coherence in terms of sentence ordering.",
"cite_spans": [
{
"start": 157,
"end": 184,
"text": "(Lapata and Barzilay, 2005)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many recent approaches to NLP tasks make use of end-to-end neural approaches which exhibit ever-increasing performance, but provide little value to end-users beyond a classification or regression value (Gong et al., 2016; Logeswaran et al., 2018; Cui et al., 2018) . This leaves open the question of whether we can achieve good performance on NLP tasks while simultaneously providing users with easily obtainable insights into the data. This is precisely what the work in this paper aims to do in the context of coherence analysis, by providing a tool with which users can quickly and visually gain insight into structural information about a text. To accomplish this, we rely on the surprising importance of sentence location in many areas of natural language processing. If a sentence does not appear to belong where it is located, it decreases the coherence and readability of the text (Lapata and Barzilay, 2005) . If a sentence is located at the beginning of a document or news article, it is very likely to be a part of a high quality extractive summary . The location of a sentence in a scientific abstract is also an informative indicator of its rhetorical purpose (Teufel et al., 1999) . It thus follows that the knowledge of where a sentence should be located in a text is valuable.",
"cite_spans": [
{
"start": 202,
"end": 221,
"text": "(Gong et al., 2016;",
"ref_id": "BIBREF18"
},
{
"start": 222,
"end": 246,
"text": "Logeswaran et al., 2018;",
"ref_id": "BIBREF28"
},
{
"start": 247,
"end": 264,
"text": "Cui et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 889,
"end": 916,
"text": "(Lapata and Barzilay, 2005)",
"ref_id": "BIBREF24"
},
{
"start": 1173,
"end": 1194,
"text": "(Teufel et al., 1999)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Tasks requiring knowledge of sentence position -both relative to neighboring sentences and globally -appear in text coherence modelling, with two important tasks being order discrimination (is a sequence of sentences in the correct order?) and sentence ordering (re-order a set of unordered sentences). Traditional methods in this area make use of manual feature engineering and established theory behind coherence (Lapata and Barzilay, 2005; Barzilay and Lapata, 2008; Grosz et al., 1995) . Modern deep-learning based approaches to these tasks tend to revolve around taking raw words and directly predicting local (Li and Hovy, 2014; or global (Cui et al., 2017; Li and Jurafsky, 2017) coherence scores or directly output a coherent sentence ordering (Gong et al., 2016; Logeswaran et al., 2018; Cui et al., 2018) . While new deep-learning based approaches in text coherence continue to achieve ever-increasing performance, their value in real-world applications is undermined by the lack of actionable insights made available to users.",
"cite_spans": [
{
"start": 415,
"end": 442,
"text": "(Lapata and Barzilay, 2005;",
"ref_id": "BIBREF24"
},
{
"start": 443,
"end": 469,
"text": "Barzilay and Lapata, 2008;",
"ref_id": "BIBREF10"
},
{
"start": 470,
"end": 489,
"text": "Grosz et al., 1995)",
"ref_id": "BIBREF19"
},
{
"start": 615,
"end": 634,
"text": "(Li and Hovy, 2014;",
"ref_id": "BIBREF25"
},
{
"start": 645,
"end": 663,
"text": "(Cui et al., 2017;",
"ref_id": "BIBREF15"
},
{
"start": 664,
"end": 686,
"text": "Li and Jurafsky, 2017)",
"ref_id": "BIBREF26"
},
{
"start": 752,
"end": 771,
"text": "(Gong et al., 2016;",
"ref_id": "BIBREF18"
},
{
"start": 772,
"end": 796,
"text": "Logeswaran et al., 2018;",
"ref_id": "BIBREF28"
},
{
"start": 797,
"end": 814,
"text": "Cui et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we introduce a self-supervised approach for learning sentence embeddings which can be used effectively for text coherence tasks (Section 3) while also facilitating deeper understanding of the data (Section 4). Figure 1 provides a taste of this, displaying the sentence embeddings for the abstract of this paper. The self-supervision task we employ is that of predicting the location of a sentence in a document given only the raw text. By training a neural network on this task, it is forced to learn how the location of a sentence in a structured text is related to its syntax and semantics. As a neural model, we use a bidirectional recurrent neural network, and train it to take sentences and predict a discrete distribution over possible locations in the source text. We demonstrate the effectiveness of predicted position distributions as an accurate way to assess document coherence by performing order discrimination and sentence reordering of scientific abstracts. We also demonstrate a few types of insights that these embeddings make available to users that the predicted location of a sentence in a news article can be used to formulate an effective heuristic for extractive document summarization -outperforming existing heuristic methods.",
"cite_spans": [],
"ref_spans": [
{
"start": 225,
"end": 233,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The primary contributions of this work are thus:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. We propose a novel self-supervised approach to learn sentence embeddings which works by learning to map sentences to a distribution over positions in a document (Section 2.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. We describe how these sentence embeddings can be applied to established coherence tasks using simple algorithms amenable to visual approximation (Section 2.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. We demonstrate that these embeddings are competitive at solving text coherence tasks (Section 3) while quickly providing access to further insights into texts (Section 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "By training a machine learning model to predict the location of a sentence in a body of text (conditioned upon features not trivially indicative of position), we obtain a sentence position model such that sentences predicted to be at a particular location possess properties typical of sentences found at that position. For example, if a sentence is predicted to be at the beginning of a news article, it should resemble an introductory sentence. In the remainder of this section we describe our neural sentence position model and then discuss how it can be applied to text coherence tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicted Position Distributions 2.1 Overview",
"sec_num": "2"
},
{
"text": "The purpose of the position model is to produce sentence embeddings by predicting the position in Sentences from a text are individually fed into the model to produce a PPD sequence. In this diagram we see a word sequence of length three fed into the model, which will output a single row in the PPD sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Position Model",
"sec_num": "2.2"
},
{
"text": "a text of a given sentence. Training this model requires no manual labeling, needing only samples of text from the target domain. By discovering patterns in this data, the model produces sentence embeddings suitable for a variety of coherencerelated NLP tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Position Model",
"sec_num": "2.2"
},
{
"text": "To implement the position model, we use stacked bi-directional LSTMs (Schuster and Paliwal, 1997) followed by a softmax output layer. Instead of predicting a single continuous value for the position of a sentence as the fraction of the way through a document, we frame sentence position prediction as a classification problem. Framing the position prediction task as classification was initially motivated by the poor performance of regression models; since the task of position prediction is quite difficult, we observed that regression models would consistently make predictions very close to 0.5 (middle of the document), thus not providing much useful information. To convert the task to a classification prob-lem, we aim to determine what quantile of the document a sentence resides in. Notationally, we will refer to the number of quantiles as Q. We can interpret the class probabilities behind a prediction as a discrete distribution over positions for a sentence, providing us with a predicted position distribution (PPD). When Q = 2 for example, we are predicting whether a sentence is in the first or last half of a document. When Q = 4, we are predicting which quarter of the document it is in. In Figure 2 is a visualization of the neural architecture which produces PPDs of Q = 10.",
"cite_spans": [
{
"start": 69,
"end": 97,
"text": "(Schuster and Paliwal, 1997)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 1209,
"end": 1217,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "2.2.1"
},
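The quantile-classification framing above can be sketched concretely. The exact binning rule below (mapping a sentence's fractional position to one of Q labels) is an assumed illustration, not taken from the paper:

```python
# Sketch of the classification framing: each sentence's training label
# is the quantile of the document it falls in. The binning rule here is
# an assumption for illustration.
def position_quantile(index, n_sentences, Q=10):
    """Map a 0-based sentence index to a quantile label in {0, ..., Q-1}."""
    frac = index / max(n_sentences - 1, 1)  # fraction of the way through
    return min(int(frac * Q), Q - 1)

# with Q = 4, the quarters of a 10-sentence document:
labels = [position_quantile(i, 10, Q=4) for i in range(10)]
# labels == [0, 0, 0, 1, 1, 2, 2, 3, 3, 3]
```

A trained model then outputs a softmax over these Q classes, which is read directly as the sentence's PPD.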
{
"text": "The sentence position model receives an input sentence as a sequence of word encodings and outputs a single vector of dimension Q. Sentences are fed into the BiLSTM one at a time as a sequence of word encodings, where the encoding for each word consists of the concatenation of: (1) a pretrained word embedding, (2) the average of the pretrained word embedding for the entire document (which is constant for all words in a document), and (3) the difference of the first two components (although this information is learnable given the first two components, we found during early experimentation that it confers a small performance improvement). In addition to our own observations, the document-wide average component was also shown in (Logeswaran et al., 2018) to improve performance at sentence ordering, a task similar to sentence location prediction. For the pretrained word embeddings, we use 300 dimensional fastText embeddings 1 , shown to have excellent cross-task performance (Joulin et al., 2016) . In Figure 2 , the notation f txt(token) represents converting a textual token (word or document) to its fastText embedding. The embedding for a document is the average of the embeddings for all words in it.",
"cite_spans": [
{
"start": 736,
"end": 761,
"text": "(Logeswaran et al., 2018)",
"ref_id": "BIBREF28"
},
{
"start": 985,
"end": 1006,
"text": "(Joulin et al., 2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 1012,
"end": 1020,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Features Used",
"sec_num": "2.2.2"
},
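The three-part word encoding described above can be sketched with stand-in vectors (random arrays in place of 300-dimensional fastText embeddings):

```python
import numpy as np

# Each word's input encoding is the concatenation of (1) its word
# embedding, (2) the document-average embedding (constant within a
# document), and (3) their difference. Random vectors stand in for
# fastText embeddings here.
def encode_words(word_vecs):
    doc_avg = word_vecs.mean(axis=0)
    n = len(word_vecs)
    return np.concatenate(
        [word_vecs, np.tile(doc_avg, (n, 1)), word_vecs - doc_avg],
        axis=1)  # shape (n_words, 3 * d)

vecs = np.random.rand(5, 300)  # a 5-word sentence, 300-d embeddings
enc = encode_words(vecs)       # enc.shape == (5, 900)
```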
{
"text": "The features composing the sentence embeddings fed into the position model must be chosen carefully so that the order of the sentences does not directly affect the embeddings (i.e. the sentence embeddings should be the same whether the sentence ordering is permuted or not). This is because we want the predicted sentence positions to be independent of the true sentence position, and not every sentence embedding technique provides this. As a simple example, if we include the true location of a sentence in a text as a feature when training the position model, then instead of learning the connection between sentence meaning and position, the mapping would trivially exploit the known sentence position to perfectly predict the sentence quantile position. This would not allow us to observe where the sentence seems it should be located.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features Used",
"sec_num": "2.2.2"
},
{
"text": "For the tasks of both sentence ordering and calculating coherence, PPDs can be combined with simple visually intuitive heuristics, as demonstrated in Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 150,
"end": 158,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Application to Coherence Tasks",
"sec_num": "2.3"
},
{
"text": "Calculate weighted average predicted sentence quantiles",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application to Coherence Tasks",
"sec_num": "2.3"
},
{
"text": "Sentences 1, 2, and 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": null
},
{
"text": "Extract sentences with highest Q1 probability Reordered Sentences [1, 2, 7, 6, 5, 3, 4, 8, 7, 9] Kendall's tau Coherence Score 0.5",
"cite_spans": [
{
"start": 66,
"end": 69,
"text": "[1,",
"ref_id": null
},
{
"start": 70,
"end": 72,
"text": "2,",
"ref_id": null
},
{
"start": 73,
"end": 75,
"text": "7,",
"ref_id": null
},
{
"start": 76,
"end": 78,
"text": "6,",
"ref_id": null
},
{
"start": 79,
"end": 81,
"text": "5,",
"ref_id": null
},
{
"start": 82,
"end": 84,
"text": "3,",
"ref_id": null
},
{
"start": 85,
"end": 87,
"text": "4,",
"ref_id": null
},
{
"start": 88,
"end": 90,
"text": "8,",
"ref_id": null
},
{
"start": 91,
"end": 93,
"text": "7,",
"ref_id": null
},
{
"start": 94,
"end": 96,
"text": "9]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": null
},
{
"text": "Islamabad , pakistani --a 9 -month -old pakistani boy bawled as he was fingerprinted and booked in lahore on an attempted murder charge after his family members allegedly threw bricks at police trying to collect an unpaid bill. The ordeal started february 1 when several police officers and a bailiff went to a home hoping to get payment for a gas bill , said butt , a senior police official in lahore. A scuffle ensued , during which the infant 's father , one of his teenage sons and others in t...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Original Text (news article)",
"sec_num": null
},
{
"text": "Figure 3: A visualization of our NLP algorithms utilizing PPDs applied to a news article. To reorder sentences, we calculate average weighted positions (identified with black circles) to induce an ordering. Coherence is calculated with the Kendall's rank correlation coefficient between the true and induced ranking. We also show how PPDs can be used to perform summarization, as we will explore further in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Induce ranking with weighted average predicted positions",
"sec_num": null
},
{
"text": "To induce a new ordering on a sequence of sentences, S, we simply sort the sentence by their weighted average predicted quantile,Q(s \u2208 S), defined by:Q",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Ordering",
"sec_num": "2.3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(s) = Q i=1 i \u00d7 P P D(s) i ,",
"eq_num": "(1)"
}
],
"section": "Sentence Ordering",
"sec_num": "2.3.1"
},
{
"text": "where P P D(s) is the Q-dimensional predicted position distribution/sentence embedding for the sentence s.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Ordering",
"sec_num": "2.3.1"
},
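Equation (1) and the ordering it induces can be sketched directly (the PPD values below are toy numbers for illustration, not model outputs):

```python
# Weighted average predicted quantile (Eq. 1), with 1-based quantile
# indices, and the sentence reordering it induces.
def weighted_quantile(ppd):
    return sum(i * p for i, p in enumerate(ppd, start=1))

def induce_order(ppds):
    """Return sentence indices sorted by weighted average predicted position."""
    return sorted(range(len(ppds)), key=lambda s: weighted_quantile(ppds[s]))

# three shuffled sentences with toy PPDs over Q = 4 quantiles
ppds = [[0.1, 0.2, 0.3, 0.4],   # mass on the late quantiles
        [0.7, 0.2, 0.1, 0.0],   # mass on the first quantile
        [0.2, 0.4, 0.3, 0.1]]   # mass in the middle
order = induce_order(ppds)      # order == [1, 2, 0]
```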
{
"text": "To calculate the coherence of a text, we employ the following simple algorithm on top of the PPDs: use the Kendall's tau coefficient between the sentence ordering induced by the weighted average predicted sentence positions and the true sentence positions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating coherence",
"sec_num": "2.3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "coh = \u03c4 ((Q(s), for s = S 1 , ..., S |S| ), (1, ..., |S|)).",
"eq_num": "(2)"
}
],
"section": "Calculating coherence",
"sec_num": "2.3.2"
},
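Equation (2) can likewise be sketched; Kendall's tau is implemented here in its simplest pairwise form (ties counted as discordant), which is an assumption about the exact variant used:

```python
# Coherence (Eq. 2): Kendall's tau between the ordering induced by
# weighted average predicted positions and the true sentence order.
def weighted_quantile(ppd):
    return sum(i * p for i, p in enumerate(ppd, start=1))

def kendall_tau(xs, ys):
    # simple pairwise tau; ties are counted as discordant for brevity
    pairs = [(i, j) for i in range(len(xs)) for j in range(i + 1, len(xs))]
    s = sum(1 if (xs[i] - xs[j]) * (ys[i] - ys[j]) > 0 else -1
            for i, j in pairs)
    return s / len(pairs)

def coherence(ppds):
    predicted = [weighted_quantile(p) for p in ppds]
    return kendall_tau(predicted, list(range(len(ppds))))

# a perfectly coherent toy document: predicted positions increase
ppds = [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]]
# coherence(ppds) == 1.0
```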
{
"text": "3 Experiments",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating coherence",
"sec_num": "2.3.2"
},
{
"text": "In this section, we evaluate our PPD-based approaches on two coherence tasks and demonstrate that only minimal performance is given up by our approach to providing more insightful sentence embeddings. (Chollet et al., 2015) .",
"cite_spans": [
{
"start": 201,
"end": 223,
"text": "(Chollet et al., 2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating coherence",
"sec_num": "2.3.2"
},
{
"text": "Order discrimination setup. For order discrimination, we use the Accidents and Earthquakes datasets from (Barzilay and Lapata, 2008) which consists of aviation accident reports and news articles related to earthquakes respectively. The task is to determine which of a permuted Table 2 : Results on the order discrimination and sentence reordering coherence tasks. Our approach trades only a small decrease in performance for improved utility of the sentence embeddings over other approaches, achieving close to or the same as the state-of-the-art.",
"cite_spans": [
{
"start": 105,
"end": 132,
"text": "(Barzilay and Lapata, 2008)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 277,
"end": 284,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Calculating coherence",
"sec_num": "2.3.2"
},
{
"text": "ordering of the sentences and the original ordering is the most coherent (in the original order), for twenty such permutations. Since these datasets only contain training and testing partitions, we follow (Li and Hovy, 2014) and perform 10-fold cross-validation for hyperparameter tuning. Performance is measured with the accuracy with which the permuted sentences are identified. For example, the Entity Grid baseline in Table 2 gets 90.4% accuracy because given a shuffled report and original report, it correctly classifies them 90.4% of the time. Sentence ordering setup. For sentence ordering, we use past NeurIPS abstracts to compare with previous works. While our validation and test partitions are nearly identical to those from (Logeswaran et al., 2018), we use a publicly available dataset 2 which is missing the years 2005, 2006, and 2007 from the training set ( (Logeswaran et al., 2018) collected data from 2005 -2013). Abstracts from 2014 are used for validation, and 2015 is used for testing. To measure performance, we report both reordered sentence position accuracy as well as Kendall's rank correlation coefficient. For example, the Random baseline correctly predicts the index of sentences 15.6% of the time, but there is no correlation between the predicted ordering and true ordering, so \u03c4 = 0.",
"cite_spans": [
{
"start": 205,
"end": 224,
"text": "(Li and Hovy, 2014)",
"ref_id": "BIBREF25"
},
{
"start": 874,
"end": 899,
"text": "(Logeswaran et al., 2018)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 422,
"end": 429,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Calculating coherence",
"sec_num": "2.3.2"
},
{
"text": "Training and tuning. Hyperparameter tuning for both tasks is done with a random search, choosing the hyperparameter set with the best validation score averaged across the 10 folds for or-der discrimination dataset and for three trials for the sentence reordering task. The final hyperparameters chosen are in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 309,
"end": 316,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Calculating coherence",
"sec_num": "2.3.2"
},
{
"text": "Baselines. We compare our results against a random baseline, the traditional Entity Grid approach from (Barzilay and Lapata, 2008) , Window network (Li and Hovy, 2014) , LSTM+PtrNet (Gong et al., 2016) , RNN Decoder and Varient-LSTM+PtrNet from (Logeswaran et al., 2018) , and the most recent state-of-the art ATTOrderNet (Cui et al., 2018) .",
"cite_spans": [
{
"start": 103,
"end": 130,
"text": "(Barzilay and Lapata, 2008)",
"ref_id": "BIBREF10"
},
{
"start": 148,
"end": 167,
"text": "(Li and Hovy, 2014)",
"ref_id": "BIBREF25"
},
{
"start": 182,
"end": 201,
"text": "(Gong et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 245,
"end": 270,
"text": "(Logeswaran et al., 2018)",
"ref_id": "BIBREF28"
},
{
"start": 322,
"end": 340,
"text": "(Cui et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating coherence",
"sec_num": "2.3.2"
},
{
"text": "Results. Results for both coherence tasks are collected in Table 2 . For the order discrimination task, we find that on both datasets, our PPD-based approach only slightly underperforms ATTOrder-Net (Cui et al., 2018) , with performance similar to the LSTM+PtrNet approaches (Gong et al., 2016; Logeswaran et al., 2018) . On the more difficult sentence reordering task, our approach exhibits performance closer to the state-of-the-art, achieving the same ranking correlation and only slightly lower positional accuracy. Given that the publicly available training set for the reordering task is slightly smaller than that used in previous work, it is possible that more data would allow our approach to achieve even better performance. In the next section we will discuss the real-world value offered by our approach that is largely missing from existing approaches.",
"cite_spans": [
{
"start": 199,
"end": 217,
"text": "(Cui et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 275,
"end": 294,
"text": "(Gong et al., 2016;",
"ref_id": "BIBREF18"
},
{
"start": 295,
"end": 319,
"text": "Logeswaran et al., 2018)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 59,
"end": 66,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Calculating coherence",
"sec_num": "2.3.2"
},
{
"text": "A primary benefit of applying PPDs to coherencerelated tasks is the ability to gain deeper insights into the data. In this section, we will demon- Figure 4 :",
"cite_spans": [],
"ref_spans": [
{
"start": 147,
"end": 155,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Actionable Insights",
"sec_num": "4"
},
{
"text": "The PPDs for a CNN article.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Actionable Insights",
"sec_num": "4"
},
{
"text": "(full text available at http://web. archive.org/web/20150801040019id_/http://www.cnn.com/2015/03/13/us/ tulane-bacteria-exposure/). The dashed line shows the weighted average predicted sentence positions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Actionable Insights",
"sec_num": "4"
},
{
"text": "strate the following in particular: (1) how PPDs can quickly be used to understand how the coherence of a text may be improved, (2) how the existence of multiple coherence subsections may be identified, and (3) how PPDs can allow users to locate specific types of information without reading a single word, a specific case of which is extractive summarization. For demonstrations, we will use the news article presented in Figure 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 423,
"end": 431,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Actionable Insights",
"sec_num": "4"
},
{
"text": "For a writer to improve their work, understanding the incoherence present is important. Observing the PPD sequence for the article in Figure 4 makes it easy to spot areas of potential incoherence: they occur where consecutive PPDs are significantly different (from sentences 1 to 2, 6 to 7, and 10 to 11). In this case, the writer may determine that sentence 2 is perhaps not as introductory as it should be. The predicted incoherence between sentences 10 and 11 is more interesting, and as we will see next, the writer may realize that this incoherence may be okay to retain.",
"cite_spans": [],
"ref_spans": [
{
"start": 134,
"end": 142,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Improving Coherence",
"sec_num": "4.1"
},
{
"text": "In Figure 4 , we see rough progressions of introductory-type sentences to conclusory-type sentences between sentences 1 and 10 and sentences 11 and 15. This may indicate that the article is actually composed of two coherent subsections, which means that the incoherence between sentences 10 and 11 is expected and natural. By being able to understand where subsections may occur in a document, a writer can make informed decisions on where to split a long text into more coherent chunks or paragraphs. Knowing where approximate borders between ideas in a document exist may also help readers skim the document to find desired information more quickly, as further discussed in the next subsection.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Identifying Subsections",
"sec_num": "4.2"
},
{
"text": "When reading a new article, readers well-versed in the subject of the article may want to skip highlevel introductory comments and jump straight to the details. For those unfamiliar with the content or triaging many articles, this introductory information is important to determine the subject matter. Using PPDs, locating these types of information quickly should be easy for readers, even when the document has multiple potential subsections. In Figure 4 , sentences 1 and 11 likely contain introductory information (since the probability of occurring in the first quantiles is highest), the most conclusory-type information is in sentence 10, and lower-level details are likely spread among the remaining sentences. Locating sentences with the high-level details of a document is reminiscent of the task of extractive summarization, where significant research has been performed (Nenkova et al., 2011; Nenkova and McKeown, 2012) . It is thus natural to ask how well a simple PPD-based approach performs Table 3 : ROUGE scores on the CNN/DailyMail summarization task. Our PPD-based heuristic outperforms the suite of established heuristic summarizers. However, the higher performance of the deeplearning models demonstrates that training explicitly for summarization is beneficial. at summarization. To answer this question, the summarization algorithm we will use is: select the n sentences with the highest P P D(s \u2208 S) 0 value, where S is the article being extractively summarized down to n sentences. For the article in Figure 4, sentences 1, 11 , and 3 would be chosen since they have the highest first-quantile probabilities. This heuristic is conceptually similar to the Lead heuristic, where sentences that actually occur at the start of the document are chosen to be in the summary. Despite its simplicity, the Lead heuristic often achieves near state-of-the-art results .",
"cite_spans": [
{
"start": 882,
"end": 904,
"text": "(Nenkova et al., 2011;",
"ref_id": "BIBREF33"
},
{
"start": 905,
"end": 931,
"text": "Nenkova and McKeown, 2012)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 448,
"end": 456,
"text": "Figure 4",
"ref_id": null
},
{
"start": 1006,
"end": 1013,
"text": "Table 3",
"ref_id": null
},
{
"start": 1526,
"end": 1551,
"text": "Figure 4, sentences 1, 11",
"ref_id": null
}
],
"eq_spans": [],
"section": "Locating Information and Summarization",
"sec_num": "4.3"
},
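The selection rule above (take the n sentences with the highest first-quantile probability) can be sketched in a few lines. This is a hedged illustration, not the authors' code: the `ppds` matrix is a hypothetical input that would normally come from the trained sentence position model.

```python
def ppd_summarize(sentences, ppds, n=3):
    """Select the n sentences with the highest first-quantile probability,
    i.e. those most introductory-like according to their PPDs.

    sentences: list of sentence strings
    ppds: per-sentence distributions over Q quantiles; ppds[i][0] is
          sentence i's probability of falling in the first quantile
    """
    # Rank sentence indices by first-quantile probability, descending
    ranked = sorted(range(len(sentences)), key=lambda i: ppds[i][0], reverse=True)
    chosen = sorted(ranked[:n])  # restore document order for readability
    return [sentences[i] for i in chosen]

# Toy example with made-up PPDs over Q = 4 quantiles
sents = ["Intro A.", "Detail B.", "Detail C.", "Conclusion D."]
ppds = [[0.70, 0.10, 0.10, 0.10],
        [0.10, 0.60, 0.20, 0.10],
        [0.20, 0.30, 0.40, 0.10],
        [0.05, 0.10, 0.15, 0.70]]
print(ppd_summarize(sents, ppds, n=2))  # ['Intro A.', 'Detail C.']
```

Note the final `sorted` call: the summary keeps the sentences in their original document order, as the paper's Figure 4 example (sentences 1, 11, and 3) implies ranking rather than reordering.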
{
"text": "We experiment on the non-anonymized CNN/DailyMail dataset (Hermann et al., 2015) and evaluate with full-length ROUGE-1, -2, and -L F1 scores (Lin and Hovy, 2003) . For the neural position model, we choose four promising sets of hyperparameters identified during the hyperparameter search for the sentence ordering task in Section 3 and train each sentence position model on 10K of the 277K training articles (which provides our sentence position model with over 270K sentences to train on). Test results are reported for the model with the highest validation score. The final hyperparameters chosen for this sentence location model are: Q = 10, epochs = 10, layer dropouts = (0.4, 0.2), layer widths = (512, 64).",
"cite_spans": [
{
"start": 58,
"end": 80,
"text": "(Hermann et al., 2015)",
"ref_id": "BIBREF20"
},
{
"start": 141,
"end": 161,
"text": "(Lin and Hovy, 2003)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Locating Information and Summarization",
"sec_num": "4.3"
},
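For intuition about the evaluation metric, a bare-bones ROUGE-1 F1 computation looks as follows. This is a simplified sketch of unigram-overlap scoring, not the official ROUGE implementation used in the paper (which additionally handles stemming, tokenization details, and multiple references).

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """Minimal ROUGE-1 F1: clipped unigram overlap between a candidate
    summary and a reference summary, both given as plain strings."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # multiset intersection = clipped counts
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("the cat sat on the mat", "the cat lay on the mat"), 3))  # 0.833
```

ROUGE-2 follows the same pattern over bigrams, and ROUGE-L replaces the overlap count with the length of the longest common subsequence.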
{
"text": "We compare our PPD-based approach to other heuristic approaches 3 . For completeness, we also include results of deep-learning-based approaches and their associated Lead baselines, evaluated using full-length ROUGE scores on the non-anonymized CNN/DailyMail dataset. Table 3 contains the comparison between our PPD-based summarizer and several established heuristic summarizers. We observe that our model has ROUGE scores superior to those of the other heuristic approaches by a margin of approximately 2 points for ROUGE-1 and -L and 1 point for ROUGE-2. In contrast, the deep-learning approaches trained explicitly for summarization achieve even higher scores, suggesting that there is more to a good summary than the sentences simply being introductory-like.",
"cite_spans": [],
"ref_spans": [
{
"start": 266,
"end": 273,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Locating Information and Summarization",
"sec_num": "4.3"
},
{
"text": "Extensive research has been done on text coherence, motivated by the downstream utility of coherence models. In addition to the applications we demonstrate in Section 4, established applications include determining the readability of a text (coherent texts are easier to read) (Barzilay and Lapata, 2008) , refinement of multi-document summaries (Barzilay and Elhadad, 2002) , and essay scoring (Farag et al., 2018) .",
"cite_spans": [
{
"start": 273,
"end": 300,
"text": "(Barzilay and Lapata, 2008)",
"ref_id": "BIBREF10"
},
{
"start": 342,
"end": 370,
"text": "(Barzilay and Elhadad, 2002)",
"ref_id": "BIBREF9"
},
{
"start": 391,
"end": 411,
"text": "(Farag et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Traditional approaches to coherence modelling utilize established theory and handcrafted linguistic features (Grosz et al., 1995; Lapata, 2003) . The Entity Grid model (Lapata and Barzilay, 2005; Barzilay and Lapata, 2008) is an influential traditional approach which works by first constructing a sentence \u00d7 discourse-entity (noun phrase) occurrence matrix, keeping track of the syntactic role of each entity in each sentence. Sentence transition probabilities are then calculated from this representation and used as a feature vector input to an SVM classifier trained to rank sentences on coherence.",
"cite_spans": [
{
"start": 106,
"end": 126,
"text": "(Grosz et al., 1995;",
"ref_id": "BIBREF19"
},
{
"start": 127,
"end": 140,
"text": "Lapata, 2003)",
"ref_id": "BIBREF23"
},
{
"start": 165,
"end": 192,
"text": "(Lapata and Barzilay, 2005;",
"ref_id": "BIBREF24"
},
{
"start": 193,
"end": 219,
"text": "Barzilay and Lapata, 2008)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
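The Entity Grid construction described above can be sketched concretely. This is a deliberately simplified illustration: the full model distinguishes syntactic roles (subject, object, other), whereas this sketch only tracks whether an entity is present in a sentence, and it assumes entity mentions per sentence are already extracted.

```python
from collections import Counter

def entity_grid(sent_entities):
    """Build a simplified entity grid: rows are sentences, columns are
    entities, and a cell is 'X' if the entity is mentioned in that
    sentence, '-' otherwise."""
    entities = sorted({e for sent in sent_entities for e in sent})
    grid = [['X' if e in sent else '-' for e in entities] for sent in sent_entities]
    return grid, entities

def transition_probs(grid):
    """Probabilities of length-2 transitions down each entity column,
    e.g. 'X-' means an entity appears in one sentence but not the next.
    These form the feature vector fed to the coherence ranker."""
    counts = Counter()
    for col in zip(*grid):
        for a, b in zip(col, col[1:]):
            counts[a + b] += 1
    total = sum(counts.values())
    return {t: counts.get(t, 0) / total for t in ('XX', 'X-', '-X', '--')}

# Toy document: entity mentions per sentence (hypothetical extraction output)
doc = [{'judge', 'trial'}, {'judge'}, {'verdict'}]
grid, entities = entity_grid(doc)
print(transition_probs(grid))
```

In the full model, longer transitions and role labels yield a richer feature vector, but the idea is the same: coherent texts tend to keep salient entities in focus across adjacent sentences, which shows up as a high 'XX' probability.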
{
"text": "Newer methods utilizing neural networks and deep learning can be grouped by whether they indirectly or directly produce an ordering given an unordered set of sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Indirect ordering. Approaches in the indirect case include the Window network (Li and Hovy, 2014) , the Pairwise Ranking Model , the deep coherence model of (Cui et al., 2017) , and the discriminative model of (Li and Jurafsky, 2017) . These approaches are trained to take a set of sentences (anywhere from two or three (Li and Hovy, 2014) to the whole text (Cui et al., 2017; Li and Jurafsky, 2017) ) and predict whether the component sentences are already in a coherent order. A final ordering of sentences is constructed by maximizing the coherence of sentence subsequences.",
"cite_spans": [
{
"start": 74,
"end": 93,
"text": "(Li and Hovy, 2014)",
"ref_id": "BIBREF25"
},
{
"start": 151,
"end": 169,
"text": "(Cui et al., 2017)",
"ref_id": "BIBREF15"
},
{
"start": 206,
"end": 229,
"text": "(Li and Jurafsky, 2017)",
"ref_id": "BIBREF26"
},
{
"start": 316,
"end": 335,
"text": "(Li and Hovy, 2014)",
"ref_id": "BIBREF25"
},
{
"start": 354,
"end": 372,
"text": "(Cui et al., 2017;",
"ref_id": "BIBREF15"
},
{
"start": 373,
"end": 395,
"text": "Li and Jurafsky, 2017)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Direct ordering. Approaches in the direct case include (Gong et al., 2016; Logeswaran et al., 2018; Cui et al., 2018) . These models are trained to take a set of sentences, encode them using some technique, and, with a recurrent neural network decoder, output the order in which the sentences would coherently occur.",
"cite_spans": [
{
"start": 55,
"end": 74,
"text": "(Gong et al., 2016;",
"ref_id": "BIBREF18"
},
{
"start": 75,
"end": 99,
"text": "Logeswaran et al., 2018;",
"ref_id": "BIBREF28"
},
{
"start": 100,
"end": 117,
"text": "Cui et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Models in these two groups all use similar high-level architectures: a recurrent or convolutional sentence encoder, an optional paragraph encoder, and then either predicting coherence from that encoding or iteratively reconstructing the ordering of the sentences. The PPD-based approaches described in Section 2 take a novel route of directly predicting location information for each sentence. Our approaches are thus similar to the direct approaches in that position information is directly obtained (here, in the PPDs); however, the position information produced by our model is much richer than simply the index of the sentence in the new ordering. With the indirect ordering approaches, our approach to coherence modelling shares the property that an ordering is induced upon the sentences only after examining all of the sentence embeddings and explicitly arranging them in the most coherent fashion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
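One simple way to induce an ordering after seeing all of the sentence embeddings, as described above, is to sort sentences by the expected position implied by their PPDs. This is an illustrative sketch consistent with the visual heuristics the paper describes, not necessarily the exact arrangement procedure used in the experiments.

```python
def order_by_expected_quantile(ppds):
    """Order sentence indices by the expected quantile implied by each
    sentence's PPD: E[q] = sum_q q * PPD[q]. Sentences whose probability
    mass sits in early quantiles come first."""
    expected = [sum(q * p for q, p in enumerate(ppd)) for ppd in ppds]
    return sorted(range(len(ppds)), key=lambda i: expected[i])

# Shuffled 3-sentence document, PPDs over Q = 3 quantiles (made-up values)
ppds = [[0.1, 0.2, 0.7],   # conclusion-like
        [0.8, 0.1, 0.1],   # introduction-like
        [0.2, 0.6, 0.2]]   # middle-like
print(order_by_expected_quantile(ppds))  # [1, 2, 0]
```

Because the full distribution is available, ties and ambiguous sentences can also be resolved by inspecting the shape of each PPD rather than a single predicted index.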
{
"text": "The ability to facilitate deeper understanding of texts is an important, but recently ignored, property for coherence modelling approaches. In an effort to improve this situation, we present a self-supervised approach to learning sentence embeddings, which we call PPDs, that relies on the connection between the meaning of a sentence and its location in a text. We implement the new sentence embedding technique with a recurrent neural network trained to map a sentence to a discrete distribution indicating where in the text the sentence is likely located. These PPDs have the useful property that a high probability in a given quantile indicates that the sentence is typical of sentences that would occur at the corresponding location in the text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
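The self-supervision signal described above comes for free from sentence positions: each sentence's training target is the quantile of the document in which it appears. A minimal sketch of this label construction, assuming Q quantiles and one-hot targets (a plausible reading of the setup; the trained network then outputs a full distribution over these quantiles):

```python
def position_quantile(i, n_sentences, Q=10):
    """Map sentence index i (0-based) in an n-sentence document to its
    quantile in [0, Q-1] -- the self-supervision label for that sentence."""
    return min(Q - 1, i * Q // n_sentences)

def quantile_targets(n_sentences, Q=10):
    """One-hot target distributions for every sentence position in a
    document, suitable as labels for the sentence position model."""
    targets = []
    for i in range(n_sentences):
        t = [0.0] * Q
        t[position_quantile(i, n_sentences, Q)] = 1.0
        targets.append(t)
    return targets

# A 4-sentence document with Q = 4 puts one sentence in each quantile
print([t.index(1.0) for t in quantile_targets(4, Q=4)])  # [0, 1, 2, 3]
```

No manual annotation is needed: any corpus of documents with their original sentence order supplies unlimited (sentence, quantile) training pairs.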
{
"text": "We demonstrate how these PPDs can be applied to coherence tasks with algorithms simple enough that they can be performed visually by users, while achieving near state-of-the-art performance and outperforming more complex and specialized systems. We also demonstrate how PPDs can be used to obtain various insights into data, including how to go about improving the writing, how to identify potential subsections, and how to locate specific types of information, such as introductory or summary information. As a proof of concept, we additionally show that despite PPDs not being designed for the task, they can be used to create a heuristic summarizer which outperforms comparable heuristic summarizers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "In future work, it would be valuable to evaluate our approach on texts from a wider array of domains and with different sources of incoherence. In particular, raw texts identified by humans as lacking coherence could be examined to determine how well our model correlates with human judgment. Exploring how the algorithms utilizing PPDs may be refined for improved performance on the wide variety of coherence-related tasks may also prove fruitful. We are also interested in examining how PPDs may assist with other NLP tasks such as text generation or author identification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Available online at https://fasttext.cc/docs/en/english-vectors.html. We used the wiki-news-300d-1M vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.kaggle.com/benhamner/ nips-papers",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Implementations provided by the Sumy library, available at https://pypi.python.org/pypi/sumy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) through the Discovery Grants Program. NSERC invests annually over $1 billion in people, discovery and innovation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Model (lead baseline source) ROUGE-1 ROUGE-2 ROUGE-L Lead-3",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Model (lead baseline source) ROUGE-1 ROUGE-2 ROUGE-L Lead-3 (Nallapati et al., 2017) 39.2 15.7 35.5",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Inferring strategies for sentence ordering in multidocument news summarization",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "References",
"suffix": ""
},
{
"first": "Noemie",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Elhadad",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of Artificial Intelligence Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Noemie Elhadad. 2002. Inferring strategies for sentence ordering in multidocument news summarization. Journal of Artificial Intelligence Research.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Modeling local coherence: An entity-based approach",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "1",
"pages": "1--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics 34(1):1-34.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Neural sentence ordering",
"authors": [
{
"first": "Xinchi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1607.06952"
]
},
"num": null,
"urls": [],
"raw_text": "Xinchi Chen, Xipeng Qiu, and Xuanjing Huang. 2016. Neural sentence ordering. arXiv preprint arXiv:1607.06952.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Deep attentive sentence ordering network",
"authors": [
{
"first": "Baiyun",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Yingming",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhongfei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baiyun Cui, Yingming Li, Ming Chen, and Zhongfei Zhang. 2018. Deep attentive sentence ordering network. In Proceedings of the 2018",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Conference on Empirical Methods in Natural Language Processing",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "4340--4349",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 4340-4349. http://aclweb.org/anthology/D18-1465.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Text coherence analysis based on deep neural network",
"authors": [
{
"first": "Baiyun",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Yingming",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yaqing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhongfei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 ACM on Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "2027--2030",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baiyun Cui, Yingming Li, Yaqing Zhang, and Zhongfei Zhang. 2017. Text coherence analysis based on deep neural network. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. ACM, pages 2027-2030.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Lexrank: Graph-based lexical centrality as salience in text summarization",
"authors": [
{
"first": "G\u00fcnes",
"middle": [],
"last": "Erkan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dragomir R Radev",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of Artificial Intelligence Research",
"volume": "22",
"issue": "",
"pages": "457--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G\u00fcnes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research 22:457-479.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Neural automated essay scoring and coherence modeling for adversarially crafted input",
"authors": [
{
"first": "Youmna",
"middle": [],
"last": "Farag",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.06898"
]
},
"num": null,
"urls": [],
"raw_text": "Youmna Farag, Helen Yannakoudakis, and Ted Briscoe. 2018. Neural automated essay scoring and coherence modeling for adversarially crafted input. arXiv preprint arXiv:1804.06898.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "End-to-end neural sentence ordering using pointer network",
"authors": [
{
"first": "Jingjing",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Xinchi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.04953"
]
},
"num": null,
"urls": [],
"raw_text": "Jingjing Gong, Xinchi Chen, Xipeng Qiu, and Xuanjing Huang. 2016. End-to-end neural sentence ordering using pointer network. arXiv preprint arXiv:1611.04953.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Centering: A framework for modeling the local coherence of discourse",
"authors": [
{
"first": "J",
"middle": [],
"last": "Barbara",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Grosz",
"suffix": ""
},
{
"first": "Aravind K",
"middle": [],
"last": "Weinstein",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "21",
"issue": "",
"pages": "203--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara J Grosz, Scott Weinstein, and Aravind K Joshi. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics 21(2):203-225.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Teaching machines to read and comprehend",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Moritz Hermann",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Kocisky",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Lasse",
"middle": [],
"last": "Espeholt",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Kay",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Suleyman",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1693--1701",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems. pages 1693-1701.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Bag of tricks for efficient text classification",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1607.01759"
]
},
"num": null,
"urls": [],
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Probabilistic text structuring: Experiments with sentence ordering",
"authors": [
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "545--552",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mirella Lapata. 2003. Probabilistic text structuring: Experiments with sentence ordering. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1. Association for Computational Linguistics, pages 545-552.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Automatic evaluation of text coherence: Models and representations",
"authors": [
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2005,
"venue": "IJCAI",
"volume": "5",
"issue": "",
"pages": "1085--1090",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mirella Lapata and Regina Barzilay. 2005. Automatic evaluation of text coherence: Models and representations. In IJCAI. volume 5, pages 1085-1090.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A model of coherence based on distributed sentence representation",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "2039--2048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li and Eduard Hovy. 2014. A model of coherence based on distributed sentence representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 2039-2048.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Neural net models of open-domain discourse coherence",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "198--209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li and Dan Jurafsky. 2017. Neural net models of open-domain discourse coherence. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pages 198-209.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Automatic evaluation of summaries using n-gram cooccurrence statistics",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology",
"volume": "1",
"issue": "",
"pages": "71--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1. Association for Computational Linguistics, pages 71-78.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Sentence ordering and coherence modeling using recurrent neural networks",
"authors": [
{
"first": "Lajanugen",
"middle": [],
"last": "Logeswaran",
"suffix": ""
},
{
"first": "Honglak",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2018,
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lajanugen Logeswaran, Honglak Lee, and Dragomir Radev. 2018. Sentence ordering and coherence modeling using recurrent neural networks. In Thirty-Second AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "The automatic creation of literature abstracts",
"authors": [
{
"first": "Hans",
"middle": [],
"last": "Peter Luhn",
"suffix": ""
}
],
"year": 1958,
"venue": "IBM Journal of research and development",
"volume": "2",
"issue": "2",
"pages": "159--165",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hans Peter Luhn. 1958. The automatic creation of literature abstracts. IBM Journal of Research and Development 2(2):159-165.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Textrank: Bringing order into text",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Tarau",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Summarunner: A recurrent neural network based sequence model for extractive summarization of documents",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Feifei",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2017,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "3075--3081",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In AAAI. pages 3075-3081.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A survey of text summarization techniques",
"authors": [
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2012,
"venue": "Mining text data",
"volume": "",
"issue": "",
"pages": "43--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ani Nenkova and Kathleen McKeown. 2012. A survey of text summarization techniques. In Mining Text Data, Springer, pages 43-76.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Automatic summarization",
"authors": [
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2011,
"venue": "Foundations and Trends R in Information Retrieval",
"volume": "5",
"issue": "2-3",
"pages": "103--233",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ani Nenkova, Kathleen McKeown, et al. 2011. Automatic summarization. Foundations and Trends in Information Retrieval 5(2-3):103-233.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "The impact of frequency on summarization. Microsoft Research",
"authors": [
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ani Nenkova and Lucy Vanderwende. 2005. The impact of frequency on summarization. Microsoft Research, Redmond, Washington, Tech. Rep. MSR-TR-2005-101.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "A deep reinforced model for abstractive summarization",
"authors": [
{
"first": "Romain",
"middle": [],
"last": "Paulus",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.04304"
]
},
"num": null,
"urls": [],
"raw_text": "Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Bidirectional recurrent neural networks",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Kuldip",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Paliwal",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE Transactions on Signal Processing",
"volume": "45",
"issue": "11",
"pages": "2673--2681",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45(11):2673-2681.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Get to the point: Summarization with pointergenerator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1073--1083",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 1073-1083.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Argumentative zoning: Information extraction from scientific text",
"authors": [
{
"first": "Simone",
"middle": [],
"last": "Teufel",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simone Teufel et al. 1999. Argumentative zoning: Information extraction from scientific text. Ph.D. thesis, Citeseer.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Illustration of the sentence position model, consisting of stacked BiLSTMs.",
"num": null
},
"TABREF1": {
"html": null,
"type_str": "table",
"text": "The neural sentence position model hyperparameters used in our coherence experiments. The following settings are used across all tasks: batch size of 32, sentence trimming/padding to a length of 25 words, and a vocabulary set to the 1000 most frequent words in the associated training set. The Adamax optimizer is used (Kingma and Ba, 2014) with the default parameters supplied by Keras.",
"content": "<table/>",
"num": null
}
}
}
}