{
"paper_id": "D15-1014",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:26:31.905005Z"
},
"title": "Indicative Tweet Generation: An Extractive Summarization Problem?",
"authors": [
{
"first": "Priya",
"middle": [],
"last": "Sidhaye",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "McGill University Montreal",
"location": {
"postCode": "H3A 0E9",
"region": "QC",
"country": "Canada"
}
},
"email": "priya.sidhaye@mail.mcgill.ca"
},
{
"first": "Jackie",
"middle": [
"Chi",
"Kit"
],
"last": "Cheung",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "McGill University Montreal",
"location": {
"postCode": "H3A 0E9",
"region": "QC",
"country": "Canada"
}
},
"email": "jcheung@cs.mcgill.ca"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Social media such as Twitter have become an important method of communication, with potential opportunities for NLG to facilitate the generation of social media content. We focus on the generation of indicative tweets that contain a link to an external web page. While it is natural and tempting to view the linked web page as the source text from which the tweet is generated in an extractive summarization setting, it is unclear to what extent actual indicative tweets behave like extractive summaries. We collect a corpus of indicative tweets with their associated articles and investigate to what extent they can be derived from the articles using extractive methods. We also consider the impact of the formality and genre of the article. Our results demonstrate the limits of viewing indicative tweet generation as extractive summarization, and point to the need for the development of a methodology for tweet generation that is sensitive to genre-specific issues.",
"pdf_parse": {
"paper_id": "D15-1014",
"_pdf_hash": "",
"abstract": [
{
"text": "Social media such as Twitter have become an important method of communication, with potential opportunities for NLG to facilitate the generation of social media content. We focus on the generation of indicative tweets that contain a link to an external web page. While it is natural and tempting to view the linked web page as the source text from which the tweet is generated in an extractive summarization setting, it is unclear to what extent actual indicative tweets behave like extractive summaries. We collect a corpus of indicative tweets with their associated articles and investigate to what extent they can be derived from the articles using extractive methods. We also consider the impact of the formality and genre of the article. Our results demonstrate the limits of viewing indicative tweet generation as extractive summarization, and point to the need for the development of a methodology for tweet generation that is sensitive to genre-specific issues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "With the rise in popularity of social media, message broadcasting sites such as Twitter and other microblogging services have become an important means of communication, with an estimated 500 million tweets being written every day 1 . In addition to individual users, various organizations and public figures such as newspapers, government officials and entertainers have established themselves on social media in order to disseminate information or promote their products.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While there has been recent progress in the development of Twitter-specific POS taggers, parsers, and other tweet understanding tools (Owoputi et al., 2013; Kong et al., 2014) , there has been little work on methods for generating tweets, despite the utility this would have for users and organizations.",
"cite_spans": [
{
"start": 134,
"end": 156,
"text": "(Owoputi et al., 2013;",
"ref_id": "BIBREF18"
},
{
"start": 157,
"end": 175,
"text": "Kong et al., 2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we study the generation of the particular class of tweets that contain a link to an external web page that is composed primarily of text. Given the short length of a tweet, the presence of a URL in the tweet is a strong signal that the tweet is functioning to help Twitter users decide whether to read the full article. This class of tweets, which we call indicative tweets, represents a large subset of tweets overall, constituting more than half of the tweets in our data set. Indicative tweets would appear to be the easiest to handle using current methods in text summarization, because there is a clear source of input from which a tweet could be generated. In effect, the tweet would be acting as an indicative summary of the article it is being linked to, and it would seem that existing methods in summarization can be applied. It should be noted that a tweet being indicative does not preclude it from also providing a critical evaluation of the linked article.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There has in fact been some work along these lines, within the framework of extractive summarization. Lofi and Krestel (2012) describe a system to generate tweets from local government records through keyphrase extraction. Lloret and Palomar (2013) compares various extractive summarization algorithms applied on Twitter data to generate tweets from documents.",
"cite_spans": [
{
"start": 102,
"end": 125,
"text": "Lofi and Krestel (2012)",
"ref_id": "BIBREF13"
},
{
"start": 223,
"end": 248,
"text": "Lloret and Palomar (2013)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Lofi and Krestel do not provide a formal evaluation of their model, while Lloret and Palomar compared overlap between system-generated and usergenerated tweets using ROUGE (Lin, 2004) . Unfortunately, they also show that there is little correlation between ROUGE scores and the perceived quality of the tweets when rated by human users for indicativeness and interest. More scrutiny is required to determine whether the wholesale adoption of methods and evaluation schemes from extractive summarization is justified.",
"cite_spans": [
{
"start": 172,
"end": 183,
"text": "(Lin, 2004)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Beyond issues of evaluation measures, it is also unclear whether extraction is the strategy employed by human tweeters. One of the original motivations behind extractive summarization was the observation that human summary writers tended to extract snippets of key phrases from the source text (Mani, 2001) . And while it may be true that an automatic tweet generation system need not necessarily follow the same approach to writing as human tweeters, it is still necessary to know what proportion of tweets could be accounted for in an extractive summarization paradigm.",
"cite_spans": [
{
"start": 294,
"end": 306,
"text": "(Mani, 2001)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "With indicative tweets, an additional issue arises in that the genre of the source text is not constrained; for example it may be a news article or an informal blog post. This may be vastly different from the desired formality of tweet itself, and thus, a genre-appropriate extract may not be available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We begin to address the above issues through a study that examines to what extent tweet generation can be viewed as an extractive summarization problem. We extracted a dataset of indicative tweets containing a link to an external article, including the documents linked to by the tweets. We used this data and applied unigram, bigram and LCS (longest common subsequence) matching techniques inspired by ROUGE to determine what proportion of tweets can be found in the linked article. Even with the permissive unigram match measure, we find that well under half of the tweet can be found in the linked article. We also use stylistic analysis on the articles to examine the role that genre differences between the source text and the target tweet play, and find that it is easier to extract tweets from more formal articles than less formal ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our results point to the need for the development of a methodology for indicative tweet generation, rather than to expropriate the extractive summarization paradigm that was developed mostly on news text. Such a methodology will ideally be sensitive to stylistic factors as well as the underlying intent of the tweet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There have been studies on a number of different issues related to Twitter data, including classifying tweets and sentiment analysis of tweets. Ghosh et al. (2011) classified the retweeting activity of users based on time intervals between retweets of a single user and frequency of retweets from unique users. 'Retweet' here means the occurrence of the same URL in a different tweet. The study was able to classify the retweeting as automatic or robotic retweeting, campaigns, news, blogs and so on, based on the time-interval and user-frequency distributions. In another study, Chen et al. (2012) were able to extract sentiment expressions from a corpus of tweets including both formal words and informal slang that bear sentiment.",
"cite_spans": [
{
"start": 144,
"end": 163,
"text": "Ghosh et al. (2011)",
"ref_id": "BIBREF6"
},
{
"start": 580,
"end": 598,
"text": "Chen et al. (2012)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Related Work",
"sec_num": "2"
},
{
"text": "Other studies using Twitter data include O'Connor et al. 2010, who use topic summarization for a given search for better browsing. Chakrabarti and Punera (2011) generate an event summary by learning about the event using a Hidden Markov Model over the tweets describing it. Wang et al. (2014) generate a coherent event summary by treating summarization as an optimization problem for topic cohesion. Inouye and Kalita (2011) compare multiple summarization techniques to generate a summary of multipost blogs on Twitter. Wei and Gao (2014) use tweets to help in generating better summaries of news articles.",
"cite_spans": [
{
"start": 131,
"end": 160,
"text": "Chakrabarti and Punera (2011)",
"ref_id": "BIBREF2"
},
{
"start": 274,
"end": 292,
"text": "Wang et al. (2014)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Related Work",
"sec_num": "2"
},
{
"text": "As described in Section 1, we analyze tweet generation using measures inspired by extractive summarization evaluation. There has been one study comparing different text summarization techniques for tweet generation by Lloret and Palomar (2013) . Summarization systems were used to generate sentences lesser than 140 characters in length by summarizing documents, which could then be taken to be tweets. The systemgenerated tweets were evaluated using ROUGE measures (Lin, 2004) . The ROUGE-1, ROUGE-2 and ROUGE-L measures were used, and a humanwritten reference tweet was taken to be the gold standard.",
"cite_spans": [
{
"start": 218,
"end": 243,
"text": "Lloret and Palomar (2013)",
"ref_id": "BIBREF12"
},
{
"start": 466,
"end": 477,
"text": "(Lin, 2004)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Related Work",
"sec_num": "2"
},
{
"text": "The limits of extractive summarization have been studied by He et al. (2000) . They compare user preferences for various types of summaries of an audio-visual presentation. They demonstrate that the most preferred method of summarization is highlights and notes provided by the author, rather than transcripts or slides from the presentation. Conroy et al. (2006) computed an oracle ROUGE score to investigate the same issue of the limits of extraction for news text.",
"cite_spans": [
{
"start": 60,
"end": 76,
"text": "He et al. (2000)",
"ref_id": "BIBREF8"
},
{
"start": 343,
"end": 363,
"text": "Conroy et al. (2006)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Related Work",
"sec_num": "2"
},
{
"text": "These studies show that extractive summarization algorithms may not generate good quality summaries despite giving high ROUGE evaluation scores. Cheung and Penn (2013) show that for the news genre, extractive summarization systems that are optimized for centrality-that is, getting the core parts of the text into the summarycannot perform well when compared to model summaries, since the model summaries are abstracted from the document to a large extent.",
"cite_spans": [
{
"start": 145,
"end": 167,
"text": "Cheung and Penn (2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Related Work",
"sec_num": "2"
},
{
"text": "As mentioned earlier, there have been numerous studies that used data from the public Twitter feeds. However, none of the datasets in those studies focused on tweets and related articles linked to these tweets. The dataset of Lloret and Palomar (2013) is an exception, as it contains tweets and the news articles they link to, but it only contains 200 English tweet-article pairs. Wei and Gao (2014) also constructed a dataset that contains both tweets and articles linked through them, but this data only deals with news text, and does not contain the variety of topics we wanted in the data. We therefore chose to build our own dataset. This section describes extraction, cleaning and other preprocessing of the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Twitter for Data Extraction",
"sec_num": "3.1"
},
{
"text": "Data was extracted from Twitter using the Twitter REST API using 51 search terms, or hashtags. These hashtags were chosen from a range of topics including pop culture, international summit meetings discussing political issues, lawsuits and trials, social issues and health care issues. All these hashtags were trending (being tweeted about at a high rate) at the time of extraction of the data. To get a broader sample, the data was extracted over the course of 15 days in November, 2014, which gave us multiple news stories to choose from for the search terms. The search terms were chosen so that there would be broad representation in terms of various stylistic properties of text like formality, subjectivity, etc. For example, searches related to politics would be more formal, while those related to films would be informal, and would also have a lot more opinion pieces about them. A few examples of the search terms and their distribution in genre are shown in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 969,
"end": 976,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Extracting Data",
"sec_num": "3.2"
},
{
"text": "We extracted about 30,000 tweets, of which more than half, or around 16,000, contained URLs to an external news article, photo on photo sharing sites, or video. The data from the tweets was cleaned by removing the tweets that were not in English as well as the retweets; i.e., re-publications of a tweet by a different user.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Data",
"sec_num": "3.2"
},
{
"text": "We deduplicated the 16,000 extracted URLs into 6,003 unique addressed, then extracted and preprocessed their contents. The newspaper package 2 was used to extract article text and the title from the web page. Since we are interested in text articles that can serve as the source text for summarization algorithms, we needed to remove photos and video links such as those from Instagram and YouTube. To do so, we removed those links that contained fewer than a threshold of 150 words. After this preprocessing, the number of useful articles was reduced from 6003 to 3066. There were some further tweet-article pairs where the text of the tweets was identical, these were removed by further preprocessing and the number of unique tweet-article pairs came down to 2471.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Politics",
"sec_num": null
},
{
"text": "The final version of the data consists of tweets along with other information about the tweet, such as links to articles, hashtags, time of publication, etc. We also retain the linked article text and preprocessed it using the CoreNLP toolkit (Manning et al., 2014) . This includes the URL itself and the text extracted from the article, as well as some extracted information such as sentence boundaries, POS tags for tokens, parse trees and dependency trees. These annotations are used later during our analysis in Section 4. Table 2 shows an example of an entry in the dataset.",
"cite_spans": [
{
"start": 243,
"end": 265,
"text": "(Manning et al., 2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 527,
"end": 534,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Politics",
"sec_num": null
},
{
"text": "Tweet '#RiggsReport: #CA as the #Election-Night exception. Voters rewarded #GOP nationally, but not in the #GoldenState. http://t.co/K542wvSNVz' Title 'The Riggs Report: California as the Election Night exception' Text 'When the dust settled on Election Night last week...' Table 2 : Example of a tweet, title of the article and the text.",
"cite_spans": [],
"ref_spans": [
{
"start": 274,
"end": 281,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Politics",
"sec_num": null
},
{
"text": "We now describe the analyses we performed on the data. Our goal is to investigate what proportion of the indicative tweets that we extracted can be found in the articles that they link to, in order to determine whether indicative tweet generation can be viewed as an extractive summarization problem. Table 3 gives an example of data where the tweet that was shared about the article does not come directly from the article text, while Table 4 shows a tweet that was almost entirely extracted from the text of the article, but changed a little for the purpose of readability.",
"cite_spans": [],
"ref_spans": [
{
"start": 301,
"end": 308,
"text": "Table 3",
"ref_id": null
},
{
"start": 436,
"end": 443,
"text": "Table 4",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4"
},
{
"text": "Tweet Are #Airlines doing enough with #Ebola? http://t.co/XExWwxmjnk #travel Title Could shortsighted airline refund policies lead to an outbreak? Text The deadly Ebola virus has arrived in the United States just in time for the holiday travel season, carrying fear and uncertainty with it... Table 3 : Example of a tweet, title of the article and the text when tweet cannot be extracted from text.",
"cite_spans": [],
"ref_spans": [
{
"start": 293,
"end": 300,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4"
},
{
"text": "We first compute the proportion of tweets that can be recovered directly from the article in its entirety (Section 4.1). Then, we calculate the degree of overlap in terms of unigrams and bigrams between the tweet and the text of the document (Sections 4.2, 4.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4"
},
{
"text": "In addition, we consider locality within the article when computing the overlap. For the unigram analysis, we performed a variant of the analysis, in which we computed the overlap within threesentence windows in the source article (Section 4.4). We also compute the least common subsequences between the tweet and the document (Section 4.5). This was done to investigate whether sentence compression techniques could be applied to local context windows to generate the tweet. These calculations are analogous to the ROUGE-1, -2 and -L style calculations. These results give an indication of the degree to which the tweet is extracted from the document text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4"
},
{
"text": "For all these analyses, the stop words have been eliminated from the tweet as well as the document, so that only the informative words are taken into consideration. The comparisons were made without lemmatization or stemming, to adhere closely to existing work in extractive summarization, where the only modifications to the source text are removing discourse cue words or removing words by sentence compression techniques. The hashtags, references (@) and URLs from the tweets were all removed for analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4"
},
{
"text": "We first checked for a complete substring match of the tweet in the text. Out of the 2471 unique instances of tweet and article pairs, a complete match was found only 23 times. In 9 cases out of these, the tweet text matched the title of the article, which our preprocessing tool did not correctly separate from the body of the article. In the other cases, the text of the tweet appears in its entirety inside the body of the article. This suggests that the user chose the sentence that either seemed to be the most conclusive contribution of the article, or expressed the opinion of the user to be tweeted. An example for this is detailed in Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 643,
"end": 650,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Exact Match Calculations",
"sec_num": "4.1"
},
{
"text": "Apart from the 9 times where the tweet was matched with title in the article, we also checked to see if the tweet text matched with the article titles that were separately extracted by the newspaper package in order to determine if tweets could be generated using the headline generation methods. We found that it did not match with the titles. However, even though there are no exact matches, there might still be matches where the tweet is a slight modification of the headline of the article, and can be measured using a partial match measure. Tweet @PNHP: 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exact Match Calculations",
"sec_num": "4.1"
},
{
"text": "Renounce punitive and counterproductive measures such as sealing the borders, http://t.co/LRLS2MhPRE #Ebola Title Physicians for a National Health Program Text As health professionals and trainees, we call on President Obama to take the following immediate steps to address the Ebola crisis... 6. Renounce punitive and counterproductive measures such as sealing the borders, and take steps to address the... Table 5 : Example where tweet is extracted as is from the text, matched portion in bold. ",
"cite_spans": [],
"ref_spans": [
{
"start": 408,
"end": 415,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Exact Match Calculations",
"sec_num": "4.1"
},
{
"text": "Next, we did a percentage match with the text of the article. This was a bag-of-words check using unigram overlap between the tweet and the document. Let unigrams(x) be the set of unigrams for some text x, then u, the percentage of matching unigrams found between a given tweet, t and a given article, a, can be defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Percentage Match for Unigrams",
"sec_num": "4.2"
},
{
"text": "u = |unigrams(t) \u2229 unigrams(a)| |unigrams(t)| * 100",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Percentage Match for Unigrams",
"sec_num": "4.2"
},
{
"text": "(1) Figure 1 shows the percentage of matches in the tweet and the article text as compared to the number of unigrams in the tweet. The mean match percentage is 29.53% and standard deviation is 20.2%. The mean of this distribution shows that the number of matched unigrams from a tweet in the article are fairly low. Figure 2 shows the number of articles with a certain number of matching unigrams. The graph shows that the most common number of unigrams matched was 2. The number of articles with higher unigrams matched goes on decreasing. The slight rise at the end -more than 10 matched unigrams -is accounted for by the completely matched tweets described above.",
"cite_spans": [],
"ref_spans": [
{
"start": 4,
"end": 12,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 316,
"end": 324,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Percentage Match for Unigrams",
"sec_num": "4.2"
},
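The unigram overlap of Equation (1) can be sketched as follows. This is a minimal illustration, not the authors' code: the function names and the simple whitespace tokenizer are assumptions, and the empty stop-word set stands in for the stop-word filtering described in Section 4.

```python
def unigrams(text, stopwords=frozenset()):
    """Lowercased unigram set of a text, with stop words removed."""
    return {w for w in text.lower().split() if w not in stopwords}

def unigram_match(tweet, article, stopwords=frozenset()):
    """Eq. (1): percentage of the tweet's unigrams that occur in the article."""
    t = unigrams(tweet, stopwords)
    a = unigrams(article, stopwords)
    if not t:
        return 0.0
    return len(t & a) / len(t) * 100
```

Per Section 4, no lemmatization or stemming is applied, so only exact word forms count as matches.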
{
"text": "Similar to the unigram matching techniques, the bigram percentage matching was also calculated. The text of the tweet was converted into bigrams and we then looked for those bigrams in the article text. The percentage was calculated similar Figure 4 shows frequency of the number of tweet-article pairs for the number of bigrams matched. There are no matched bigrams for most of the pairs. A smaller number of articles had one matched bigram, and the number decreased until the end, where it increases a little at more than 10 matched bigrams because of exact tweet matches.",
"cite_spans": [],
"ref_spans": [
{
"start": 241,
"end": 249,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Percentage Match for Bigrams",
"sec_num": "4.3"
},
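The bigram variant follows the same pattern, comparing sets of adjacent token pairs instead of single words. A minimal sketch with hypothetical names (`bigram_set`, `bigram_match` are not from the paper):

```python
def bigram_set(tokens):
    """Set of adjacent token pairs in a token list."""
    return set(zip(tokens, tokens[1:]))

def bigram_match(tweet, article):
    """Percentage of the tweet's bigrams that occur in the article text."""
    t = bigram_set(tweet.lower().split())
    a = bigram_set(article.lower().split())
    if not t:
        return 0.0
    return len(t & a) / len(t) * 100
```

Because bigrams require two words to appear adjacently and in order, this measure is stricter than the unigram one, which matches the much lower match counts reported here.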
{
"text": "The next analysis checks for a significant word matching inside a three-sentence window inside the article text. We used a three sentence long window using the sentence boundary information obtained during preprocessing. A window of three sentences was chosen to give a smaller context for the tweet to be extracted from than the entire article. The number was chosen as a moderate context window size as not too small to reduce it to a sentence level, and not too big for the context to be diluted. This was done to investigate whether a pseudo-extractive multi-sentence compression approach could convert a small number of sentences into a tweet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Percentage Match Inside a Window in the Article Text",
"sec_num": "4.4"
},
{
"text": "After the text of the window was extracted, we performed a similar analysis as the last one, except on a smaller set of sentences. The matching percentages from all three-sentence windows in the articles were computed and the maximum out of these was taken for the final results. Let a sentence window w i be the set of three consecutive sentences starting from the sentence number i. For this window, the unigram match in the tweet t, and the window is the unigram match u calculated above. Then, the maximum match from all the windows, uw is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Percentage Match Inside a Window in the Article Text",
"sec_num": "4.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "uw = max w i \u2208S u(t, w i )",
"eq_num": "(3)"
}
],
"section": "Percentage Match Inside a Window in the Article Text",
"sec_num": "4.4"
},
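The windowed maximum of Equation (3) can be sketched as below; a minimal illustration under stated assumptions (pre-tokenized sentences, stop words already removed), with `max_window_match` as a hypothetical name:

```python
def max_window_match(tweet_words, sentences, size=3):
    """Eq. (3): best unigram match over all `size`-sentence windows.

    `tweet_words` is a list of tweet tokens; `sentences` is a list of
    token lists, one per article sentence.
    """
    t = set(tweet_words)
    if not t or not sentences:
        return 0.0
    best = 0.0
    # Slide a window of `size` consecutive sentences over the article.
    for i in range(max(1, len(sentences) - size + 1)):
        window = {w for sent in sentences[i:i + size] for w in sent}
        best = max(best, len(t & window) / len(t) * 100)
    return best
```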
{
"text": "The result from this experiment is shown in Figure 5 . Here, the mean of the values is 26.6% and deviation 17%. Again this shows that only a small proportion of tweets can be generated even with an approach that combines unigrams from multiple sentences in the article.",
"cite_spans": [],
"ref_spans": [
{
"start": 44,
"end": 52,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Percentage Match Inside a Window in the Article Text",
"sec_num": "4.4"
},
{
"text": "The percentage match analyses were a bag-ofwords approach that disregarded the order of the words inside the texts and tweets. To respect the order of the words in the sentence of the tweet, we also used the least common subsequence algorithm between the tweet text and the document Figure 5 : Percentages of common words in tweet and a three sentence window in the article. The maximum match from all percentages is chosen for an article. The red horizontal line is the mean is 26.6%, and standard deviation 17%.",
"cite_spans": [],
"ref_spans": [
{
"start": 283,
"end": 291,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Longest Common Subsequence Match Inside a Window for the Text",
"sec_num": "4.5"
},
{
"text": "text. This subsequence matching was done inside a sentence window of 5 sentences. Again, the final result for the article was the window in which the maximum percentage was recorded among all windows. The percentage match was calculated using the number of words in the tweet as the denominator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Longest Common Subsequence Match Inside a Window for the Text",
"sec_num": "4.5"
},
{
"text": "If lcs(t, a) is the longest common subsequence between the tweet t and article a, unigrams(x) is the set of unigrams for a text x, then the percentage of match for the lcs as compared to the tweet, l is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Longest Common Subsequence Match Inside a Window for the Text",
"sec_num": "4.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "l = |lcs(t, a)| |unigrams(t)| * 100",
"eq_num": "(4)"
}
],
"section": "Longest Common Subsequence Match Inside a Window for the Text",
"sec_num": "4.5"
},
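Equation (4) rests on a standard word-level longest common subsequence, which can be sketched with the usual dynamic-programming table. This is an illustration, not the authors' implementation; the function names are assumptions, and the denominator follows Eq. (4)'s |unigrams(t)|:

```python
def lcs_length(xs, ys):
    """Length of the longest common subsequence of two token lists (DP)."""
    dp = [[0] * (len(ys) + 1) for _ in range(len(xs) + 1)]
    for i, x in enumerate(xs, 1):
        for j, y in enumerate(ys, 1):
            if x == y:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def lcs_match(tweet_tokens, window_tokens):
    """Eq. (4): LCS length as a percentage of |unigrams(t)|."""
    denom = len(set(tweet_tokens))
    if denom == 0:
        return 0.0
    return lcs_length(tweet_tokens, window_tokens) / denom * 100
```

Unlike the bag-of-words measures, this rewards words that appear in the same relative order in the tweet and the window.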
{
"text": "These numbers are shown in Figure 6 . The mean here is 44.6% and the standard deviation is 22.7%.",
"cite_spans": [],
"ref_spans": [
{
"start": 27,
"end": 35,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Longest Common Subsequence Match Inside a Window for the Text",
"sec_num": "4.5"
},
{
"text": "As seen in the results of the analyses performed in Section 4, the tweets have little in common with the articles they are linked to. This shows that extractive summarization algorithms can only recover a small proportion of the indicative tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interaction with Formality",
"sec_num": "5"
},
{
"text": "To tie in the results of the findings above with some intuitive notions about the text and see how formality interacts with the results, we also calculated the formality of the articles. This formality score was correlated with the longest common subsequence measure that we defined above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interaction with Formality",
"sec_num": "5"
},
{
"text": "We assume that the formality of an article can be estimated by the formality of the words and Figure 6 : Percentages of words matching in tweet and document text using an LCS algorithm. Mean is 44.6%, which is shown by the red horizontal line, and standard deviation is 22.7%. phrases in the article. We used the formality lexicon of Brooke and Hirst (2013) . They calculate formality scores for words and sentences by training a model on a large corpus based on the appearance of words in specific documents. Their model represents words as vectors and the formal and informal seeds appear in opposite halves of the graphs, suggesting that we can use these seeds to determine if an article is formal or informal. The lexicon consists of words and phrases and their degree of formality. Thus, more formal words are marked on a positive scale and informal words like those occurring in colloquial language are marked on a negative scale.",
"cite_spans": [
{
"start": 334,
"end": 357,
"text": "Brooke and Hirst (2013)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 94,
"end": 102,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Interaction with Formality",
"sec_num": "5"
},
{
"text": "Let the set of formality expressions from the lexicon be L, and the formality score for an expression e be score(e). Let the set of all substrings from the article substrings(a) be S. Then, the formality score f for an article a is the number of formal expressions per 10 words in article is f = e\u2208L&e\u2208S score(e) |unigrams(a)| * 10",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interaction with Formality",
"sec_num": "5"
},
{
"text": "The formality lexicon gave positive weights for formal expressions and negative for informal expressions. When we computed f using both formal and informal expressions, we found that the informal words predominated and \"swamped\" the signal of the formal words, leading to incomprehensible results. Thus, we discarded the informal words and used only the weights from the formal words in our final calculations. To check that these formality scores made sense intuitively, we calculated the average formality score for the articles belonging to each hashtag and ordered them, as shown in Table 7 .",
"cite_spans": [],
"ref_spans": [
{
"start": 587,
"end": 594,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Interaction with Formality",
"sec_num": "5"
},
{
"text": "Highest #theforceawakens #KevinVickers #TaylorSwift #erdogan #winteriscoming #apec Table 6 : Table of hashtags (broadly, topics) with highest and lowest formality according to the lexicon.",
"cite_spans": [],
"ref_spans": [
{
"start": 83,
"end": 90,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Lowest",
"sec_num": null
},
{
"text": "This formality score for each article was correlated with the percentage of match obtained using the longest common subsequence algorithm. The Pearson correlation value was 0.41, with a pvalue of 7.08e-66, indicating that the interaction between formality and overlap was highly significant. Hence, we can say that the more formal the subject or the article, the better the tweet can be extracted from the article. Table 7 gives an example of the formality of the article, which is a low 4.2 formality words per 10 words, where the tweet is not extracted from the article, but rephrased from the article instead.",
"cite_spans": [],
"ref_spans": [
{
"start": 415,
"end": 422,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Lowest",
"sec_num": null
},
{
"text": "Why Buffalo got clobbered with snow and Toronto did not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tweet @globetoronto:",
"sec_num": null
},
{
"text": "#weather #snowstorm http://t.co/gcwwoDPZmX... http://t.co/BXY7EH6F3u\" Title What caused Buffalos massive snow and why Toronto got lucky Text Torontonians have long been the butt of jokes about calling in the army every time a few snow flurries whip by... Table 7 : Example of a tweet, title of the article where the formality of the article is over the mean, and the tweet is extracted from the article.",
"cite_spans": [],
"ref_spans": [
{
"start": 255,
"end": 262,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tweet @globetoronto:",
"sec_num": null
},
{
"text": "We speculate that tweets associated with less formal articles may contain more abbreviations and non-standard words or spellings, which decreases the amount of overlap. We plan to experiment with tweet normalization systems to account for this factor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tweet @globetoronto:",
"sec_num": null
},
{
"text": "Having presented the above statistics showing that only a small portion of indicative tweets can be recovered from the article they link to if viewed as an extractive summarization problem, the question then becomes, how should we view the process of tweet generation?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "6"
},
{
"text": "We think that one promising direction is to model more explicitly the intent, or the purpose of the tweets. There have been several studies on classifying intents in tweets, but in many cases the intents are general, high-level intents of the tweets, more akin to classifying the topic or genre of the tweet than the intent. Wang et al. (2015) classify intents as food and drink, travel, career and so on, ones that can directly be used as intents for purchasing and can be utilized for advertisements. They also focus on finding tweets with intent and then classifying those. Banerjee et al. (2012) analyze real time data to detect presence of intents in tweets. G\u00f3mez-Adorno et al. (2014) use features from text and stylistics to determine user intentions, which are classified as news report, opinion, publicity and so on. Mohammad et al. (2013) study the classification of user intents specifically for tweets related to elections. They study one election and classify tweets as ones that agree or disagree with the candidate, ones that are meant for humour, support and so on.",
"cite_spans": [
{
"start": 325,
"end": 343,
"text": "Wang et al. (2015)",
"ref_id": "BIBREF20"
},
{
"start": 577,
"end": 599,
"text": "Banerjee et al. (2012)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "6"
},
{
"text": "These definitions of intent, while a promising start, will not be sufficient for tweet generation. For this purpose, intent would be the reason the user chose to share the article with that particular text. This would include reasons like support some cause, promote a product or an article, agree or disagree with an event, or express an opinion about it. Identifying these intents will help provide parameters for generating tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "6"
},
{
"text": "We have described a study that investigates whether indicative tweet generation can be viewed as an extractive summarization problem. By analyzing a collection of indicative tweets that we collected according to measures inspired by extractive summarization evaluation measures, we find that most tweets cannot be recovered from the article that they link to, demonstrating a limit to the effectiveness of extractive methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We further performed an analysis to determine the role of formality differences between the source article and the Twitter genre. We find evidence that formality is an important factor, as the less formal the source article is, the less extrac-tive the tweets seem to be. Future methods that can change the level of formality of a piece of text without changing the contents will be needed, as will those that explicitly consider the intended use of the tweet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://about.twitter.com/company",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://pypi.python.org/pypi/newspaper",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their comments and suggestions, and Julian Brooke for the formality lexicon used in a part of this study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Towards analyzing micro-blogs for detection and classification of real-time intentions",
"authors": [
{
"first": "Nilanjan",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Chakraborty",
"suffix": ""
},
{
"first": "Anupam",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Mittal",
"suffix": ""
},
{
"first": "Angshu",
"middle": [],
"last": "Rai",
"suffix": ""
},
{
"first": "Balaraman",
"middle": [],
"last": "Ravindran",
"suffix": ""
}
],
"year": 2012,
"venue": "ICWSM",
"volume": "",
"issue": "",
"pages": "391--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nilanjan Banerjee, Dipanjan Chakraborty, Anupam Joshi, Sumit Mittal, Angshu Rai, and Balaraman Ravindran. 2012. Towards analyzing micro-blogs for detection and classification of real-time inten- tions. In ICWSM, pages 391-394.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A multidimensional bayesian approach to lexical style",
"authors": [
{
"first": "Julian",
"middle": [],
"last": "Brooke",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2013,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "673--679",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julian Brooke and Graeme Hirst. 2013. A multi- dimensional bayesian approach to lexical style. In HLT-NAACL, pages 673-679.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Event summarization using tweets",
"authors": [
{
"first": "Deepayan",
"middle": [],
"last": "Chakrabarti",
"suffix": ""
},
{
"first": "Kunal",
"middle": [],
"last": "Punera",
"suffix": ""
}
],
"year": 2011,
"venue": "ICWSM",
"volume": "11",
"issue": "",
"pages": "66--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deepayan Chakrabarti and Kunal Punera. 2011. Event summarization using tweets. ICWSM, 11:66-73.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Extracting diverse sentiment expressions with target-dependent polarity from twitter",
"authors": [
{
"first": "Lu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wenbo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Meenakshi",
"middle": [],
"last": "Nagarajan",
"suffix": ""
},
{
"first": "Shaojun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amit P",
"middle": [],
"last": "Sheth",
"suffix": ""
}
],
"year": 2012,
"venue": "ICWSM",
"volume": "",
"issue": "",
"pages": "50--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lu Chen, Wenbo Wang, Meenakshi Nagarajan, Shao- jun Wang, and Amit P Sheth. 2012. Extracting diverse sentiment expressions with target-dependent polarity from twitter. In ICWSM, pages 50-57.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Towards robust abstractive multi-document summarization: A caseframe analysis of centrality and domain",
"authors": [
{
"first": "Jackie",
"middle": [
"Chi"
],
"last": "",
"suffix": ""
},
{
"first": "Kit",
"middle": [],
"last": "Cheung",
"suffix": ""
},
{
"first": "Gerald",
"middle": [],
"last": "Penn",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL (1)",
"volume": "",
"issue": "",
"pages": "1233--1242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jackie Chi Kit Cheung and Gerald Penn. 2013. To- wards robust abstractive multi-document summa- rization: A caseframe analysis of centrality and do- main. In ACL (1), pages 1233-1242.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Topic-focused multi-document summarization using an approximate oracle score",
"authors": [
{
"first": "M",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Judith",
"middle": [
"D"
],
"last": "Conroy",
"suffix": ""
},
{
"first": "Dianne P O'",
"middle": [],
"last": "Schlesinger",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Leary",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the COLING/ACL on Main conference poster sessions",
"volume": "",
"issue": "",
"pages": "152--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John M Conroy, Judith D Schlesinger, and Dianne P O'Leary. 2006. Topic-focused multi-document summarization using an approximate oracle score. In Proceedings of the COLING/ACL on Main con- ference poster sessions, pages 152-159. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Entropy-based classification of'retweeting'activity on twitter",
"authors": [
{
"first": "Rumi",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Tawan",
"middle": [],
"last": "Surachawala",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Lerman",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1106.0346"
]
},
"num": null,
"urls": [],
"raw_text": "Rumi Ghosh, Tawan Surachawala, and Kristina Lerman. 2011. Entropy-based classification of'retweeting'activity on twitter. arXiv preprint arXiv:1106.0346.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Content and style features for automatic detection of users intentions in tweets",
"authors": [
{
"first": "Helena",
"middle": [],
"last": "G\u00f3mez-Adorno",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Pinto",
"suffix": ""
},
{
"first": "Manuel",
"middle": [],
"last": "Montes",
"suffix": ""
},
{
"first": "Grigori",
"middle": [],
"last": "Sidorov",
"suffix": ""
},
{
"first": "Rodrigo",
"middle": [],
"last": "Alfaro",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Artificial Intelligence-IBERAMIA 2014",
"volume": "",
"issue": "",
"pages": "120--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helena G\u00f3mez-Adorno, David Pinto, Manuel Montes, Grigori Sidorov, and Rodrigo Alfaro. 2014. Con- tent and style features for automatic detection of users intentions in tweets. In Advances in Artifi- cial Intelligence-IBERAMIA 2014, pages 120-128. Springer.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Comparing presentation summaries: slides vs. reading vs. listening",
"authors": [
{
"first": "Liwei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Sanocki",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Grudin",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the SIGCHI conference on Human Factors in Computing Systems",
"volume": "",
"issue": "",
"pages": "177--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liwei He, Elizabeth Sanocki, Anoop Gupta, and Jonathan Grudin. 2000. Comparing presentation summaries: slides vs. reading vs. listening. In Pro- ceedings of the SIGCHI conference on Human Fac- tors in Computing Systems, pages 177-184. ACM.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Comparing twitter summarization algorithms for multiple post summaries",
"authors": [
{
"first": "David",
"middle": [],
"last": "Inouye",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kalita",
"suffix": ""
}
],
"year": 2011,
"venue": "Privacy, Security, Risk and Trust (PASSAT) and 2011 IEEE Third Inernational Conference on Social Computing",
"volume": "",
"issue": "",
"pages": "298--306",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Inouye and Jugal K Kalita. 2011. Com- paring twitter summarization algorithms for mul- tiple post summaries. In Privacy, Security, Risk and Trust (PASSAT) and 2011 IEEE Third Iner- national Conference on Social Computing (Social- Com), 2011 IEEE Third International Conference on, pages 298-306. IEEE.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A dependency parser for tweets",
"authors": [
{
"first": "Lingpeng",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Archna",
"middle": [],
"last": "Bhatia",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1001--1012",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lingpeng Kong, Nathan Schneider, Swabha Swayamdipta, Archna Bhatia, Chris Dyer, and Noah A. Smith. 2014. A dependency parser for tweets. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1001-1012, Doha, Qatar, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Rouge: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out: Proceedings of the ACL-04 Workshop",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Work- shop, pages 74-81.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Towards automatic tweet generation: A comparative study from the text summarization perspective in the journalism genre",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Lloret",
"suffix": ""
},
{
"first": "Manuel",
"middle": [],
"last": "Palomar",
"suffix": ""
}
],
"year": 2013,
"venue": "Expert Systems with Applications",
"volume": "40",
"issue": "16",
"pages": "6624--6630",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elena Lloret and Manuel Palomar. 2013. Towards automatic tweet generation: A comparative study from the text summarization perspective in the jour- nalism genre. Expert Systems with Applications, 40(16):6624-6630.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "iparticipate: Automatic tweet generation from local government data",
"authors": [
{
"first": "Christoph",
"middle": [],
"last": "Lofi",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Krestel",
"suffix": ""
}
],
"year": 2012,
"venue": "Database Systems for Advanced Applications",
"volume": "",
"issue": "",
"pages": "295--298",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christoph Lofi and Ralf Krestel. 2012. iparticipate: Automatic tweet generation from local government data. In Database Systems for Advanced Applica- tions, pages 295-298. Springer.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Automatic summarization",
"authors": [
{
"first": "Inderjeet",
"middle": [],
"last": "Mani",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Inderjeet Mani. 2001. Automatic summarization, vol- ume 3. John Benjamins Publishing.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The stanford corenlp natural language processing toolkit",
"authors": [
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Steven",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mc-Closky",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J Bethard, and David Mc- Closky. 2014. The stanford corenlp natural lan- guage processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computa- tional Linguistics: System Demonstrations, pages 55-60.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Identifying purpose behind electoral tweets",
"authors": [
{
"first": "M",
"middle": [],
"last": "Saif",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Martin",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Second International Workshop on Issues of Sentiment Discovery and Opinion Mining",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M Mohammad, Svetlana Kiritchenko, and Joel Martin. 2013. Identifying purpose behind elec- toral tweets. In Proceedings of the Second Interna- tional Workshop on Issues of Sentiment Discovery and Opinion Mining, pages 1-9. ACM.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Tweetmotif: Exploratory search and topic summarization for Twitter",
"authors": [
{
"first": "O'",
"middle": [],
"last": "Brendan",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Krieger",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ahn",
"suffix": ""
}
],
"year": 2010,
"venue": "ICWSM",
"volume": "",
"issue": "",
"pages": "384--385",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brendan O'Connor, Michel Krieger, and David Ahn. 2010. Tweetmotif: Exploratory search and topic summarization for Twitter. In ICWSM, pages 384- 385.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Improved part-of-speech tagging for online conversational text with word clusters",
"authors": [
{
"first": "Olutobi",
"middle": [],
"last": "Owoputi",
"suffix": ""
},
{
"first": "O'",
"middle": [],
"last": "Brendan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Schneider",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "380--390",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A. Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 380-390, Atlanta, Georgia, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Socially-informed timeline generation for complex events",
"authors": [
{
"first": "Lu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Galen",
"middle": [],
"last": "Marchetti",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of Human Language Technologies: The 2015 Annual Conference of the North American Chapter of the ACL",
"volume": "",
"issue": "",
"pages": "1055--1065",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lu Wang, Claire Cardie, and Galen Marchetti. 2014. Socially-informed timeline generation for complex events. In Proceedings of Human Language Tech- nologies: The 2015 Annual Conference of the North American Chapter of the ACL, pages 1055-1065.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Mining user intents in twitter: A semi-supervised approach to inferring intent categories for tweets",
"authors": [
{
"first": "Jinpeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Gao",
"middle": [],
"last": "Cong",
"suffix": ""
},
{
"first": "Xin",
"middle": [
"Wayne"
],
"last": "Zhao",
"suffix": ""
},
{
"first": "Xiaoming",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2015,
"venue": "Twenty-Ninth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "339--345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinpeng Wang, Gao Cong, Xin Wayne Zhao, and Xi- aoming Li. 2015. Mining user intents in twitter: A semi-supervised approach to inferring intent cate- gories for tweets. In Twenty-Ninth AAAI Conference on Artificial Intelligence, pages 339-345.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Utilizing microblogs for automatic news highlights extraction",
"authors": [
{
"first": "Zhongyu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "872--883",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhongyu Wei and Wei Gao. 2014. Utilizing mi- croblogs for automatic news highlights extraction. In Proceedings of COLING 2014, the 25th Inter- national Conference on Computational Linguistics: Technical Papers, pages 872-883.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Distribution of unigram match percentage over unique tweets and articles. The mean is 29.53%, indicated by the red horizontal line, with a standard deviation of 20.2%",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "Histogram of number of unique tweetarticle pairs vs number of unigrams matched. The mean number of unigrams matched per tweetarticle pair is 3.9.",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "Distribution of bigram match percentage over the tweet-article pair. The mean here is 10.73% shown by the red horizontal line, with a standard deviation of 18.5% Histogram of number of unique tweetarticle pairs vs number of bigrams matched. The mean number of bigrams matched per article is 1.9. to the unigram matching done earlier. For the set of bigrams for a text x, bigrams(x), percentage of matching bigrams b for the tweet t and article a is: shows the percentages of matched bigrams found. The mean is 10.73 with a standard deviation of 18.5. As seen in the figure, most of the tweet-article pairs have no matched bigrams. The percentage then increases to reflect the complete matches found above.",
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"text": "Examples of the hashtags used for extraction, grouped into various categories.",
"num": null,
"content": "<table><tr><td/><td>Science &amp; Technology</td></tr><tr><td>#apec2014</td><td>#rosetta</td></tr><tr><td>#G20</td><td>#lollipop</td></tr><tr><td>#oscarpistorius</td><td>#mangalayan</td></tr><tr><td>Events</td><td>Films and Pop culture</td></tr><tr><td>#haiyan</td><td>#TaylorSwift</td></tr><tr><td>#memorialday</td><td>#theforceawakens</td></tr><tr><td>#ottawashootings</td><td>#johnoliver</td></tr><tr><td>International</td><td>Sports</td></tr><tr><td>#berlinwall</td><td>#ausvssa</td></tr><tr><td>#ebola</td><td>#playingitmyway</td></tr><tr><td>#erdogan</td><td>#nycmarathon</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF1": {
"text": "Tweet Officer Wilson will be returned to active duty if no indictment, says #Ferguson Police Chief http://t.co/zrRIBxMUYJ Title Jackson clarifies comments on Wilson's future status Text ...Chief Jackson said if the grand jury does not indict Wilson, he will immediately return to active duty....",
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table"
},
"TABREF2": {
"text": "Example of a tweet, title of the article and the text when tweet can be extracted from text. The matched portions of the tweet and article are in bold.",
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table"
}
}
}
}