{
"paper_id": "R19-1002",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:02:31.529295Z"
},
"title": "Identification of Good and Bad News on Twitter",
"authors": [
{
"first": "Piush",
"middle": [],
"last": "Aggarwal",
"suffix": "",
"affiliation": {},
"email": "piush.aggarwal@stud.uni-due.de"
},
{
"first": "Ahmet",
"middle": [],
"last": "Aker",
"suffix": "",
"affiliation": {},
"email": "a.aker@is.inf.uni-due.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Social media plays a great role in news dissemination, which includes good and bad news. However, studies show that news, in general, has a significant impact on our mental state and that this influence is greater for bad news. An ideal situation would be to have a tool that can help filter out the type of news we do not want to consume. In this paper, we provide the basis for such a tool. In our work, we focus on Twitter. We release a manually annotated dataset containing 6,853 tweets from 5 different topical categories. Each tweet is annotated with good and bad labels. We also investigate various machine learning systems and features and evaluate their performance on the newly generated dataset. We also perform a comparative analysis with sentiments, showing that sentiment alone is not enough to distinguish between good and bad news.",
"pdf_parse": {
"paper_id": "R19-1002",
"_pdf_hash": "",
"abstract": [
{
"text": "Social media plays a great role in news dissemination, which includes good and bad news. However, studies show that news, in general, has a significant impact on our mental state and that this influence is greater for bad news. An ideal situation would be to have a tool that can help filter out the type of news we do not want to consume. In this paper, we provide the basis for such a tool. In our work, we focus on Twitter. We release a manually annotated dataset containing 6,853 tweets from 5 different topical categories. Each tweet is annotated with good and bad labels. We also investigate various machine learning systems and features and evaluate their performance on the newly generated dataset. We also perform a comparative analysis with sentiments, showing that sentiment alone is not enough to distinguish between good and bad news.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Social media sites like Twitter, Facebook, Reddit, etc. have become a major source of information seeking. They give users the chance to shout to the world in search of vanity, attention, or just shameless self-promotion. There is a lot of personal discussion, but at the same time there is a base of useful, knowledgeable content that is worth considering in the public interest. For example, on Twitter, tweets may report news related to recent events such as natural or man-made disasters, discoveries made, local or global election outcomes, health reports, financial updates, etc. In all cases, there are good and bad news scenarios.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Studies show that news, in general, has a significant impact on our mental stature (Johnston and Davey, 1997) . However, it is also demonstrated that the influence of bad news is more significant than good news (Soroka, 2006; Baumeister et al., 2001) and that due to the natural negativity bias, as described by (Rozin and Royzman, 2001 ), humans may end up consuming more bad than good news. Since bad news travels faster than good news (Kamins et al., 1997; Hansen et al., 2011) the consumption may increase. This is a real threat to the society as according to medical doctors and, psychologists exposure to bad news may have severe and long-lasting negative effects for our well being and lead to stress, anxiety, and depression (Johnston and Davey, 1997) . (Milgrom, 1981; BRAUN et al., 1995; Conrad et al., 2002; Soroka, 2006) describe crucial role of good and bad news on financial markets. For instance, bad news about unemployment is likely to affect stock markets and in turn, the overall economy (Boyd et al., 2005) . Differentiating between good and bad news may help readers to combat this issue and a system that filters news based on the content may enable them to control the amount of bad news they are consuming.",
"cite_spans": [
{
"start": 83,
"end": 109,
"text": "(Johnston and Davey, 1997)",
"ref_id": "BIBREF14"
},
{
"start": 211,
"end": 225,
"text": "(Soroka, 2006;",
"ref_id": "BIBREF28"
},
{
"start": 226,
"end": 250,
"text": "Baumeister et al., 2001)",
"ref_id": "BIBREF2"
},
{
"start": 312,
"end": 336,
"text": "(Rozin and Royzman, 2001",
"ref_id": "BIBREF26"
},
{
"start": 438,
"end": 459,
"text": "(Kamins et al., 1997;",
"ref_id": "BIBREF15"
},
{
"start": 460,
"end": 480,
"text": "Hansen et al., 2011)",
"ref_id": "BIBREF11"
},
{
"start": 733,
"end": 759,
"text": "(Johnston and Davey, 1997)",
"ref_id": "BIBREF14"
},
{
"start": 762,
"end": 777,
"text": "(Milgrom, 1981;",
"ref_id": "BIBREF20"
},
{
"start": 778,
"end": 797,
"text": "BRAUN et al., 1995;",
"ref_id": "BIBREF4"
},
{
"start": 798,
"end": 818,
"text": "Conrad et al., 2002;",
"ref_id": "BIBREF5"
},
{
"start": 819,
"end": 832,
"text": "Soroka, 2006)",
"ref_id": "BIBREF28"
},
{
"start": 1007,
"end": 1026,
"text": "(Boyd et al., 2005)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The aim of this paper is to provide the basis for developing such a filtering system to help readers in their selection process. We focus on Twitter and aim to develop such a filtering system for tweets. In this respect, the contributions of this work are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We introduce a new task, namely the distinction between good and bad news on Twitter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We provide the community with a new gold standard dataset containing 6,853 tweets. Each tweet is labeled either as good or bad.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To the best of our knowledge, this is the first dataset containing tweets with good and bad labels. The dataset is publicly accessible and can be used for further research 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We provide guidelines to annotate good/bad news on Twitter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We implement several feature-based approaches and report their performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 The dataset covers diverse domains. We also run out-of-domain experiments and report system performance when models are trained on in-domain and tested on out-of-domain data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the following, we first discuss related work. In Section 3 we discuss the guidelines that we use to annotate tweets and gather our dataset. Section 4 describes the data itself. In Section 5 we describe several baseline systems performing the good and bad news classification as well as the features used to guide the systems. In Section 6 we report and analyze the results. Finally, we conclude the paper in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In terms of classifying tweets into the good and bad classes no prior work exists. The closest studies to our work, are those performing sentiment classification in Twitter (Nakov et al., 2016; Rosenthal et al., 2017) . Kouloumpis et al. (2011) use n-gram, lexicon, part of speech and micro-blogging features for detecting sentiment in tweets. Similar features are used by Go (2009) . More recently researchers also investigated deep learning strategies to tackle the tweet level sentiment problem (Severyn and Moschitti, 2015; Ren et al., 2016) . Twitter is multi-lingual and in Mozeti\u010d et al. (2016) the idea of multi-lingual sentiment classification is investigated. The task, as well as approaches proposed for determining tweet level sentiment, are nicely summarized in the survey paper of Kharde et al. (2016) . However, Balahur et al. (2010) reports that there is no link between good and bad news with positive and negative sentiment respectively.",
"cite_spans": [
{
"start": 173,
"end": 193,
"text": "(Nakov et al., 2016;",
"ref_id": "BIBREF22"
},
{
"start": 194,
"end": 217,
"text": "Rosenthal et al., 2017)",
"ref_id": "BIBREF25"
},
{
"start": 220,
"end": 244,
"text": "Kouloumpis et al. (2011)",
"ref_id": "BIBREF18"
},
{
"start": 373,
"end": 382,
"text": "Go (2009)",
"ref_id": "BIBREF10"
},
{
"start": 498,
"end": 527,
"text": "(Severyn and Moschitti, 2015;",
"ref_id": "BIBREF27"
},
{
"start": 528,
"end": 545,
"text": "Ren et al., 2016)",
"ref_id": "BIBREF24"
},
{
"start": 580,
"end": 601,
"text": "Mozeti\u010d et al. (2016)",
"ref_id": "BIBREF21"
},
{
"start": 795,
"end": 815,
"text": "Kharde et al. (2016)",
"ref_id": "BIBREF17"
},
{
"start": 827,
"end": 848,
"text": "Balahur et al. (2010)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Thus, unlike related work, we do tweet level good vs. bad news classification. We also show that similar to Balahur et al. (2010) , there is no evidence that positive sentiment implies good news and negative sentiment bad news. ",
"cite_spans": [
{
"start": 108,
"end": 129,
"text": "Balahur et al. (2010)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "News can be good for one section of society but bad for another. For example, win- or loss-related news is always subjective. In such cases, agreement on the news type (good or bad) is quite low. On the other hand, news related to natural disasters, geographical changes, humanity, women empowerment, etc. shows very high agreement. Therefore, topicality plays an important role in defining news types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Good vs Bad News",
"sec_num": "3"
},
{
"text": "We consider news as good news if it relates to low subjective topics and includes positive overtones such as recoveries, breakthroughs, cures, wins, and celebrations (Harcup and ONeill, 2017) and also beneficial for an individual, a group or society. An example of good news is shown in Figure 1 . In contrary to that, the bad news is defined as when it relates to the low subjective topic and include negative overtones such as death, injury, defeat, loss and is not beneficial for an individual, a group or society. An example of bad news is shown in Figure 2 . Based on these definitions/guidelines we have gathered our dataset (see next Section) of tweets containing the good and bad labels. We retrieve the examples from Twitter using its API 2 . Next, we discard non-English tweets and re-tweets. We also remove duplicates based on lower-cased first four words of tweets keeping only the first one. Thereafter, we filter only those tweets which can be regarded as news by using an in-house SVM classifier (Aggarwal, 2019) . This classifier is trained on tweets annotated with the labels news and not news. We use this classifier to remove not news tweets from the annotation task 3 . We select only tweets where the classifier prediction probability is greater than or equal to 80%. In Table 1 , we provide information about the topics and categories as well as statistics about the collected tweets that will be used for annotation (column collected).",
"cite_spans": [
{
"start": 166,
"end": 191,
"text": "(Harcup and ONeill, 2017)",
"ref_id": "BIBREF12"
},
{
"start": 1011,
"end": 1027,
"text": "(Aggarwal, 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 287,
"end": 295,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 553,
"end": 561,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 1292,
"end": 1299,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Good vs Bad News",
"sec_num": "3"
},
{
"text": "Data Annotation For data annotation, we use the figure-eight crowdsourcing service 4 . Before uploading our collected examples, we carried out a round of trial annotation of 300 randomly selected instances from our tweet collection corpus. The aims of the trial annotation were",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Category",
"sec_num": null
},
{
"text": "\u2022 to ensure the newsworthiness quality of our collected examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Category",
"sec_num": null
},
{
"text": "\u2022 to create test questions to ensure the quality of the annotators, for the rest of the data, which was carried out using crowdsourcing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Category",
"sec_num": null
},
{
"text": "\u2022 to test our guidelines described in Section 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Category",
"sec_num": null
},
{
"text": "We ask three annotators 5 to classify the selected examples into good and bad news. We also allowed a third category cannot say. We computed Fleiss' kappa (Fleiss, 1971) on the trial dataset for the three annotators. The value is 0.605, which indicates a rather high agreement. We used the 247 instances agreed on by all three annotators as test questions for the crowdsourcing platform.",
"cite_spans": [
{
"start": 155,
"end": 169,
"text": "(Fleiss, 1971)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Category",
"sec_num": null
},
{
"text": "During the crowd annotation, we showed each annotator 5 tweets per page and paid 3 US cents per tweet. For maintaining quality standards, in addition to the test questions, we applied a restriction so that annotation could be performed only by people from English speaking countries. We also made sure that each annotation was performed by at most 7 annotators and that an annotator agreement of min. 70% was met. Note if the agreement of 70% was met with fewer annotators then the system would not force an annotation to be done by 7 annotators but would finish earlier. The system requires 7 annotators if the minimum agreement requirement is not met. We only choose instances that are annotated by at least 3 annotators. In addition to the good and bad news categories we also ask annotators to provide their confidence score (range between 0-100%) for the label they have annotated 6 . We discarded all the tweets where we did not have at least 3 annotators with each having min. 50% confidence value. We also discarded tweets that are annotated by less than three annotators. We use a total of 7,212 tweets for annotation. After all filtering, we were left with 6,853 instances which were classified as good and bad news. Topic-wise numbers of successful annotations are displayed in the fourth column of Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 1310,
"end": 1317,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Category",
"sec_num": null
},
{
"text": "Inter Annotator Agreement To calculate agreement between the annotators of the crowdsourcing annotation results, we select the top three confident annotator labels for each sample. Based on this, we record an agreement of 0.614 as Fleiss' Kappa (Fleiss, 1971 ) score, indicating a good agreement among the annotators. We also claim stability in our annotation task because of the similarity of this score with that of the trial annotation.",
"cite_spans": [
{
"start": 245,
"end": 258,
"text": "(Fleiss, 1971",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Category",
"sec_num": null
},
{
"text": "We experiment with several machine learning approaches and features. Before using the tweets in decision making, we also apply simple preprocessing to them. In the following, we briefly outline these steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "5"
},
{
"text": "We use the ArkTokenizer (Gimpel et al., 2011) to tokenize the tweets. In addition to tokenization, we lowercase the text and remove any digits present in the text.",
"cite_spans": [
{
"start": 24,
"end": 45,
"text": "(Gimpel et al., 2011)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "5.1"
},
{
"text": "We extract nine features for each tweet and divide them into Structural, TF-IDF and Embeddings features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5.2"
},
{
"text": "Emoticons: We extract all the emoticons from the training data and use them as a binary feature, i.e. does a tweet contain a particular emoticon or not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structural features",
"sec_num": "5.2.1"
},
{
"text": "Interjections: We use an existing list of interjections 7 and, similar to Emoticons, use them as binary features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structural features",
"sec_num": "5.2.1"
},
{
"text": "Lexicons: We use existing positive and negative lexicons 8 and use them as a binary feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structural features",
"sec_num": "5.2.1"
},
{
"text": "We use the textblob 9 tool to compute a sentiment score for each tweet. The score ranges from -1 (negative) to 1 (positive).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment:",
"sec_num": null
},
{
"text": "This feature includes 36 different POS tags (uni-grams), which are used as binary features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS-Tag:",
"sec_num": null
},
{
"text": "Significant terms: Using tf-idf values, we also extract the top 300 terms (uni-grams and bi-grams, 300 in each case) from the training data and use them as binary features. Note that we extract separate uni-grams and bi-grams for good and bad news.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS-Tag:",
"sec_num": null
},
{
"text": "Tweet Characteristics: This feature contains tweet-specific characteristics such as the favorite count, the reply count, and the number of re-tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS-Tag:",
"sec_num": null
},
{
"text": "In this case, we simply use the training data to create a vocabulary of terms and use this vocabulary to extract features from each tweet. We use tf-idf representation for each vocabulary term.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TF-IDF",
"sec_num": "5.2.2"
},
{
"text": "Finally, we also use fastText-based embedding (Mikolov et al., 2018) vectors trained on Common Crawl (600 billion tokens).",
"cite_spans": [
{
"start": 46,
"end": 68,
"text": "(Mikolov et al., 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings",
"sec_num": "5.2.3"
},
{
"text": "We investigate 8 classifiers for our task including Multi-Layer Perceptron (MLPC), Support Vector Machine with linear (LSVC) and rbf (SVC) kernel, K Nearest Neighbour (KNN), Logistic Regression (LR), Random Forest (RF), XGBoost (XGB) and Decision Tree (DT). In addition, we also fine-tune BERT-base model (Devlin et al., 2018) . Each classifier, except BERT, has been trained and tested on each possible combination of the three feature types.",
"cite_spans": [
{
"start": 305,
"end": 326,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers",
"sec_num": "5.3"
},
{
"text": "Overall results We performed a stratified 5-fold cross-validation. We evaluate each resulting model on a held-back development dataset containing 264 good news postings and 764 bad news ones. The 5-fold cross-validation has been performed on the training data containing 4,332 bad news and 1,493 good news instances. For each model, we use grid search to select the hyper-parameters with the best efficiency. The results reported are those obtained on the test data and are summarized in Table 2 . Overall, we see that the performances of the classifiers are all highly satisfactory. Among the more traditional approaches, the best performance is obtained with SVC, LSVC, and LR. We also see that these approaches work best when embeddings are used along with tf-idf features, although LSVC achieves the same results when all features are used. However, the best performance is achieved with the BERT-base model, leading to a 92% F1 score. We also computed significance tests using a paired t-test between BERT and the more traditional machine learning approaches 10 . However, after Bonferroni correction (p < 0.007) we found no significant difference between BERT and the other systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 488,
"end": 495,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Structural feature analysis We also evaluate the structural features of the task independently (Figure 3). For this, we use the SVC classifier as it is one of the best performing traditional methods. From the figure, we see that the significant term feature gives the best performance. The difference to the other features is greater than 23% in F1 score. The differences are also significant after Bonferroni correction (p < 0.008). In Table 3 we list some frequent uni-grams from the significant good and bad term lists. From the table, we see that the terms are certainly good indicators for distinguishing between the two classes. 10 We always use the best result for every system. Sentiment for good-vs-bad news We also tested whether sentiment score can predict good vs. bad news as Naveed et al. (2011) found a relationship between these two. For this, we use the textblob sentiment scorer and classify any tweet as good news when its sentiment score is greater than 0 otherwise bad. Using this strategy we could only achieve an F1 score of 55%. This shows that tackling the good/bad news classification task using sentiment scores is not appropriate. This also confirms the findings of Balahur et al. (2010).",
"cite_spans": [
{
"start": 635,
"end": 637,
"text": "10",
"ref_id": null
},
{
"start": 789,
"end": 809,
"text": "Naveed et al. (2011)",
"ref_id": "BIBREF23"
},
{
"start": 1194,
"end": 1215,
"text": "Balahur et al. (2010)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 95,
"end": 105,
"text": "(Figure 3)",
"ref_id": null
},
{
"start": 437,
"end": 444,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Out-of-domain experiments We also investigate how stable the models are when they are trained on in-domain data and tested on out-of-domain data. For this purpose, we split our dataset into a training set consisting of all examples except instances belonging to the health category. We use four of the best-performing systems (BERT, SVC, LSVC, and LR) to train on this training set. The resulting models are tested on the held-out health data. Results are shown in Figure 4 . From Figure 4 we see that BERT is stable and achieves an F1 score of 84%. The performance of the other systems drops by a great margin, to at most a 67% F1 score. From this, we can conclude that BERT is a better system to use for good-vs-bad Twitter news classification.",
"cite_spans": [],
"ref_spans": [
{
"start": 465,
"end": 473,
"text": "Figure 4",
"ref_id": null
},
{
"start": 481,
"end": 490,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Detailed analysis on BERT Both our overall and our out-of-domain experiments show that BERT is outperforming the more traditional machine learning approaches. On the overall (1,028 testing instances) results, BERT fails to classify only 63 cases correctly. Using t-SNE distribution (van der Maaten and Hinton, 2008), we analyse BERT's 12th layer embedding vectors (having 300 dimensions) for 100 random test points ( Figure 5 ). The analysis shows that BERT can classify semantics of good and bad news instances correctly even when the instances are in close proximity. From Figure 5 , we see that it is mostly the outliers that are misclassified.",
"cite_spans": [],
"ref_spans": [
{
"start": 417,
"end": 425,
"text": "Figure 5",
"ref_id": "FIGREF4"
},
{
"start": 575,
"end": 583,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "In this paper, we presented a new dataset of 6,853 tweets annotated with good and bad news labels. This dataset will be publicly available for the research community. We also presented a comparative analysis of supervised classification methods. We investigated nine different feature types and eight different machine learning classifiers. The most robust result in our analysis was the performance of the BERT-base model in both in-domain and out-of-domain evaluations. Among the structural features, significant terms significantly outperform the rest. We also showed that sentiment scores are not appropriate for classifying good-vs-bad news. In our future work, we plan to expand our investigation by including other features. We also plan to apply this model to the good-bad classification of news articles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "Dataset Data collection To collect tweets for annotation, we first choose ten low subjective topics which can be divided into five different categories. Then,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.tweepy.org 3 Since we want humans to annotate tweets as good and bad news, we apply this approach to filter out tweets that are not news at all, and so avoid our annotators spending valuable time annotating tweets that are not our target. 4 https://www.figure-eight.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "All are post-graduate students who are fluent in English and use Twitter to post information on a daily basis. 6 We found this strategy better than providing the option cannot say, and it later allowed us to discard annotations where the confidence score was less than 50%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.vidarholen.net/contents/interjections/ 8 http://www.cs.uic.edu/~liub/FBS/sentiment-analysis.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://textblob.readthedocs.io/en/dev/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by the Deutsche Forschungsgemeinschaft (DFG) under grant No. ZE 915/7-1 \"Data and Knowledge Processing (DKPro) -A middleware for language technology\", by the Global Young Faculty 11 and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -GRK 2167, Research Training Group \"User-Centred Social Media\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "8"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Classification approaches to identify informative tweets",
"authors": [
{
"first": "Piush",
"middle": [],
"last": "Aggarwal",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Student Research Workshop Associated with RANLP 2019",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piush Aggarwal. 2019. Classification approaches to identify informative tweets. In Proceedings of the Student Research Workshop Associated with RANLP 2019. Varna, Bulgaria, page To be published.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Sentiment analysis in the news",
"authors": [
{
"first": "Alexandra",
"middle": [],
"last": "Balahur",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Steinberger",
"suffix": ""
},
{
"first": "Mijail",
"middle": [],
"last": "Kabadjov",
"suffix": ""
},
{
"first": "Vanni",
"middle": [],
"last": "Zavarella",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Van Der Goot",
"suffix": ""
},
{
"first": "Matina",
"middle": [],
"last": "Halkia",
"suffix": ""
},
{
"first": "Bruno",
"middle": [],
"last": "Pouliquen",
"suffix": ""
},
{
"first": "Jenya",
"middle": [],
"last": "Belyaeva",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandra Balahur, Ralf Steinberger, Mijail Kabadjov, Vanni Zavarella, Erik van der Goot, Matina Halkia, Bruno Pouliquen, and Jenya Belyaeva. 2010. Sentiment analysis in the news.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bad is stronger than good",
"authors": [
{
"first": "Roy",
"middle": [
"F"
],
"last": "Baumeister",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Bratslavsky",
"suffix": ""
},
{
"first": "Catrin",
"middle": [],
"last": "Finkenauer",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [
"D"
],
"last": "Vohs",
"suffix": ""
}
],
"year": 2001,
"venue": "Review of General Psychology",
"volume": "5",
"issue": "4",
"pages": "323--370",
"other_ids": {
"DOI": [
"10.1037/1089-2680.5.4.323"
]
},
"num": null,
"urls": [],
"raw_text": "Roy F. Baumeister, Ellen Bratslavsky, Catrin Finkenauer, and Kathleen D. Vohs. 2001. Bad is stronger than good. Review of General Psychology 5(4):323-370. https://doi.org/10.1037/1089-2680.5.4.323. 11 https://www.global-young-faculty.de/",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The stock market's reaction to unemployment news: Why bad news is usually good for stocks",
"authors": [
{
"first": "John",
"middle": [
"H"
],
"last": "Boyd",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Ravi",
"middle": [],
"last": "Jagannathan",
"suffix": ""
}
],
"year": 2005,
"venue": "Journal of Finance",
"volume": "60",
"issue": "2",
"pages": "649--672",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John H. Boyd, Jian Hu, and Ravi Jagannathan. 2005. The stock market's reaction to unemployment news: Why bad news is usually good for stocks. Journal of Finance 60(2):649-672.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Good news, bad news, volatility, and betas",
"authors": [
{
"first": "Phillip",
"middle": [
"A"
],
"last": "Braun",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"B"
],
"last": "Nelson",
"suffix": ""
},
{
"first": "Alain",
"middle": [
"M"
],
"last": "Sunier",
"suffix": ""
}
],
"year": 1995,
"venue": "The Journal of Finance",
"volume": "50",
"issue": "5",
"pages": "1575--1603",
"other_ids": {
"DOI": [
"10.1111/j.1540-6261.1995.tb05189.x"
]
},
"num": null,
"urls": [],
"raw_text": "Phillip A. Braun, Daniel B. Nelson, and Alain M. Sunier. 1995. Good news, bad news, volatility, and betas. The Journal of Finance 50(5):1575-1603. https://doi.org/10.1111/j.1540-6261.1995.tb05189.x.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "When is bad news really bad news?",
"authors": [
{
"first": "Jennifer",
"middle": [],
"last": "Conrad",
"suffix": ""
},
{
"first": "Bradford",
"middle": [],
"last": "Cornell",
"suffix": ""
},
{
"first": "Wayne",
"middle": [
"R"
],
"last": "Landsman",
"suffix": ""
}
],
"year": 2002,
"venue": "The Journal of Finance",
"volume": "57",
"issue": "6",
"pages": "2507--2532",
"other_ids": {
"DOI": [
"10.1111/1540-6261.00504"
]
},
"num": null,
"urls": [],
"raw_text": "Jennifer Conrad, Bradford Cornell, and Wayne R. Landsman. 2002. When is bad news really bad news? The Journal of Finance 57(6):2507-2532. https://doi.org/10.1111/1540-6261.00504.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR abs/1810.04805.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Measuring nominal scale agreement among many raters",
"authors": [
{
"first": "Joseph",
"middle": [
"L"
],
"last": "Fleiss",
"suffix": ""
}
],
"year": 1971,
"venue": "Psychological Bulletin",
"volume": "76",
"issue": "5",
"pages": "378--382",
"other_ids": {
"DOI": [
"10.1037/h0031619"
]
},
"num": null,
"urls": [],
"raw_text": "Joseph L. Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin 76(5):378-382.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Part-of-speech tagging for twitter: Annotation, features, and experiments",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Mills",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Flanigan",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "42--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Gimpel, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. 2011. Part-of-speech tagging for twitter: Annotation, features, and experiments. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Portland, Oregon, USA, pages 42-47. https://www.aclweb.org/anthology/P11-2008.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Sentiment classification using distant supervision",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Go",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Go. 2009. Sentiment classification using distant supervision.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Good friends, bad news - affect and virality in twitter",
"authors": [
{
"first": "Lars",
"middle": [
"Kai"
],
"last": "Hansen",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Arvidsson",
"suffix": ""
},
{
"first": "Finn",
"middle": [
"Aarup"
],
"last": "Nielsen",
"suffix": ""
},
{
"first": "Elanor",
"middle": [],
"last": "Colleoni",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Etter",
"suffix": ""
}
],
"year": 2011,
"venue": "Communications in Computer and Information Science",
"volume": "",
"issue": "",
"pages": "34--43",
"other_ids": {
"DOI": [
"10.1007/978-3-642-22309-9_5"
]
},
"num": null,
"urls": [],
"raw_text": "Lars Kai Hansen, Adam Arvidsson, Finn Aarup Nielsen, Elanor Colleoni, and Michael Etter. 2011. Good friends, bad news - affect and virality in twitter. In Communications in Computer and Information Science, Springer Berlin Heidelberg, pages 34-43. https://doi.org/10.1007/978-3-642-22309-9_5.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "What is news?",
"authors": [
{
"first": "Tony",
"middle": [],
"last": "Harcup",
"suffix": ""
},
{
"first": "Deirdre",
"middle": [],
"last": "O'Neill",
"suffix": ""
}
],
"year": 2017,
"venue": "Journalism Studies",
"volume": "18",
"issue": "12",
"pages": "1470--1488",
"other_ids": {
"DOI": [
"10.1080/1461670X.2016.1150193"
]
},
"num": null,
"urls": [],
"raw_text": "Tony Harcup and Deirdre O'Neill. 2017. What is news? Journalism Studies 18(12):1470-1488.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The psychological impact of negative tv news bulletins: The catastrophizing of personal worries",
"authors": [
{
"first": "Wendy",
"middle": [
"M"
],
"last": "Johnston",
"suffix": ""
},
{
"first": "Graham",
"middle": [
"C",
"L"
],
"last": "Davey",
"suffix": ""
}
],
"year": 1997,
"venue": "British Journal of Psychology",
"volume": "88",
"issue": "1",
"pages": "85--91",
"other_ids": {
"DOI": [
"10.1111/j.2044-8295.1997.tb02622.x"
]
},
"num": null,
"urls": [],
"raw_text": "Wendy M. Johnston and Graham C. L. Davey. 1997. The psychological impact of negative tv news bulletins: The catastrophizing of personal worries. British Journal of Psychology 88(1):85-91. https://doi.org/10.1111/j.2044-8295.1997.tb02622.x.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Consumer responses to rumors: Good news, bad news",
"authors": [
{
"first": "Michael",
"middle": [
"A"
],
"last": "Kamins",
"suffix": ""
},
{
"first": "Valerie",
"middle": [
"S"
],
"last": "Folkes",
"suffix": ""
},
{
"first": "Lars",
"middle": [],
"last": "Perner",
"suffix": ""
}
],
"year": 1997,
"venue": "Journal of Consumer Psychology",
"volume": "6",
"issue": "2",
"pages": "165--187",
"other_ids": {
"DOI": [
"10.1207/s15327663jcp0602_03"
]
},
"num": null,
"urls": [],
"raw_text": "Michael A. Kamins, Valerie S. Folkes, and Lars Perner. 1997. Consumer responses to rumors: Good news, bad news. Journal of Consumer Psychology 6(2):165-187.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Sentiment analysis of twitter data: a survey of techniques",
"authors": [
{
"first": "Vishal",
"middle": [],
"last": "Kharde",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sonawane",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1601.06971"
]
},
"num": null,
"urls": [],
"raw_text": "Vishal Kharde, Prof Sonawane, et al. 2016. Sentiment analysis of twitter data: a survey of techniques. arXiv preprint arXiv:1601.06971.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Twitter sentiment analysis: The good the bad and the omg!",
"authors": [
{
"first": "Efthymios",
"middle": [],
"last": "Kouloumpis",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Johanna",
"middle": [
"D"
],
"last": "Moore",
"suffix": ""
}
],
"year": 2011,
"venue": "ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Efthymios Kouloumpis, Theresa Wilson, and Johanna D. Moore. 2011. Twitter sentiment analysis: The good the bad and the omg! In ICWSM.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Advances in pre-training distributed word representations",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Puhrsch",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Advances in pre-training distributed word representations. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Good news and bad news: Representation theorems and applications",
"authors": [
{
"first": "Paul",
"middle": [
"R"
],
"last": "Milgrom",
"suffix": ""
}
],
"year": 1981,
"venue": "The Bell Journal of Economics",
"volume": "12",
"issue": "2",
"pages": "380--391",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul R. Milgrom. 1981. Good news and bad news: Representation theorems and applications. The Bell Journal of Economics 12(2):380-391. http://www.jstor.org/stable/3003562.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Multilingual twitter sentiment classification: The role of human annotators",
"authors": [
{
"first": "Igor",
"middle": [],
"last": "Mozeti\u010d",
"suffix": ""
},
{
"first": "Miha",
"middle": [],
"last": "Gr\u010dar",
"suffix": ""
},
{
"first": "Jasmina",
"middle": [],
"last": "Smailovi\u0107",
"suffix": ""
}
],
"year": 2016,
"venue": "PloS one",
"volume": "11",
"issue": "5",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Igor Mozeti\u010d, Miha Gr\u010dar, and Jasmina Smailovi\u0107. 2016. Multilingual twitter sentiment classifica- tion: The role of human annotators. PloS one 11(5):e0155036.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Semeval-2016 task 4: Sentiment analysis in twitter",
"authors": [
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Sebastiani",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th international workshop on semantic evaluation",
"volume": "",
"issue": "",
"pages": "1--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Preslav Nakov, Alan Ritter, Sara Rosenthal, Fabrizio Sebastiani, and Veselin Stoyanov. 2016. Semeval-2016 task 4: Sentiment analysis in twitter. In Proceedings of the 10th international workshop on semantic evaluation (semeval-2016). pages 1-18.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Bad news travel fast: A content-based analysis of interestingness on twitter",
"authors": [
{
"first": "Nasir",
"middle": [],
"last": "Naveed",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Gottron",
"suffix": ""
},
{
"first": "J\u00e9r\u00f4me",
"middle": [],
"last": "Kunegis",
"suffix": ""
},
{
"first": "Arifah Che",
"middle": [],
"last": "Alhadi",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 3rd International Web Science Conference",
"volume": "",
"issue": "",
"pages": "8:1--8:7",
"other_ids": {
"DOI": [
"10.1145/2527031.2527052"
]
},
"num": null,
"urls": [],
"raw_text": "Nasir Naveed, Thomas Gottron, J\u00e9r\u00f4me Kunegis, and Arifah Che Alhadi. 2011. Bad news travel fast: A content-based analysis of interestingness on twitter. In Proceedings of the 3rd International Web Science Conference. ACM, New York, NY, USA, WebSci '11, pages 8:1-8:7. https://doi.org/10.1145/2527031.2527052.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Context-sensitive twitter sentiment classification using neural network",
"authors": [
{
"first": "Yafeng",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Meishan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Donghong",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2016,
"venue": "Thirtieth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yafeng Ren, Yue Zhang, Meishan Zhang, and Donghong Ji. 2016. Context-sensitive twitter sentiment classification using neural network. In Thirtieth AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Semeval-2017 task 4: Sentiment analysis in twitter",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th international workshop on semantic evaluation",
"volume": "",
"issue": "",
"pages": "502--518",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Rosenthal, Noura Farra, and Preslav Nakov. 2017. Semeval-2017 task 4: Sentiment analysis in twitter. In Proceedings of the 11th international workshop on semantic evaluation (SemEval-2017). pages 502-518.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Negativity bias, negativity dominance, and contagion",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Rozin",
"suffix": ""
},
{
"first": "Edward",
"middle": [
"B"
],
"last": "Royzman",
"suffix": ""
}
],
"year": 2001,
"venue": "Personality and Social Psychology Review",
"volume": "5",
"issue": "4",
"pages": "296--320",
"other_ids": {
"DOI": [
"10.1207/S15327957PSPR0504_2"
]
},
"num": null,
"urls": [],
"raw_text": "Paul Rozin and Edward B. Royzman. 2001. Negativity bias, negativity dominance, and contagion. Personality and Social Psychology Review 5(4):296-320. https://doi.org/10.1207/S15327957PSPR0504_2.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Unitn: Training deep convolutional neural network for twitter sentiment classification",
"authors": [
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th international workshop on semantic evaluation",
"volume": "",
"issue": "",
"pages": "464--469",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aliaksei Severyn and Alessandro Moschitti. 2015. Unitn: Training deep convolutional neural network for twitter sentiment classification. In Proceedings of the 9th international workshop on semantic evaluation (SemEval 2015). pages 464-469.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Good news and bad news: Asymmetric responses to economic information",
"authors": [
{
"first": "Stuart",
"middle": [
"N"
],
"last": "Soroka",
"suffix": ""
}
],
"year": 2006,
"venue": "The Journal of Politics",
"volume": "68",
"issue": "2",
"pages": "372--385",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stuart N. Soroka. 2006. Good news and bad news: Asymmetric responses to economic information. The Journal of Politics 68(2):372-385.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "https://github.com/aggarwalpiush/ goodBadNewsTweet",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Good news tweet",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Bad news tweet",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Structural features' performance using the SVM classifier evaluated on the test set. Out-of-domain performance of different systems.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "t-SNE distribution of random 100 test points with Bert's performance. The pie chart displays the percentage of BERT's misclassifications on these points.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"html": null,
"num": null,
"text": "Categories, their topics, and distributions for the dataset generation.",
"content": "<table/>",
"type_str": "table"
},
"TABREF3": {
"html": null,
"num": null,
"text": "F 1 (macro) scores of different classifiers on different feature types evaluated on the test data.",
"content": "<table><tr><td>Multi-</td></tr></table>",
"type_str": "table"
},
"TABREF5": {
"html": null,
"num": null,
"text": "Top uni-grams from the good and bad news significant term lists.",
"content": "<table/>",
"type_str": "table"
}
}
}
}