{
"paper_id": "W16-0403",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:52:27.671727Z"
},
"title": "Rumor Identification and Belief Investigation on Twitter",
"authors": [
{
"first": "Sardar",
"middle": [],
"last": "Hamidian",
"suffix": "",
"affiliation": {},
"email": "sardar@gwu.edu"
},
{
"first": "Mona",
"middle": [
"T"
],
"last": "Diab",
"suffix": "",
"affiliation": {},
"email": "mtdiab@gwu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Social media users spend several hours a day reading, posting, and searching for news on microblogging platforms. Social media is becoming a key means of discovering news; however, verifying the trustworthiness of this information is becoming ever more challenging. In this study, we address the problem of rumor detection and belief investigation on Twitter. We define a rumor as an unverifiable statement that spreads misinformation or disinformation. We adopt a supervised rumor classification task using a standard dataset. By employing the Tweet Latent Vector (TLV) feature, which creates a 100-dimensional vector representation of each tweet, we increase the rumor retrieval precision to 0.972. We also introduce a belief score and study the belief change among rumor posters between 2010 and 2016.",
"pdf_parse": {
"paper_id": "W16-0403",
"_pdf_hash": "",
"abstract": [
{
"text": "Social media users spend several hours a day reading, posting, and searching for news on microblogging platforms. Social media is becoming a key means of discovering news; however, verifying the trustworthiness of this information is becoming ever more challenging. In this study, we address the problem of rumor detection and belief investigation on Twitter. We define a rumor as an unverifiable statement that spreads misinformation or disinformation. We adopt a supervised rumor classification task using a standard dataset. By employing the Tweet Latent Vector (TLV) feature, which creates a 100-dimensional vector representation of each tweet, we increase the rumor retrieval precision to 0.972. We also introduce a belief score and study the belief change among rumor posters between 2010 and 2016.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Traditionally, television, radio, and newspapers were the only available news sources. They remain the most trusted sources, but there is a strong trend toward digital ones: a considerable proportion of newspaper readers now read digitally, and the number of people relying on social media as a news source has doubled since 2010. Social media lets anyone post news online with a single click, which is why breaking news often appears first on microblogs. Twitter is one of the most popular microblogging platforms, with more than 250 million users. Accessibility, speed, and ease of use have made Twitter a valuable platform for reading and sharing information. However, the same features that make Twitter, or any microblogging platform, a great resource, combined with a lack of supervision, make it fertile ground for malicious or accidental misinformation. This can lead to harmful incidents, especially in sensitive circumstances, with damaging effects on individuals and society. Many information seekers do not rely on a single source, but this is not always a safeguard, since even other news outlets sometimes rely on social media for breaking news. Smartphones enable everyone to capture and tweet every single moment, hours before TV cameras arrive. Social media is therefore an appealing option for those who crave novel news, but it can also deceive anyone with well-structured, well-formatted rumors. In this study we work on a standard dataset of rumors collected by Qazvinian et al. (Qazvinian et al., 2011) . In their work, a rumor is defined as a statement whose truth value is unverifiable or deliberately false. We use the same definition and do not investigate the stimulus behind rumor creation. We investigate the problem of detecting rumors in Twitter data. 
We start with the motivation behind this research, then review prior studies of rumors. We then describe the overall pipeline, in which we adopt a supervised machine learning framework, investigate the belief change for the President Obama rumor across three years, and finally compare our results to the current state-of-the-art performance on the task.",
"cite_spans": [
{
"start": 1634,
"end": 1675,
"text": "Qazvinian et al. (Qazvinian et al., 2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We show that our approach yields superior results compared to prior work to date.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There is an extensive body of related work on trustworthiness and misinformation detection. In this section we focus only on closely related work in the Natural Language Processing field that concentrates on information propagation and trustworthiness on social media, and especially on Twitter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "After the earthquake and tsunami that struck Japan on March 11, 2011, Takahashi and Igata, (Takahashi and Igata, 2012) targeted two sets of related rumor tweets about the earthquake. They built a model to detect other candidate rumor tweets through a sequence of processes: they first detect the target rumor list using entities, then calculate the retweet ratio for target rumors, and finally extract clue keywords by scoring each content word w as the ratio of its occurrences in correction tweets (num in correction(w)) over rumor tweets (num in rumor(w)). In a similar study, Vosoughi (Vosoughi, 2015) proposes a two-step rumor detection and verification model on Boston Marathon bombing tweets. A hierarchical clustering model is applied for rumor detection, and after a feature engineering process covering linguistic, user-identity, and pragmatic features, he adopts a Hidden Markov Model to determine the veracity of each rumor. He also analyzes the sentiment classification of tweets using contextual information, showing that tweets in different spatial, temporal, and authorial contexts have, on average, different sentiments. Sina Weibo is a popular Chinese microblogging platform similar to Twitter. Yang et al. (Yang et al., 2012 ) studied the rumor classification problem on both Twitter and Sina Weibo. They extended their primary features, including content, client, account, location, and propagation, with client-based features, which refer to the program used to post on a microblog, and a location-based binary feature indicating whether a post originates inside or outside China. Yang et al. cover a significant range of meta-data features but fewer sentiment and contextual features. The most relevant related work to ours is Qazvinian et al. (V11), which uses several sets of features, including content-based and network-based features. In our previous work (Hamidiain and Diab, 2015 )(S15) we used the V11 data set with a new set of features, more labels, and a different machine learning and experimental approach. 
We proposed Rumor Detection and Classification (RDC) in the context of microblogging social media, suggested supervised single-step and two-step models (SRDC and TRDC), and investigated the effectiveness of the proposed list of features and various preprocessing tasks.",
"cite_spans": [
{
"start": 71,
"end": 119,
"text": "Takahashi and Igata, (Takahashi and Igata, 2012)",
"ref_id": "BIBREF6"
},
{
"start": 648,
"end": 664,
"text": "(Vosoughi, 2015)",
"ref_id": "BIBREF7"
},
{
"start": 1299,
"end": 1317,
"text": "(Yang et al., 2012",
"ref_id": "BIBREF9"
},
{
"start": 1786,
"end": 1811,
"text": "(Hamidiain and Diab, 2015",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Social media and Trustworthiness",
"sec_num": "2.1"
},
{
"text": "3 Problem Definition and Approach S15 and V11 results indicate that content features outperform other features in the Rumor Retrieval (RR) task. In this study we perform the rumor retrieval task with a new set of features. We employ the content unigram feature, which led to the highest results among the content features, and the Tweet Latent Vector (TLV) to overcome the missing-word and short-tweet-length issues. We extend the V11 data set to investigate belief change for a specific rumor across different years.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Social media and Trustworthiness",
"sec_num": "2.1"
},
{
"text": "V11 published an annotated Twitter data set for five different established rumors, as listed in Table 1 . The general annotation guidelines are presented in Table 2 . The original data set as obtained from V11 did not contain the actual tweets for the Obama and Cellphone rumors, only the tweet IDs. Hence, we used the Twitter search API to download those tweets by tweet ID. Accordingly, the size of our data set differs from that of V11, amounting to roughly 9000 tweets in total for our experimentation, as shown in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 106,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 160,
"end": 167,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 565,
"end": 572,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "The following examples are a sample of each of the annotation labels 0 (the tweet is not about the rumor), 11 (the tweet endorses the rumor), and 12 (the tweet denies the rumor) from the Obama rumor collection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "\u2022 0: 2010-09-24 15:12:32 , nina1236 , Obama: Muslims' Right To Build A Manhattan Mosque: While celebrating Ramadan with Muslims at the White House, Presi... http://bit.ly/c0J2aI",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "\u2022 11: 2010-09-28 18:36:47 , Phanti , RT @IPlantSeeds: Obama Admits He Is A Muslim http://post.ly/10Sf7 -I thought he did that before he was elected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "\u2022 12: 2010-10-01 05:00:28 , secksaddict , barack obama was raised a christian he attended a church with jeremiah wright yet people still beleive hes a muslim",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "V11 collected its data with the Twitter search API using regular-expression queries over the period of 2009 to 2010. We ran the same queries with the same keywords for the Obama rumor and collected more than 7000 tweets from 2014 and 2016. The collected tweets are labeled by applying the Rumor Retrieval (RR) pipeline. We call the new data silver data and use it to investigate how belief has changed toward the \"Is Barack Obama Muslim?\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Silver Data",
"sec_num": "3.2"
},
{
"text": "rumor from 2010 to 2016. Table 3 shows statistics for the extracted tweets and silver data. We labeled the silver data as 0 (Non-Rumor), 11 (Believe), and a merged 12 (Deny/Doubtful/Neutral). For tagging the silver data we used the original Obama data set as the training set. Table 5 shows which labels are used for the rumor retrieval and silver-data creation experiments. ",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 252,
"end": 259,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Silver Data",
"sec_num": "3.2"
},
{
"text": "In designing the new set of features for the Rumor Retrieval (RR) task we considered two key points: first, addressing the missing-word and tweet-length issues (TLV), and second, extracting a feature that reflects the user's belief about each rumor. We also conduct an RR experiment applying the S15 features as one of our baselines. The new features we designed and employed in S15 are tagged with \"*\" in Table 6 . Untagged features are those used in V11.",
"cite_spans": [],
"ref_spans": [
{
"start": 423,
"end": 430,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "3.3"
},
{
"text": "The main intuition behind TLV is to create a latent vector representative of each tweet, since most tweets contain too few observed words to tell us what the sentence is about. We assume that the semantic space of both the observed and missing words makes up the complete semantic profile of a sentence. We propose the Tweet Latent Vector (TLV) feature by applying the Semantic Textual Similarity (STS) model proposed by (Guo and Diab, 2012) (Guo et al., 2014) , which is built on the WordNet+Wiktionary+Brown+training data set. STS preprocesses each short text with tokenization and stemming, removes infrequent words, applies TF-IDF weighting, and finally uses the model to extract the latent semantics, represented as a 100-dimensional vector.",
"cite_spans": [
{
"start": 435,
"end": 455,
"text": "(Guo and Diab, 2012)",
"ref_id": "BIBREF0"
},
{
"start": 456,
"end": 474,
"text": "(Guo et al., 2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tweet Latent Vector (TLV)",
"sec_num": "3.3.1"
},
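The pipeline shape described above (weight terms, then project each tweet into a low-dimensional latent space) can be approximated with plain TF-IDF plus truncated SVD. This is an editor's sketch, not the authors' WTMF-based STS model; the function name and the tiny dimensionality are assumptions for illustration:

```python
import numpy as np

def tweet_latent_vectors(tweets, dim=2):
    """Toy analogue of TLV: map short texts to dense latent vectors so
    tweets with few overlapping words can still be compared."""
    vocab = sorted({w for t in tweets for w in t.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    # term-frequency matrix (tweets x vocab)
    tf = np.zeros((len(tweets), len(vocab)))
    for r, t in enumerate(tweets):
        for w in t.lower().split():
            tf[r, index[w]] += 1.0
    # TF-IDF weighting
    df = np.count_nonzero(tf, axis=0)
    X = tf * np.log(len(tweets) / df)
    # truncated SVD gives the dense latent representation
    U, S, _ = np.linalg.svd(X, full_matrices=False)
    dim = min(dim, len(S))
    return U[:, :dim] * S[:dim]
```

The paper's STS model produces 100-dimensional vectors and additionally models the missing words; this sketch only reproduces the general shape of the computation.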
{
"text": "For the belief feature we investigate the level of committed belief in each tweet; belief is a modality in natural language that indicates the author's commitment to a proposition. We relied on the Werner et al. (Werner et al., 2015) belief tagger, which tags Committed Belief (CB), where someone (SW) strongly believes the proposition; Non-Committed Belief (NCB), where SW expresses a weak belief in the proposition; and Non-Attributable Belief (NA), where SW is not (or could not be) expressing a belief in the proposition (e.g., desires, questions, etc.). There is also the ROB tag, where SW's intention is to report on someone else's stated belief, whether or not they themselves believe it. The feature values are set to a binary 0 or 1 for each of CB, NCB, NA, and ROB, corresponding to unseen or observed. The following example illustrates how the belief feature values are created. Did yall <NA>know</NA> 1 in 5 people <CB>thought</CB> obama is a Muslim Feature Values : CB:1 NCB:0 NA:1 ROB:0",
"cite_spans": [
{
"start": 208,
"end": 229,
"text": "(Werner et al., 2015)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Committed Belief",
"sec_num": "3.3.2"
},
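The binary encoding described above can be sketched as follows; the XML-like tag format mirrors the paper's example, while the helper name is an editor's assumption:

```python
import re

# Belief tag inventory from Werner et al. (2015), as used in the paper.
TAGS = ("CB", "NCB", "NA", "ROB")

def belief_features(tagged_tweet):
    """Return a binary feature per belief tag: 1 if the tag is observed
    at least once in the tagged tweet, else 0 (hypothetical helper)."""
    return {tag: 1 if re.search(r"<%s>" % tag, tagged_tweet) else 0
            for tag in TAGS}

example = "Did yall <NA>know</NA> 1 in 5 people <CB>thought</CB> obama is a Muslim"
print(belief_features(example))  # {'CB': 1, 'NCB': 0, 'NA': 1, 'ROB': 0}
```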
{
"text": "Similar to the content lexical features proposed in S15 and V11 we use the bag of word (BOW) feature set comprised of word unigrams. The feature values are set to a binary 0 or 1 for the word unigram vector representative of each tweet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Unigram",
"sec_num": "3.3.3"
},
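A minimal sketch of the binary unigram encoding, with a hypothetical helper and toy vocabulary (the paper's actual vocabulary comes from the training data):

```python
def unigram_features(tweet, vocabulary):
    """Binary bag-of-words vector: 1 if the vocabulary word occurs in
    the tweet, else 0. Whitespace tokenization is a simplification."""
    words = set(tweet.lower().split())
    return [1 if w in words else 0 for w in vocabulary]

vocab = ["obama", "muslim", "church", "palin"]
print(unigram_features("barack obama was raised a christian", vocab))  # [1, 0, 0, 0]
```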
{
"text": "All experiments are conducted and evaluated under various experimental settings. We utilize different data sets, features, and machine learning approaches, which are elaborated in this section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Design",
"sec_num": "4"
},
{
"text": "We conduct our experiments with two data sets: for the RR experiment we use the mixed data set (MIX), which comprises all the data from the five rumors. We split each data set into 80% train, 10% development, and 10% test. For the belief investigation experiment we rely only on the Obama data set. After tagging the silver data with the RR model, we randomly select 400 rumors (200 believer-11 and 200 denier-12) from 2010 (gold data), 2014, and 2016 (silver data), and investigate how tweet writers' beliefs about the Obama rumor have changed in recent years.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "For the RR experiment we adopt three baselines: Majority, S15 features, and the V11 model. The Majority baseline assigns the majority label from the training data to all the test data. For the S15 baseline we perform the RR experiment relying on the features proposed in S15 and shown in Table 6 . We tried different models on the Weka platform and chose SMO, which yielded the highest result in this experiment. We also compare our results with V11, which reported results as Mean Average Precision.",
"cite_spans": [],
"ref_spans": [
{
"start": 303,
"end": 310,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Baseline",
"sec_num": "4.2"
},
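The Majority baseline described above can be sketched in a few lines; the helper name is hypothetical:

```python
from collections import Counter

def majority_baseline(train_labels, test_size):
    """Assign the most frequent training label to every test instance."""
    majority = Counter(train_labels).most_common(1)[0][0]
    return [majority] * test_size

print(majority_baseline([11, 11, 12, 0, 11], 3))  # [11, 11, 11]
```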
{
"text": "For the experiments we employ the SVM Tree Kernel model proposed by Alessandro Moschitti (Moschitti, 2004) . In another experiment, we perform the RR task by applying the S15 features, illustrated in Table 6 , using the SMO classifier in Weka (Hall et al., 2009) .",
"cite_spans": [
{
"start": 96,
"end": 113,
"text": "(Moschitti, 2004)",
"ref_id": "BIBREF4"
},
{
"start": 258,
"end": 277,
"text": "(Hall et al., 2009)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 213,
"end": 220,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Machine Learning Tools",
"sec_num": "4.3"
},
{
"text": "We implement two main experimental pipelines: Rumor Retrieval (RR) and belief investigation. Content and TLV features are employed for the RR task, and we conduct the experiment in two phases. In the development phase we use the development data for tuning; the model with the highest performance is then applied to the test data set. Evaluating the performance of the proposed technique in rumor detection should rely on both the number of relevant rumors that are selected (recall) and the number of selected rumors that are relevant (precision), so both are reported in this work. In another experiment we investigate the belief change in the Obama rumors. We define two scores for analyzing the belief of the rumor poster/writer: T^i_CB and T^i_NCB are defined for each rumor in the Obama data set, where each T^i_BeliefTag corresponds to the number of occurrences of that tag in tweet i. We calculate the belief scores for each Obama rumor data set separately and apply Formula 1 to believer (11) and denier (12) rumors in the Obama data sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluations",
"sec_num": "4.4"
},
{
"text": "#R^{11}_{BeliefTag} / #R^{12}_{BeliefTag}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluations",
"sec_num": "4.4"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Evaluations",
"sec_num": "4.4"
},
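Under the reading that Formula 1 is the ratio of belief-tag counts among believers (label 11) to those among deniers (label 12), the score can be sketched as follows; the function name and data layout are assumptions:

```python
def belief_score(tweets, tag):
    """tweets: list of (label, tag_counts) pairs, where tag_counts maps
    a belief tag (e.g. 'CB') to its occurrence count in that tweet.
    Returns believer-to-denier ratio for the given tag."""
    believers = sum(c.get(tag, 0) for lab, c in tweets if lab == 11)
    deniers = sum(c.get(tag, 0) for lab, c in tweets if lab == 12)
    return believers / deniers if deniers else float("inf")

sample = [(11, {"CB": 2}), (11, {"CB": 1, "NCB": 1}), (12, {"CB": 2, "NCB": 3})]
print(belief_score(sample, "CB"))   # 1.5, believers use more CB words
print(belief_score(sample, "NCB"))  # below one, as for NCB in Figure 1
```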
{
"text": "In this section the impact of different experimental setups are discussed. We first elaborate on each experiments and then compare our methodology with the baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "We perform the RR task by applying two sets of features and compare the results with the three baselines. We perform the RR experiment by employing the gold data set to detect Not-Rumor (0 and 2) and Rumor(11, 12, 13, and 14) in a one-step, two-way classification experiment. For the S15 baseline we applied all 15 features listed in Table 6 . We investigated the performance of different classifiers, including J48 (Decision Tree), Naive Bayes (NB), and SMO, and picked SMO, which outperformed the others. In a similar experiment for the TLV task we employ the TLV and Content features with the SVM Tree Kernel model, which leads to 0.972 precision and 0.99 recall on MIX, and 0.971 precision and 1.0 recall on the Obama gold data set. Table 7 shows that we outperform the other baselines (Majority, S15, and V11) by employing the proposed features. ",
"cite_spans": [
{
"start": 199,
"end": 208,
"text": "Rumor(11,",
"ref_id": null
},
{
"start": 209,
"end": 212,
"text": "12,",
"ref_id": null
},
{
"start": 213,
"end": 216,
"text": "13,",
"ref_id": null
},
{
"start": 217,
"end": 224,
"text": "and 14)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 334,
"end": 341,
"text": "Table 6",
"ref_id": "TABREF5"
},
{
"start": 726,
"end": 733,
"text": "Table 7",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Rumors Retrieval",
"sec_num": "5.1"
},
{
"text": "We propose Formula 1 to measure the belief score for the Obama data set in different years, and investigate how Committed Belief (CB) and Non-Committed Belief (NCB) have changed among rumor believers and deniers from 2010 to 2016. Figure-1 shows the Committed Belief and Non-Committed Belief scores for the three data sets. Scores above one mean that the number of committed-belief words among rumor believers exceeds that among rumor deniers. It is interesting that the CB scores for all three years are higher than one: in 2010, 2014, and 2016 alike, people who believed the \"Obama is Muslim\" rumor expressed more committed belief than those who denied it. On the other hand, the NCB ratio is below one for the same years. NCB indicates that SW expresses a weak belief toward something, so a below-one NCB ratio can be interpreted as deniers expressing weak belief toward the claim that Obama is not a Muslim in 2010, 2014, and 2016. With more data, we could obtain a more accurate picture of belief behavior.",
"cite_spans": [],
"ref_spans": [
{
"start": 246,
"end": 254,
"text": "Figure-1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Belief Analysis",
"sec_num": "5.2"
},
{
"text": "In this paper, we proposed and studied the impact of the Tweet Latent Vector and belief features on the problem of rumor detection in the context of Twitter data. A new set of features is employed in our experiments to boost the overall performance of rumor retrieval, giving better results in comparison to similar work. We also proposed and analyzed a belief change model among rumor believers and deniers by defining the belief score. We plan to expand the proposed methodology, investigate the trustworthiness problem from the belief and sentiment points of view, and apply the model to streaming social media data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
}
],
"back_matter": [
{
"text": "This paper is based upon work supported by the DARPA DEFT Program.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Modeling sentences in the latent space",
"authors": [
{
"first": "Weiwei",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers",
"volume": "1",
"issue": "",
"pages": "864--872",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weiwei Guo and Mona Diab. 2012. Modeling sentences in the latent space. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguis- tics: Long Papers-Volume 1, pages 864-872. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Fast tweet retrieval with compact binary codes",
"authors": [
{
"first": "Weiwei",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mona",
"middle": [
"T"
],
"last": "Diab",
"suffix": ""
}
],
"year": 2014,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "486--496",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weiwei Guo, Wei Liu, and Mona T Diab. 2014. Fast tweet retrieval with compact binary codes. In COL- ING, pages 486-496. Citeseer.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The weka data mining software: an update",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Holmes",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Pfahringer",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Reutemann",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 2009,
"venue": "ACM SIGKDD explorations newsletter",
"volume": "11",
"issue": "1",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H Witten. 2009. The weka data mining software: an update. ACM SIGKDD explorations newsletter, 11(1):10-18.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Rumor detection and classification for twitter data",
"authors": [
{
"first": "Sardar",
"middle": [],
"last": "Hamidiain",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
}
],
"year": 2015,
"venue": "The Fifth International Conference on Social Media Technologies, Communication, and Informatics, SOTICS, IARIA",
"volume": "",
"issue": "",
"pages": "71--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sardar Hamidiain and Mona Diab. 2015. Rumor detec- tion and classification for twitter data. The Fifth In- ternational Conference on Social Media Technologies, Communication, and Informatics, SOTICS, IARIA, pages 71-77.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A study on convolution kernels for shallow semantic parsing",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, page 335. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Moschitti. 2004. A study on convolution ker- nels for shallow semantic parsing. In Proceedings of the 42nd Annual Meeting on Association for Compu- tational Linguistics, page 335. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Rumor has it: Identifying misinformation in microblogs",
"authors": [
{
"first": "Vahed",
"middle": [],
"last": "Qazvinian",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Rosengren",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [
"R"
],
"last": "Radev",
"suffix": ""
},
{
"first": "Qiaozhu",
"middle": [],
"last": "Mei",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1589--1599",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vahed Qazvinian, Emily Rosengren, Dragomir R Radev, and Qiaozhu Mei. 2011. Rumor has it: Identify- ing misinformation in microblogs. In Proceedings of the Conference on Empirical Methods in Natural Lan- guage Processing, pages 1589-1599. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Rumor detection on twitter",
"authors": [
{
"first": "Tatsuro",
"middle": [],
"last": "Takahashi",
"suffix": ""
},
{
"first": "Nobuyuki",
"middle": [],
"last": "Igata",
"suffix": ""
}
],
"year": 2012,
"venue": "Soft Computing and Intelligent Systems (SCIS) and 13th International Symposium on Advanced Intelligent Systems (ISIS), 2012 Joint 6th International Conference on",
"volume": "",
"issue": "",
"pages": "452--457",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tatsuro Takahashi and Nobuyuki Igata. 2012. Rumor detection on twitter. In Soft Computing and Intelligent Systems (SCIS) and 13th International Symposium on Advanced Intelligent Systems (ISIS), 2012 Joint 6th In- ternational Conference on, pages 452-457. IEEE.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic detection and verification of rumors on Twitter",
"authors": [
{
"first": "Soroush",
"middle": [],
"last": "Vosoughi",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soroush Vosoughi. 2015. Automatic detection and ver- ification of rumors on Twitter. Ph.D. thesis, Mas- sachusetts Institute of Technology.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Committed belief tagging on the factbank and lu corpora: A comparative study",
"authors": [
{
"first": "Gregory",
"middle": [
"J"
],
"last": "Werner",
"suffix": ""
},
{
"first": "Vinodkumar",
"middle": [],
"last": "Prabhakaran",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2015,
"venue": "ExProM 2015",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregory J Werner, Vinodkumar Prabhakaran, Mona Diab, and Owen Rambow. 2015. Committed belief tagging on the factbank and lu corpora: A comparative study. ExProM 2015, page 32.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatic detection of rumor on sina weibo",
"authors": [
{
"first": "Fan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xiaohui",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the ACM SIGKDD Workshop on Mining Data Semantics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fan Yang, Yang Liu, Xiaohui Yu, and Min Yang. 2012. Automatic detection of rumor on sina weibo. In Pro- ceedings of the ACM SIGKDD Workshop on Mining Data Semantics, page 13. ACM.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "The CB and NCB score for the Obama Data set in 2010, 2014, and 2016"
},
"TABREF0": {
"html": null,
"text": "List of Annotated Rumors (Qazvinian et al, 2011)",
"type_str": "table",
"content": "<table><tr><td>Rumor</td><td>Rumor Reference</td><td># of tweets</td></tr><tr><td>Obama</td><td>Is Barack Obama muslim?</td><td>4975</td></tr><tr><td>Michele</td><td>Michelle Obama hired many staff</td><td>299</td></tr><tr><td/><td>members?</td><td/></tr><tr><td colspan=\"2\">Cellphone Cell phone numbers going public?</td><td>215</td></tr><tr><td>Palin</td><td>Sarah Palin getting divorced?</td><td>4423</td></tr><tr><td colspan=\"2\">AirFrance Air France mid-air crash photos?</td><td>505</td></tr><tr><td colspan=\"3\">textual features in the aforementioned work. The</td></tr><tr><td colspan=\"3\">most relevant related works to ours are Qazvinian</td></tr><tr><td colspan=\"3\">et al. (Qazvinian et al., 2011)(V11) which use three</td></tr><tr><td colspan=\"3\">sets of features, including content-based, network-</td></tr></table>",
"num": null
},
"TABREF1": {
"html": null,
"text": "Rumor Detection Annotation Guidelines0If the tweet is not about the rumor 11 If the tweet endorses the rumor 12 If the tweet denies the rumor 13 If the tweet questions the rumor 14 If the tweet is neutral 2If the annotator is undetermined",
"type_str": "table",
"content": "<table/>",
"num": null
},
"TABREF2": {
"html": null,
"text": "List Of Annotated Tweets Per Label Per Rumor",
"type_str": "table",
"content": "<table><tr><td>Rumor</td><td>0</td><td>11</td><td>12</td><td colspan=\"3\">13 14 2</td><td>Total</td></tr><tr><td>Obama</td><td>945</td><td>689</td><td>410</td><td colspan=\"4\">160 224 1232 3666</td></tr><tr><td>Michelle</td><td>83</td><td>191</td><td>24</td><td>1</td><td>0</td><td>0</td><td>299</td></tr><tr><td>Palin</td><td>86</td><td colspan=\"5\">1709 1895 639 94 0</td><td>4423</td></tr><tr><td>Cellphone</td><td>92</td><td>65</td><td>3</td><td>3</td><td>3</td><td>0</td><td>166</td></tr><tr><td colspan=\"2\">Air France 306</td><td>71</td><td>114</td><td colspan=\"2\">14 0</td><td>0</td><td>505</td></tr><tr><td>Mix</td><td colspan=\"7\">1512 2725 2452 817 321 1232 9059</td></tr></table>",
"num": null
},
"TABREF3": {
"html": null,
"text": "List of Tweets in Silver Data",
"type_str": "table",
"content": "<table><tr><td>0 non-rumor</td><td>11 Believe</td><td>12 (Deny/ Doubtful/ Neutral)</td><td>Total</td></tr><tr><td>Obama2014 2940</td><td>3055</td><td>678</td><td>3738</td></tr><tr><td>Obama2016 1250</td><td>856</td><td>379</td><td>2485</td></tr></table>",
"num": null
},
"TABREF4": {
"html": null,
"text": "Labels Used in Rumor Retrieval and Rumor Type",
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Classification for Silver Data</td><td/></tr><tr><td>1st Step</td><td/><td>2nd Step</td></tr><tr><td>Method</td><td>Labels</td><td>Labels</td></tr><tr><td colspan=\"2\">(2-way, 2 step) (0,2)(11-14)</td><td>(11)(12,13,14)</td></tr></table>",
"num": null
},
"TABREF5": {
"html": null,
"text": "List of S15 features used for RR Experiment .",
"type_str": "table",
"content": "<table/>",
"num": null
},
"TABREF6": {
"html": null,
"text": "Precision and Recall Of RR task by Employing",
"type_str": "table",
"content": "<table><tr><td colspan=\"4\">TLV+Content Unigram, S15, and V11 is reported as Mean Av-</td></tr><tr><td>erage Precision (MAP)</td><td/><td/><td/></tr><tr><td colspan=\"3\">Data Method S15(pr,rec) V11</td><td>TLV</td></tr><tr><td colspan=\"2\">Majority 0.51,0.71</td><td>-</td><td>-</td></tr><tr><td>MIX RR</td><td>0.94,0.94</td><td colspan=\"2\">0.965 0.972,0.99</td></tr><tr><td colspan=\"2\">Majority 0.27,0.52</td><td>-</td><td>-</td></tr><tr><td>Obama RR</td><td>0.91,0.91</td><td>--</td><td>0.971,1.0</td></tr></table>",
"num": null
}
}
}
}