text (string) | source (string) | __index_level_0__ (int64) |
|---|---|---|
Cities are living systems in which urban infrastructures and their functions are defined and evolve through population behaviors. Profiling cities and their functional regions has long been an important topic in urban design and planning. This paper studies a unique big data set comprising the daily movement data of tens of millions of city residents, and develops a visual analytics system, UrbanFACET, to discover and visualize the dynamic profiles of multiple cities and their residents. This big user movement data set, acquired from mobile users' agnostic check-ins at thousands of phone apps, is utilized in an integrative study and visualization together with urban structure (e.g., road network) and POI (Point of Interest) distributions. In particular, we develop a novel set of information-theoretic metrics to characterize the mobility patterns of city areas and groups of residents. These multifaceted metrics, comprising Fluidity, vibrAncy, Commutation, divErsity, and densiTy (FACET), categorize and manifest hidden urban functions and behaviors. The UrbanFACET system further allows users to visually analyze and compare the metrics over different areas and cities at metropolitan scale. The system is evaluated through both case studies on several large, heavily populated cities and user studies involving real-world users. | UrbanFACET: Visually Profiling Cities from Mobile Device Recorded
Movement Data of Millions of City Residents | 9,400 |
Ridesharing can save energy, reduce pollution, and alleviate congestion. Ridesharing systems allow peer-to-peer ride matching through mobile-phone applications. In many systems, people have little information about the driver and co-riders. Trust heavily influences the usage of a ridesharing system; however, work in this area is limited. In this paper, we investigate the role of social media in shaping trust and delineate factors that influence the decisions of trust placed on a driver or co-rider. We design a ridesharing system where a rider can see the Instagram profile and 10 recent pictures of a co-rider, and we ask riders to make judgments on six trust-related questions. We discovered three important factors that shape trust forming: \textit{social proof}, \textit{social approval}, and \textit{self-disclosure}. These three factors account for 40\%, 33\%, and 13\% of the variance, respectively, and subsequent factors extracted account for substantially less variance. Using these three factors, we successfully build a trust prediction model with an F-measure of 87.4\%, an improvement over the 78.6\% obtained using all collected factors. | Who Will You Share a Ride With: Factors that Influence Trust of
Potential Rideshare Partners | 9,401 |
Viral videos can reach global penetration traveling through international channels of communication, similarly to real diseases starting from a well-localized source. In past centuries, disease fronts propagated in a concentric spatial fashion from the source of the outbreak via the short-range human contact network. The emergence of long-distance air travel changed these ancient patterns. However, Brockmann and Helbing have recently shown that concentric propagation waves can be reinstated if propagation time and distance are measured in the underlying air-travel network, weighted by flight time and travel volume. Here, we adopt this method for the analysis of viral meme propagation in Twitter messages, and define a similar weighted network distance in the communication network connecting countries and states of the world. We recover a wave-like behavior on average and assess the randomizing effect of non-locality of spreading. We show that a similar result can be recovered from Google Trends data as well. | Video Pandemics: Worldwide Viral Spreading of Psy's Gangnam Style Video | 9,402 |
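The effective-distance idea referenced in the entry above is straightforward to reproduce. The sketch below (not the authors' code) builds a toy communication network from hypothetical message volumes, converts them to flux fractions, and computes Brockmann and Helbing's effective distance 1 - log P(m|n) from an assumed outbreak source; all region codes and volumes are illustrative.

```python
# Minimal sketch: effective distance on a weighted communication network,
# following d_eff(m|n) = 1 - log P(m|n), where P(m|n) is the fraction of
# flux leaving n toward m. The flux values below are made up.
import numpy as np
import networkx as nx

flux = {  # hypothetical message volumes between regions
    ("KR", "US"): 500, ("KR", "BR"): 120, ("US", "BR"): 300,
    ("US", "DE"): 250, ("DE", "BR"): 80,
}

G = nx.DiGraph()
for (src, dst), w in flux.items():
    G.add_edge(src, dst, weight=w)
    G.add_edge(dst, src, weight=w)  # assume symmetric volumes

# Convert volumes to flux fractions and effective-distance edge weights.
D = nx.DiGraph()
for n in G:
    total = sum(G[n][m]["weight"] for m in G[n])
    for m in G[n]:
        p = G[n][m]["weight"] / total
        D.add_edge(n, m, eff=1.0 - np.log(p))

# Effective distance from the outbreak source to every other node is the
# shortest path length under the 'eff' weights.
source = "KR"
dist = nx.single_source_dijkstra_path_length(D, source, weight="eff")
print(dist)
```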
Detection of overlapping communities in real-world networks is a generally challenging task. Upon recognizing that a network is in fact the union of its egonets, a novel network representation using multi-way data structures is advocated in this contribution. The introduced sparse tensor-based representation exhibits richer structure compared to its matrix counterpart, and thus enables a more robust approach to community detection. To leverage this structure, a constrained tensor approximation framework is introduced using PARAFAC decomposition. The arising constrained trilinear optimization is handled via alternating minimization, where intermediate subproblems are solved using the alternating direction method of multipliers (ADMM) to ensure convergence. The factors obtained provide soft community memberships, which can further be exploited for crisp, and possibly-overlapping community assignments. The framework is further broadened to include time-varying graphs, where the edgeset as well as the underlying communities evolve through time. Performance of the proposed approach is assessed via tests on benchmark synthetic graphs as well as real-world networks. As corroborated by numerical tests, the proposed tensor-based representation captures multi-hop nodal connections, that is, connectivity patterns within single-hop neighbors, whose exploitation yields a more robust community identification in the presence of mixing as well as overlapping communities. | Identification of Overlapping Communities via Constrained Egonet Tensor
Decomposition | 9,403 |
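As a rough illustration of the representation described in the entry above, the sketch below builds the egonet tensor of a small graph and fits a plain, unconstrained CP/PARAFAC model by alternating least squares, reading soft community memberships off one factor. It is a simplified stand-in: the paper's constrained trilinear optimization and ADMM subproblem solver are not reproduced, and the crude clipping step only hints at non-negativity handling.

```python
# Minimal sketch (not the authors' ADMM solver): egonet tensor + plain CP-ALS.
import numpy as np
import networkx as nx

def unfold(T, mode):
    return np.reshape(np.moveaxis(T, mode, 0), (T.shape[mode], -1), order="F")

def khatri_rao(A, B):
    # column-wise Kronecker product
    return np.einsum("ir,jr->ijr", A, B).reshape(A.shape[0] * B.shape[0], -1)

G = nx.karate_club_graph()
n = G.number_of_nodes()
A_adj = nx.to_numpy_array(G)

# Egonet tensor: slice i holds the adjacency restricted to the egonet of node i.
T = np.zeros((n, n, n))
for i in G:
    ego = list(G[i]) + [i]
    T[i][np.ix_(ego, ego)] = A_adj[np.ix_(ego, ego)]

rank = 2
rng = np.random.default_rng(0)
factors = [rng.uniform(size=(n, rank)) for _ in range(3)]
for _ in range(50):                     # unconstrained ALS (the paper adds constraints via ADMM)
    for m in range(3):
        others = [factors[k] for k in range(3) if k != m][::-1]   # reversed order for Khatri-Rao
        kr = khatri_rao(*others)
        gram = np.ones((rank, rank))
        for k in range(3):
            if k != m:
                gram *= factors[k].T @ factors[k]
        factors[m] = unfold(T, m) @ kr @ np.linalg.pinv(gram)
        factors[m] = np.clip(factors[m], 1e-9, None)              # crude non-negativity

membership = factors[0] / factors[0].sum(axis=1, keepdims=True)   # soft memberships
print(np.argmax(membership, axis=1))                               # crisp community assignment
```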
TV dramas constitute an important part of the entertainment industry, with popular shows attracting millions of viewers and generating significant revenues. Finding a way to formally explore the social dynamics underpinning these shows therefore has important implications, as it would allow us not only to understand which features are most likely to be associated with the popularity of a show, but also to explore the extent to which such fictional worlds have social interactions comparable with the real world. To begin tackling this question, we employed network analysis to systematically and quantitatively explore how the interactions between the noble houses of the fantasy drama TV series Game of Thrones change as the show progresses. Our analysis discloses the invisible threads that connect different houses and shows how tension across the houses, as measured via structural balance, changes over time. To broaden the impact of our analysis, we further explore how different network features correlate with viewers' engagement and appreciation of different episodes. This allowed us to derive a hierarchy of features that are associated with the audience response. All in all, our work shows how network models may be able to capture social relations present in complex artificial worlds, thus providing a way to qualitatively model social interactions among fictional characters and allowing a minimal formal description of the unfolding of stories that can be instrumental in managing complex narratives. | Balance of thrones: a network study on Game of Thrones | 9,404 |
Video games and the playing thereof have been a fixture of American culture since their introduction in the arcades of the 1980s. However, it was not until the recent proliferation of broadband connections robust and fast enough to handle live video streaming that players of video games have transitioned from a content consumer role to a content producer role. Simultaneously, the rise of social media has revealed how interpersonal connections drive user engagement and interest. In this work, we discuss the recent proliferation of video game streaming, particularly on Twitch.tv, analyze trends and patterns in video game viewing, and develop predictive models for determining if a new game will have substantial impact on the streaming ecosystem. | Heroes and Zeroes: Predicting the Impact of New Video Games on Twitch.tv | 9,405 |
Knowledge about the graph structure of the Web is important for understanding this complex socio-technical system and for devising proper policies supporting its future development. Knowledge about the differences between clean and malicious parts of the Web is important for understanding potential threats to its users and for devising protection mechanisms. In this study, we apply data science methods to a large crawl of surface and deep Web pages with the aim of increasing such knowledge. To accomplish this, we answer the following questions. Which theoretical distributions explain important local characteristics and network properties of websites? How do these characteristics and properties differ between clean and malicious (malware-affected) websites? What is the predictive power of local characteristics and network properties for classifying malware websites? To the best of our knowledge, this is the first large-scale study describing the differences in global properties between malicious and clean parts of the Web. In other words, our work builds on and bridges the gap between \textit{Web science}, which tackles large-scale graph representations, and \textit{Web cyber security}, which is concerned with malicious activities on the Web. The results presented herein can also help antivirus vendors in devising approaches to improve their detection algorithms. | Malware distributions and graph structure of the Web | 9,406 |
Modern cities are complex systems, evolving at a fast pace. Thus, many urban planning, political, and economic decisions require a deep and up-to-date understanding of the local context of urban neighborhoods. This study shows that the structure of openly available social media records, such as Twitter, offers a possibility for building a unique dynamic signature of urban neighborhood function, and, therefore, might be used as an efficient and simple decision support tool. Considering New York City as an example, we investigate how Twitter data can be used to decompose the urban landscape into self-defining zones, aligned with the functional properties of individual neighborhoods and their social and economic characteristics. We further explore the potential of these data for detecting events and evaluating their impact over time and space. This approach paves the way to a methodology for immediate quantification of the impact of urban development programs and the estimation of socioeconomic statistics at a finer spatial-temporal scale, thus allowing urban policy-makers to track neighborhood transformations and foresee undesirable changes in order to take early action before official statistics become available. | Twitter Activity Timeline as a Signature of Urban Neighborhood | 9,407 |
With the irruption of ICTs and the crisis of political representation, many online platforms have been developed with the aim of improving participatory democratic processes. However, regarding platforms for online petitioning, previous research has not found examples of how to effectively introduce discussions, a crucial feature for promoting deliberation. In this study we focus on the case of Decidim Barcelona, the online participatory-democracy platform launched by the City Council of Barcelona, in which proposals can be discussed with an interface that combines threaded discussions and comment alignment with the proposal. This innovative approach allows us to examine whether neutral, positive or negative comments are more likely to generate discussion cascades. The results reveal that, with this interface, comments marked as negatively aligned with the proposal were more likely to engage users in online discussions and, therefore, helped to promote deliberative decision making. | Deliberative Platform Design: The case study of the online discussions
in Decidim Barcelona | 9,408 |
We introduce the notion of "seminar users", who are social media users engaged in propaganda in support of a political entity. We develop a framework that can identify such users with 84.4% precision and 76.1% recall. While our dataset is from the Arab region, omitting language-specific features has only a minor impact on classification performance, and thus our approach could work for detecting seminar users in other parts of the world and in other languages. We further explored a controversial political topic to observe the prevalence and potential potency of such users. In our case study, we found that 25% of the users engaged in the topic are in fact seminar users and that their tweets make up nearly a third of the on-topic tweets. Moreover, they are often successful in affecting mainstream discourse with coordinated hashtag campaigns. | Seminar Users in the Arabic Twitter Sphere | 9,409 |
It has been shown that community detection algorithms work better for clustering tasks than other, more popular methods, such as k-means. In fact, network-analysis-based methods often outperform more widely used methods and do not suffer from some of the drawbacks we notice elsewhere, e.g., that the number of clusters k usually has to be known in advance. However, stochastic block models, which are known to perform well for community detection, have not yet been tested for this task. We discuss why these models cannot be directly applied to this problem, test the performance of a generalization of stochastic block models that works on weighted graphs, and compare it to other clustering techniques. | Data clustering using stochastic block models | 9,410 |
The political discourse in Western European countries such as Germany has recently seen a resurgence of the topic of refugees, fueled by an influx of refugees from various Middle Eastern and African countries. Even though the topic of refugees evidently plays a large role in the online and offline politics of the affected countries, the fact that protests against refugees stem from the right-wing political spectrum has led to the corresponding media being shared in a decentralized fashion, making an analysis of the underlying social and media networks difficult. In order to contribute to the analysis of these processes, we present a quantitative study of the social media activities of a contemporary nationwide protest movement against local refugee housing in Germany, which organizes itself via dedicated Facebook pages per city. We analyse data from 136 such protest pages in 2015, containing more than 46,000 posts and more than one million interactions by more than 200,000 users. In order to learn about the patterns of communication and interaction among users of far-right social media sites and pages, we investigate the temporal characteristics of the social media activities of this protest movement, as well as the connectedness of the interactions of its participants. We find that several activity metrics, such as the number of posts issued, discussion volume about crime and housing costs, negative polarity in comments, and user engagement, peak in late 2015, coinciding with Chancellor Angela Merkel's much criticized decision of September 2015 to temporarily admit the entry of Syrian refugees to Germany. Furthermore, our evidence suggests a low degree of direct connectedness of participants in this movement (i.a., indicated by a lack of geographical collaboration patterns), yet we encounter a strong affiliation of the pages' user base with far-right political parties. | 'Dark Germany': Hidden Patterns of Participation in Online Far-Right
Protests Against Refugee Housing | 9,411 |
A wide variety of online platforms use digital badges to encourage users to take certain types of desirable actions. However, despite their growing popularity, their causal effect on users' behavior is not well understood. This is partly due to the lack of counterfactual data and the myriad of complex factors that influence users' behavior over time. As a consequence, their design and deployment lacks general principles. In this paper, we focus on first-time badges, which are awarded after a user takes a particular type of action for the first time, and study their causal effect by harnessing the delayed introduction of several badges in a popular Q&A website. In doing so, we introduce a novel causal inference framework for badges whose main technical innovations are a robust survival-based hypothesis testing procedure, which controls for the utility heterogeneity across users, and a bootstrap difference-in-differences method, which controls for the random fluctuations in users' behavior over time. We find that first-time badges steer users' behavior if the utility a user obtains from taking the corresponding action is sufficiently low, otherwise, the badge does not have a significant effect. Moreover, for badges that successfully steered user behavior, we perform a counterfactual analysis and show that they significantly improved the functioning of the site at a community level. | Harnessing Natural Experiments to Quantify the Causal Effect of Badges | 9,412 |
Humans use various social bonding methods known as social grooming, e.g. face to face communication, greetings, phone, and social networking sites (SNS). SNS have drastically decreased time and distance constraints of social grooming. In this paper, I show that two types of social grooming (elaborate social grooming and lightweight social grooming) were discovered in a model constructed by thirteen communication data-sets including face to face, SNS, and Chacma baboons. The separation of social grooming methods is caused by a difference in the trade-off between the number and strength of social relationships. The trade-off of elaborate social grooming is weaker than the trade-off of lightweight social grooming. On the other hand, the time and effort of elaborate methods are higher than lightweight methods. Additionally, my model connects social grooming behaviour and social relationship forms with these trade-offs. By analyzing the model, I show that individuals tend to use elaborate social grooming to reinforce a few close relationships (e.g. face to face and Chacma baboons). In contrast, people tend to use lightweight social grooming to maintain many weak relationships (e.g. SNS). Humans with lightweight methods who live in significantly complex societies use various social grooming to effectively construct social relationships. | Two Types of Social Grooming Methods depending on the Trade-off between
the Number and Strength of Social Relationships | 9,413 |
The rise of a trending topic on Twitter or Facebook leads to the temporal emergence of a set of users currently interested in that topic. Given the temporary nature of the links between these users, being able to dynamically identify communities of users related to this trending topic would allow for a rapid spread of information. Indeed, individual users inside a community might receive recommendations of content generated by the other users, or the community as a whole could receive group recommendations with new content related to that trending topic. In this paper, we tackle this challenge by identifying coherent topic-dependent user groups, linking those who generate the content (creators) and those who spread this content, e.g., by retweeting/reposting it (distributors). This is a novel problem on group-to-group interactions in the context of recommender systems. Analyses on real-world Twitter data compare our proposal with a baseline approach that considers retweeting activity, and validate it with standard metrics. Results show the effectiveness of our approach in identifying communities interested in a topic, where each community includes content creators and content distributors, facilitating users' interactions and the spread of new information. | Detection of Trending Topic Communities: Bridging Content Creators and
Distributors | 9,414 |
In this paper, we provide a systematic analysis of the Twitter discussion on the 2016 Austrian presidential elections. In particular, we extracted and analyzed a data-set consisting of 343645 Twitter messages related to the 2016 Austrian presidential elections. Our analysis combines methods from network science, sentiment analysis, as well as bot detection. Among other things, we found that: a) the winner of the election (Alexander Van der Bellen) was considerably more popular and influential on Twitter than his opponent, b) the Twitter followers of Van der Bellen substantially participated in the spread of misinformation about him, c) there was a clear polarization in terms of the sentiments spread by Twitter followers of the two presidential candidates, d) the in-degree and out-degree distributions of the underlying communication network are heavy-tailed, and e) compared to other recent events, such as the 2016 Brexit referendum or the 2016 US presidential elections, only a very small number of bots participated in the Twitter discussion on the 2016 Austrian presidential election. | An Analysis of the Twitter Discussion on the 2016 Austrian Presidential
Elections | 9,415 |
In this paper, we propose a unified framework for sampling, clustering and embedding data points in semi-metric spaces. For a set of data points $\Omega=\{x_1, x_2, \ldots, x_n\}$ in a semi-metric space, we consider a complete graph with $n$ nodes and $n$ self edges and then map each data point in $\Omega$ to a node in the graph, with the edge weight between two nodes being the distance between the corresponding two points in $\Omega$. By doing so, several well-known sampling techniques can be applied for clustering data points in a semi-metric space. One particularly interesting sampling technique is exponentially twisted sampling, in which one can specify the desired average distance of the sampling distribution to detect clusters at various resolutions. We also propose a softmax clustering algorithm that can perform clustering and embed data points in a semi-metric space into a low-dimensional Euclidean space. Our experimental results show that after a certain number of iterations of "training", our softmax algorithm can reveal the "topology" of the data from a high-dimensional Euclidean space. We also show that the eigendecomposition of a covariance matrix is equivalent to principal component analysis (PCA). To deal with the hierarchical structure of clusters, our softmax clustering algorithm can also be used with a hierarchical clustering algorithm. For this, we propose a partitional-hierarchical algorithm, called $i$PHD, in this paper. Our experimental results show that algorithms based on the maximization of normalized modularity tend to balance the sizes of detected clusters and thus do not perform well when the ground-truth clusters differ in size. Also, using a metric is better than using a semi-metric, as the triangle inequality is not satisfied for a semi-metric, which makes it more prone to clustering errors. | A Unified Framework for Sampling, Clustering and Embedding Data Points
in Semi-Metric Spaces | 9,416 |
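A minimal sketch of the exponentially twisted sampling idea mentioned above, under the assumption that the twisted pair distribution is proportional to exp(theta * d(x, y)) over a uniform base distribution; the sign and magnitude of theta then control the average sampled distance and hence the clustering resolution. The data are synthetic and the exact form used in the paper may differ.

```python
# Illustrative sketch of exponentially twisted sampling over pairs of points.
import numpy as np

rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 5))
# pairwise (semi-)metric distances
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

def twisted_pair_distribution(d, theta):
    # p_theta(x, y) proportional to exp(theta * d(x, y)); theta < 0 favors
    # nearby pairs (fine resolution), theta > 0 favors distant pairs.
    w = np.exp(theta * d)
    np.fill_diagonal(w, 0.0)
    return w / w.sum()

for theta in (-2.0, 0.0, 2.0):
    p = twisted_pair_distribution(d, theta)
    print(theta, (p * d).sum())   # average sampled distance shrinks as theta decreases
```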
With the rapid increase in online information consumption, especially via social media sites, there have been concerns about whether people are getting selective exposure to a biased subset of the information space, where a user receives more of what she already knows, and thereby potentially gets trapped in echo chambers or filter bubbles. Even though such concerns have been debated for some time, it is not clear how to quantify such echo chamber effects. In this position paper, we introduce Information Segregation (or Informational Segregation) measures, which follow the long line of work on residential segregation. We believe that information segregation nicely captures the notion of exposure to different information by different populations in a society, and would help in quantifying the extent to which social media sites offer selective (or diverse) information to their users. | On Quantifying Knowledge Segregation in Society | 9,417 |
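One concrete way to ground the proposal above is to adapt a classical residential segregation measure, the dissimilarity index, to information exposure. The sketch below does exactly that with hypothetical exposure counts; it illustrates the general idea, not the specific measures defined in the paper.

```python
# Segregation-style measure adapted to information exposure: the exposure
# counts (how often two user groups saw items from each source) are made up.
import numpy as np

exposure = np.array([
    # rows: information sources; columns: [group A views, group B views]
    [900, 100],
    [500, 450],
    [50, 800],
])

a, b = exposure[:, 0], exposure[:, 1]
dissimilarity = 0.5 * np.abs(a / a.sum() - b / b.sum()).sum()
print(f"information dissimilarity index: {dissimilarity:.3f}")  # 0 = identical exposure, 1 = fully segregated
```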
The difficulty of getting medical treatment is one of the major livelihood issues in China. Since patients lack prior knowledge about the spatial distribution and the capacity of hospitals, some hospitals have abnormally high or sporadic population densities. This paper presents a new model for estimating the spatiotemporal population density in each hospital based on location-based service (LBS) big data, which would be beneficial for guiding and dispersing outpatients. To improve the estimation accuracy, several approaches are proposed to denoise the LBS data and classify people by detecting their various behaviors. In addition, a long short-term memory (LSTM) based deep learning model is presented to predict the trend of population density. Using Baidu's large-scale LBS log database, we apply the proposed model to 113 hospitals in Beijing, P. R. China, and construct an online hospital recommendation system which can provide users with a ranked list of hospitals based on real-time population density information and the hospitals' basic information, such as their levels and distances. We also mine several interesting patterns from these LBS logs using our proposed system. | Population Density-based Hospital Recommendation with Mobile LBS Big
Data | 9,418 |
The information release behavior of WeChat users is influenced by many factors, and studying the rules governing the behavior of WeChat users can provide theoretical help for the dynamic research of mobile social network users. By crawling the WeChat Moments information of nine users over 5 years, we used human behavioral dynamics methods to analyze users' behavior. The results show that the information release behavior of WeChat users is consistent with a power-law distribution over a certain period of time. Meanwhile, there is an anti-memory characteristic in the information release behavior of WeChat users, which is significantly different from other user behavior patterns in online social networks. The results of the study provide theoretical support for the further study of the information release behavior of WeChat users. | Research on Human Dynamics of Information Release of WeChat Users | 9,419 |
Given a large graph, how can we determine similarity between nodes in a fast and accurate way? Random walk with restart (RWR) is a popular measure for this purpose and has been exploited in numerous data mining applications including ranking, anomaly detection, link prediction, and community detection. However, previous methods for computing exact RWR require prohibitive storage sizes and computational costs, and alternative methods which avoid such costs by computing approximate RWR have limited accuracy. In this paper, we propose TPA, a fast, scalable, and highly accurate method for computing approximate RWR on large graphs. TPA exploits two important properties of RWR: 1) nodes close to a seed node are likely to be revisited in subsequent steps due to the block-wise structure of many real-world graphs, and 2) RWR scores of nodes which reside far from the seed node are proportional to their PageRank scores. Based on these two properties, TPA divides the approximate RWR problem into two subproblems called neighbor approximation and stranger approximation. In the neighbor approximation, TPA estimates RWR scores of nodes close to the seed based on scores from a few early steps from the seed. In the stranger approximation, TPA estimates RWR scores for nodes far from the seed using their PageRank. The stranger and neighbor approximations are conducted in the preprocessing phase and the online phase, respectively. Through extensive experiments, we show that TPA requires up to 3.5x less time with up to 40x less memory space than other state-of-the-art methods for the preprocessing phase. In the online phase, TPA computes approximate RWR up to 30x faster than existing methods while maintaining high accuracy. | TPA: Fast, Scalable, and Accurate Method for Approximate Random Walk
with Restart on Billion Scale Graphs | 9,420 |
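For reference, the quantity TPA approximates can be computed exactly on small graphs by power iteration. The sketch below is a minimal RWR implementation (restart probability c, seed vector q), not the TPA algorithm itself.

```python
# Exact random walk with restart (RWR) by power iteration on a small graph.
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)
P = A / A.sum(axis=1, keepdims=True)      # row-stochastic transition matrix

def rwr(P, seed, c=0.15, tol=1e-10, max_iter=1000):
    """RWR scores r satisfying r = c * q + (1 - c) * P^T r, restart prob. c."""
    n = P.shape[0]
    q = np.zeros(n); q[seed] = 1.0
    r = q.copy()
    for _ in range(max_iter):
        r_next = c * q + (1 - c) * P.T @ r
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next
    return r

scores = rwr(P, seed=0)
print(np.argsort(-scores)[:5])  # nodes most relevant to the seed
```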
Online social media (OSM) has an enormous influence in today's world. Some individuals view OSM as fertile ground for abuse and use it to disseminate misinformation and political propaganda, slander competitors, and spread spam. The crowdturfing industry employs large numbers of bots and human workers to manipulate OSM and misrepresent public opinion. The detection of online discussion topics manipulated by OSM \emph{abusers} is an emerging issue attracting significant attention. In this paper, we propose an approach for quantifying the authenticity of online discussions based on the similarity of the OSM accounts participating in the discussion to known abusers and legitimate accounts. Our method uses several similarity functions for the analysis and classification of OSM accounts. The proposed methods are demonstrated using Twitter data collected for this study and the previously published \emph{Arabic honeypot dataset}. The former includes manually labeled accounts and abusers who participated in crowdturfing platforms. Evaluation of topic authenticity, derived from the account similarity functions, shows that the suggested approach is effective in discriminating between topics that were strongly promoted by abusers and topics that attracted authentic public interest. | Has the Online Discussion Been Manipulated? Quantifying Online
Discussion Authenticity within Online Social Media | 9,421 |
We consider the problem of identifying the topology of a weighted, undirected network $\mathcal G$ from observing snapshots of multiple independent consensus dynamics. Specifically, we observe the opinion profiles of a group of agents for a set of $M$ independent topics and our goal is to recover the precise relationships between the agents, as specified by the unknown network $\mathcal G$. In order to overcome the under-determinacy of the problem at hand, we leverage concepts from spectral graph theory and convex optimization to unveil the underlying network structure. More precisely, we formulate the network inference problem as a convex optimization that seeks to endow the network with certain desired properties -- such as sparsity -- while being consistent with the spectral information extracted from the observed opinions. This is complemented with theoretical results proving consistency as the number $M$ of topics grows large. We further illustrate our method by numerical experiments, which showcase the effectiveness of the technique in recovering synthetic and real-world networks. | Network Inference from Consensus Dynamics | 9,422 |
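The spectral step described above can be illustrated numerically: snapshots of independent consensus processes have a sample covariance whose eigenvectors approximate those of the unknown Laplacian, and the convex program then selects eigenvalues that yield a sparse, valid graph. The sketch below checks the eigenvector alignment on a synthetic example; the optimization stage and the paper's specific formulation are not reproduced.

```python
# Spectral information extracted from consensus snapshots (illustrative only).
import numpy as np
import networkx as nx
from scipy.linalg import expm

rng = np.random.default_rng(1)
G = nx.erdos_renyi_graph(20, 0.25, seed=1)
L = nx.laplacian_matrix(G).toarray().astype(float)

T, M = 0.1, 100_000                    # observation time and number of topics
H = expm(-T * L)                       # consensus propagator
X = H @ rng.normal(size=(L.shape[0], M))   # opinion snapshots for M topics
C = (X @ X.T) / M                      # sample covariance ~ expm(-2 T L)

_, V_true = np.linalg.eigh(L)
_, V_est = np.linalg.eigh(C)
# Eigenvectors of C (up to sign/order) approximate those of L; a convex
# program can then search for a sparse valid Laplacian with these eigenvectors.
alignment = np.abs(V_true.T @ V_est)
print(np.round(alignment.max(axis=1), 3))   # values near 1: eigenvector well recovered
```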
The popularity of Twitter for information discovery, coupled with the automatic shortening of URLs to save space given the 140-character limit, provides cyber criminals with an opportunity to obfuscate the URL of a malicious Web page within a tweet. Once the URL is obfuscated, the cyber criminal can lure a user to click on it with enticing text and images before carrying out a cyber attack using a malicious Web server. This is known as a drive-by-download. In a drive-by-download, a user's computer system is infected while interacting with the malicious endpoint, often without them being made aware that the attack has taken place. An attacker can gain control of the system by exploiting unpatched system vulnerabilities, and this form of attack currently represents one of the most common methods employed. In this paper, we build a machine learning model using machine activity data and tweet metadata to move beyond post-execution classification of such URLs as malicious, and instead predict that a URL will be malicious with 99.2% F-measure (using 10-fold cross validation) and 83.98% (using an unseen test set) at 1 second into the interaction with the URL. This provides a basis from which to kill the connection to the server before an attack has completed, proactively blocking and preventing an attack rather than reacting and repairing at a later date. | Real Time Prediction of Drive by Download Attacks on Twitter | 9,423 |
Manual annotations are a prerequisite for many applications of machine learning. However, weaknesses in the annotation process itself are easy to overlook. In particular, scholars often choose what information to give to annotators without examining these decisions empirically. For subjective tasks such as sentiment analysis, sarcasm, and stance detection, such choices can impact results. Here, for the task of political stance detection on Twitter, we show that providing too little context can result in noisy and uncertain annotations, whereas providing too strong a context may cause it to outweigh other signals. To characterize and reduce these biases, we develop ConStance, a general model for reasoning about annotations across information conditions. Given conflicting labels produced by multiple annotators seeing the same instances with different contexts, ConStance simultaneously estimates gold standard labels and also learns a classifier for new instances. We show that the classifier learned by ConStance outperforms a variety of baselines at predicting political stance, while the model's interpretable parameters shed light on the effects of each context. | ConStance: Modeling Annotation Contexts to Improve Stance Classification | 9,424 |
The present work proposes the use of social media as a tool for better understanding the relationship between a journalist's social network and the content they produce. Specifically, we ask: what is the relationship between the ideological leaning of a journalist's social network on Twitter and the news content he or she produces? Using a novel dataset linking over 500,000 news articles produced by 1,000 journalists at 25 different news outlets, we show a modest correlation between the ideologies of whom a journalist follows on Twitter and the content he or she produces. This research can provide the basis for greater self-reflection among media members about how they source their stories and how their own practice may be colored by their online networks. For researchers, the findings furnish a novel and important step toward better understanding the construction of media stories and the mechanics of how ideology can play a role in shaping public information. | Exploring the Ideological Nature of Journalists' Social Networks on
Twitter and Associations with News Story Content | 9,425 |
Increasingly, software developers are using a wide array of social collaborative platforms for software development and learning. In this work, we examine the similarities in developers' interests within and across GitHub and Stack Overflow. Our study finds that developers share common interests in GitHub and Stack Overflow: on average, 39% of the GitHub repositories and Stack Overflow questions that a developer participated in fall within their common interests. Developers also share similar interests with other developers who co-participated in activities on the two platforms. In particular, developers who co-commit to and co-pull-request the same GitHub repositories and co-answer the same Stack Overflow questions share more common interests compared to developers who co-participate in other platform activities. | GitHub and Stack Overflow: Analyzing Developer Interests Across Multiple
Social Collaborative Platforms | 9,426 |
Recently, \textit{diffusion history inference} has become an emerging research topic due to its great benefits for various applications; its purpose is to reconstruct the missing histories of information diffusion traces from incomplete observations. Existing methods, however, often focus only on a single information diffusion trace, while in a real-world social network multiple information diffusions often coexist over the same network. In this paper, we propose a novel approach called the Collaborative Inference Model (CIM) for the problem of inferring coexisting information diffusions. By exploiting the synergism between the coexisting information diffusions, CIM holistically models multiple information diffusions as a sparse 4th-order tensor called the Coexisting Diffusions Tensor (CDT), without any prior assumption about diffusion models, and collaboratively infers the histories of the coexisting information diffusions via a low-rank approximation of the CDT with a fusion of heterogeneous constraints generated from additional data sources. To improve efficiency, we further propose an optimized algorithm called the Time Window based Parallel Decomposition Algorithm (TWPDA), which can speed up the inference without compromising accuracy by utilizing the temporal locality of information diffusions. Extensive experiments conducted on real-world and synthetic datasets verify the effectiveness and efficiency of CIM and TWPDA. | Collaborative Inference of Coexisting Information Diffusions | 9,427 |
Opinion mining and demographic attribute inference have many applications in social science. In this paper, we propose models to infer daily joint probabilities of multiple latent attributes from Twitter data, such as political sentiment and demographic attributes. Since it is costly and time-consuming to annotate data for traditional supervised classification, we instead propose scalable Learning from Label Proportions (LLP) models for demographic and opinion inference, using the U.S. Census, national and state political polls, and the Cook partisan voting index as population-level data. In LLP classification settings, the training data is divided into a set of unlabeled bags, where only the label distribution of each bag is known, removing the requirement of instance-level annotations. Our proposed LLP model, Weighted Label Regularization (WLR), provides a scalable generalization of prior work on label regularization to support weights for samples inside bags, which is applicable in this setting where bags are arranged hierarchically (e.g., county-level bags are nested inside of state-level bags). We apply our model to Twitter data collected in the year leading up to the 2016 U.S. presidential election, producing estimates of the relationships among political sentiment and demographics over time and place. We find that our approach closely tracks traditional polling data stratified by demographic category, resulting in error reductions of 28-44% over baseline approaches. We also provide descriptive evaluations showing how the model may be used to estimate interactions among many variables and to identify linguistic temporal variation, capabilities which are typically not feasible using traditional polling methods. | Mining the Demographics of Political Sentiment from Twitter Using
Learning from Label Proportions | 9,428 |
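A minimal sketch of the learning-from-label-proportions idea described above: a logistic model is trained so that each bag's mean predicted probability matches its known label proportion. The bags, features, and uniform bag weights are synthetic stand-ins; the paper's Weighted Label Regularization model, with per-sample weights and hierarchical bags, is not reproduced.

```python
# Label-proportion regularization for a logistic model (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
n, d = 3000, 10
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (1 / (1 + np.exp(-X @ true_w)) > rng.uniform(size=n)).astype(float)

# Bags mimic geographic units whose members are correlated in the latent
# attribute; only bag-level label proportions are available to the learner.
order = np.argsort(X @ true_w)
bags = np.array_split(order, 30)
proportions = np.array([y[idx].mean() for idx in bags])
bag_weights = np.ones(len(bags))       # uniform here; WLR uses richer weighting

w = np.zeros(d)
lr, lam = 0.5, 1e-3
for _ in range(2000):
    grad = lam * w
    for idx, p_b, wt in zip(bags, proportions, bag_weights):
        s = 1 / (1 + np.exp(-X[idx] @ w))   # predicted probabilities in the bag
        resid = s.mean() - p_b              # bag-level proportion mismatch
        grad += wt * resid * (s * (1 - s)) @ X[idx] / len(idx)
    w -= lr * grad

pred = (1 / (1 + np.exp(-X @ w)) > 0.5).astype(float)
print("instance-level accuracy:", (pred == y).mean())
```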
In everyday life, we often observe unusually frequent interactions among people before or during important events; e.g., people send/receive more greetings to/from their friends on holidays than on regular days. We also observe that some videos or hashtags suddenly go viral through people's sharing on online social networks (OSNs). Do these seemingly different phenomena share a common structure? All these phenomena are associated with sudden surges of user interactions in networks, which we call "bursts" in this work. We uncover that the emergence of a burst is accompanied by the formation of triangles in some properly defined networks. This finding motivates us to propose a new and robust method to detect bursts on OSNs. We first introduce a new measure, the "triadic cardinality distribution", corresponding to the fractions of nodes with different numbers of triangles, i.e., triadic cardinalities, in a network. We show that this distribution not only changes when a burst occurs, but is also robust in the sense of being immunized against common spamming social-bot attacks. Hence, by tracking triadic cardinality distributions, we can detect bursts more reliably than by simply counting interactions on OSNs. To avoid handling the massive activity data generated by OSN users during triadic tracking, we design an efficient "sample-estimate" framework to provide maximum likelihood estimates of the triadic cardinality distribution. We propose several sampling methods, and provide insights about their performance differences through both theoretical analysis and empirical experiments on real-world networks. | Tracking Triadic Cardinality Distributions for Burst Detection in
High-Speed Multigraph Streams | 9,429 |
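The triadic cardinality distribution introduced above can be computed exactly on a static graph with a few lines of networkx; the paper's contribution is estimating it from samples of a high-speed stream, which this sketch does not attempt.

```python
# Triadic cardinality distribution: fraction of nodes involved in k triangles.
from collections import Counter
import networkx as nx

G = nx.gnm_random_graph(500, 3000, seed=42)   # stand-in interaction network
tri = nx.triangles(G)                          # triangles per node

counts = Counter(tri.values())
n = G.number_of_nodes()
distribution = {k: counts[k] / n for k in sorted(counts)}
print(distribution)   # a burst shifts mass toward larger triadic cardinalities
```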
Social media mining has become one of the most popular research areas in Big Data with the explosion of social networking information from Facebook, Twitter, LinkedIn, Weibo and so on. Understanding and representing the structure of a social network is key in social media mining. In this paper, we propose the Motif Iteration Model (MIM) to represent the structure of a social network. As the name suggests, the new model is based on the iteration of basic network motifs. In order to better show the properties of the model, a heuristic and greedy algorithm called Vertex Reordering and Arranging (VRA) is proposed by studying the adjacency matrices of the three-vertex undirected network motifs. The algorithm maps the adjacency matrix of a network to a binary image, offering a new perspective on network structure visualization. In summary, this model provides a useful approach towards building a link between images and networks and offers a new way of representing the structure of a social network. | Motif Iteration Model for Network Representation | 9,430 |
Badges are a common, and sometimes the only, method of incentivizing users to perform certain actions on online sites. However, due to many competing factors influencing user temporal dynamics, it is difficult to determine whether the badge had (or will have) the intended effect or not. In this paper, we introduce two complementary approaches for determining badge influence on users. In the first one, we cluster users' temporal traces (represented with point processes) and apply covariates (user features) to regularize results. In the second approach, we first classify users' temporal traces with a novel statistical framework, and then we refine the classification results with a semi-supervised clustering of covariates. Outcomes obtained from an evaluation on synthetic datasets and experiments on two badges from a popular Q&A platform confirm that it is possible to validate, characterize and to some extent predict users affected by the badge. | Determining Impact of Social Media Badges through Joint Clustering of
Temporal Traces and User Features | 9,431 |
Hurricane Sandy was one of the deadliest and costliest hurricanes of the past few decades. Many states experienced significant power outages; however, many people used social media to communicate while having limited or no access to traditional information sources. In this study, we explored the evolution of various communication patterns using machine learning techniques and determined user concerns that emerged over the course of Hurricane Sandy. The original data included ~52M tweets coming from ~13M users between October 14, 2012 and November 12, 2012. We ran a topic model on ~763K tweets from the top 4,029 most frequent users who tweeted about Sandy at least 100 times. We identified 250 well-defined communication patterns based on perplexity. Conversations of the most frequent and relevant users indicate the evolution of numerous storm-phase (warning, response, and recovery) specific topics. People were also concerned about storm location and time, media coverage, and the activities of political leaders and celebrities. We also present the relevant keywords that contributed to each particular pattern of user concerns. Such keywords would be particularly meaningful for targeted information spreading and effective crisis communication in similar major disasters. Each of these words can also be helpful for efficient hash-tagging to reach the target audience as needed via social media. The pattern recognition approach of this study can be used to identify real-time user needs in future crises. | Crisis Communication Patterns in Social Media during Hurricane Sandy | 9,432 |
In a large social network whose members harbor binary sentiments towards an issue, we investigate the asymptotic accuracy of sentiment detection. We model the user sentiments by an Ising Markov random field model and allow the user sentiments to be biased by an external influence. We consider a general supermajority sentiment detection problem and show that the detection accuracy is affected by the network structure, its parameters, as well as the external influence level. | Supermajority Sentiment Detection with External Influence in Large
Social Networks | 9,433 |
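A minimal simulation of the setting described above: binary sentiments drawn from an Ising Markov random field with an external influence field via Gibbs sampling, followed by a supermajority check. Graph, coupling, field strength, and threshold are illustrative choices, not the paper's.

```python
# Gibbs sampling of an Ising model with external field, plus supermajority check.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.watts_strogatz_graph(200, 6, 0.1, seed=0)
J, h = 0.6, 0.2                     # coupling strength and external influence
s = rng.choice([-1, 1], size=G.number_of_nodes())

for _ in range(50):                 # Gibbs sweeps
    for i in G.nodes():
        field = J * sum(s[j] for j in G[i]) + h
        p_up = 1 / (1 + np.exp(-2 * field))   # P(s_i = +1 | neighbors)
        s[i] = 1 if rng.uniform() < p_up else -1

tau = 2 / 3                          # supermajority threshold
frac_pos = (s == 1).mean()
print("fraction positive:", frac_pos, "supermajority:", frac_pos >= tau)
```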
Does social media use have a positive or negative impact on civic engagement? The "slacktivism hypothesis" holds that if citizens use social media for political conversation, those conversations will be fleeting and vapid. Most attempts to answer this question involve public opinion data from the United States, so we offer an examination of an important case from Mexico, where an independent candidate used social media to communicate with the public and eschewed traditional media outlets. He won the race for state governor, defeating candidates from traditional parties and triggering sustained public engagement beyond election day. In our investigation, we analyze over 750,000 posts, comments, and replies over three years of conversations on the Facebook page of "El Bronco". We examine and demonstrate that social media can be used to host a large amount of civic interaction about public life that extends beyond a single political event. | Redes sociales, participación ciudadana y la hipótesis del
slacktivismo: lecciones del caso de "El Bronco" / Social Media, Civic
Engagement, and the Slacktivism Hypothesis: Lessons from Mexico's "El Bronco" | 9,434 |
In this paper, we address the fundamental statistical question: how can you assess the power of an A/B test when the units in the study are exposed to interference? This question is germane to many scientific and industrial practitioners that rely on A/B testing in environments where control over interference is limited. We begin by proving that interference has a measurable effect on its sensitivity, or power. We quantify the power of an A/B test of equality of means as a function of the number of exposed individuals under any interference mechanism. We further derive a central limit theorem for the number of exposed individuals under a simple Bernoulli switching interference mechanism. Based on these results, we develop a strategy to estimate the power of an A/B test when actors experience interference according to an observed network model. We demonstrate how to leverage this theory to estimate the power of an A/B test on units sharing any network relationship, and highlight the utility of our method on two applications - a Facebook friendship network as well as a large Twitter follower network. These results yield, for the first time, the capacity to understand how to design an A/B test to detect, with a specified confidence, a fixed measurable treatment effect when the A/B test is conducted under interference driven by networks. | The power of A/B testing under interference | 9,435 |
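A Monte Carlo sketch of the kind of power calculation discussed above, under a toy Bernoulli "switching" mechanism in which a treated unit may fail to be exposed; effect size, switch probability, and sample sizes are made up, and the paper's analytical results are not reproduced.

```python
# Power of an A/B test of equality of means under diluted exposure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_per_arm, effect, sigma, q = 500, 0.2, 1.0, 0.3   # q: prob. a treated unit is not actually exposed
reps, alpha = 2000, 0.05

rejections = 0
for _ in range(reps):
    exposed = rng.uniform(size=n_per_arm) > q        # Bernoulli switching of exposure
    treated = rng.normal(0, sigma, n_per_arm) + effect * exposed
    control = rng.normal(0, sigma, n_per_arm)
    res = stats.ttest_ind(treated, control)
    rejections += res.pvalue < alpha

print("estimated power under interference:", rejections / reps)
# Re-running with q = 0 recovers the usual (higher) two-sample power.
```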
Why is a given node in a time-evolving graph ($t$-graph) marked as an anomaly by an off-the-shelf detection algorithm? Is it because of the number of its outgoing or incoming edges, or their timings? How can we best convince a human analyst that the node is anomalous? Our work aims to provide succinct, interpretable, and simple explanations of anomalous behavior in $t$-graphs (communications, IP-IP interactions, etc.) while respecting the limited attention of human analysts. Specifically, we extract key features from such graphs, and propose to output a few pair (scatter) plots from this feature space which "best" explain known anomalies. To this end, our work has four main contributions: (a) problem formulation: we introduce an "analyst-friendly" problem formulation for explaining anomalies via pair plots, (b) explanation algorithm: we propose a plot-selection objective and the LookOut algorithm to approximate it with optimality guarantees, (c) generality: our explanation algorithm is both domain- and detector-agnostic, and (d) scalability: we show that LookOut scales linearly on the number of edges of the input graph. Our experiments show that LookOut performs near-ideally in terms of maximizing explanation objective on several real datasets including Enron e-mail and DBLP coauthorship. Furthermore, LookOut produces fast, visually interpretable and intuitive results in explaining "ground-truth" anomalies from Enron, DBLP and LBNL (computer network) data. | LookOut on Time-Evolving Graphs: Succinctly Explaining Anomalies from
Any Detector | 9,436 |
In this paper, we describe {\sc quantitative graph theory} and argue that it is a new graph-theoretical branch of network science, with significantly different features compared to classical graph theory. The main goal of quantitative graph theory is the structural quantification of information contained in complex networks by employing a {\it measurement approach} based on numerical invariants and comparisons. Furthermore, neither the methods nor the networks need to be deterministic; they can be statistical. As such, this complements the field of classical graph theory, which is descriptive and deterministic in nature. We provide examples of how quantitative graph theory can be used for novel applications in the context of the overarching concept of network science. | Quantitative Graph Theory: A new branch of graph theory and in network
science | 9,437 |
Even democracies endowed with the most active free press struggle to maintain the diversity of news coverage. Consolidation and market forces may cause only a few dominant players to control the news cycle. Editorial policies may be biased by corporate ownership relations, narrowing news coverage and focus. To an increasing degree, this problem also applies to social media news distribution, since it is subject to the same socio-economic drivers. To study the effects of consolidation and ownership on news diversity, we model the diversity of Chilean coverage on the basis of ownership records and social media data. We create similarity networks of news outlets on the basis of their ownership and the topics they cover. We then examine the relationships between the topology of ownership networks and content similarity to characterize how ownership affects news coverage. A network analysis reveals that Chilean media is highly concentrated both in terms of ownership as well as in terms of topics covered. Our method can be used to determine which groups of outlets and ownership exert the greatest influence on news coverage. | Power Structure in Chilean News Media | 9,438 |
In this paper, we propose an approach for the detection of clickbait posts in online social media (OSM). Clickbait posts are short catchy phrases that attract a user's attention and entice them to click through to an article. The approach is based on a machine learning (ML) classifier capable of distinguishing between clickbait and legitimate posts published in OSM. The suggested classifier is based on a variety of features, including image-related features, linguistic analysis, and methods for abuser detection. In order to evaluate our method, we used two datasets provided by the Clickbait Challenge 2017. The best performance obtained by the ML classifier was an AUC of 0.8, an accuracy of 0.812, precision of 0.819, and recall of 0.966. In addition, as opposed to previous studies, we found that clickbait post titles are statistically significantly shorter than legitimate post titles. Finally, we found that counting the number of formal English words in the given content is useful for clickbait detection. | Detecting Clickbait in Online Social Media: You Won't Believe How We Did
It | 9,439 |
Customer loyalty is crucial for internet services, since retaining users of a service and extending the time they stay with it is of significance for increasing revenue. Customer retention must be high enough to yield profit for internet service providers. Moreover, the growing volume of rich purchasing interaction feedback helps in uncovering the inner mechanisms of customers' purchase intent. In this work, we exploit rich user interaction data to build a customer retention evaluation model focusing on the return time of a user to a product. Three aspects, namely the consilience between user and product, the sensitivity of the user to price, and the external influence the user might receive, are proposed as drivers of purchase intent and are jointly modeled by a probability model based on Cox's proportional hazard approach. The hazard-based model captures the dynamics of user retention and can conveniently incorporate covariates. Extensive experiments on real-world purchasing data demonstrate the superiority of the proposed model over state-of-the-art algorithms. | A Price Driven Hazard Approach to User Retention | 9,440 |
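A minimal sketch of a Cox proportional-hazards model of return time in the spirit of the entry above, using the third-party lifelines package; the covariate names merely mirror the three stated aspects and the data are synthetic, so this is not the paper's model.

```python
# Cox proportional-hazards fit for return time (requires the `lifelines` package).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "consilience": rng.normal(size=n),
    "price_sensitivity": rng.normal(size=n),
    "external_influence": rng.normal(size=n),
})
# Synthetic hazard: higher consilience and influence, lower price sensitivity
# shorten the time until the user returns to the product.
risk = np.exp(0.8 * df.consilience - 0.5 * df.price_sensitivity + 0.3 * df.external_influence)
df["return_days"] = rng.exponential(30 / risk)            # time until next purchase
df["returned"] = (df["return_days"] < 60).astype(int)     # right-censor at 60 days
df.loc[df.returned == 0, "return_days"] = 60

cph = CoxPHFitter()
cph.fit(df, duration_col="return_days", event_col="returned")
cph.print_summary()   # hazard ratios for each driver of return intent
```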
Social media are now a routine part of political campaigns all over the world. However, studies of the impact of campaigning on social platforms have thus far been limited to cross-sectional datasets from a single election period, which are vulnerable to unobserved variable bias. Hence, empirical evidence on the effectiveness of political social media activity is thin. We address this deficit by analysing a novel panel dataset of political Twitter activity in the 2015 and 2017 elections in the United Kingdom. We find that Twitter-based campaigning does seem to help win votes, a finding which is consistent across a variety of different model specifications including a first difference regression. The impact of Twitter use is small in absolute terms, though comparable with that of campaign spending. Our data also support the idea that effects are mediated through other communication channels, hence challenging the relevance of engaging in an interactive fashion. | Does Campaigning on Social Media Make a Difference? Evidence from
candidate use of Twitter during the 2015 and 2017 UK Elections | 9,441 |
This article presents a novel approach for learning low-dimensional distributed representations of users in online social networks. Existing methods rely on the network structure formed by the social relationships among users to extract these representations. However, the network information can be obsolete, incomplete or dynamically changing. In addition, in some cases, it can be prohibitively expensive to get the network information. Therefore, we propose an alternative approach based on observations from topics being talked on in social networks. We utilise the time information of users adopting topics in order to embed them in a real-valued vector space. Through extensive experiments, we investigate the properties of the representations learned and their efficacy in preserving information about link structure among users. We also evaluate the representations in two different prediction tasks, namely, predicting most likely future adopters of a topic and predicting the geo-location of users. Experiments to validate the proposed methods are performed on a large-scale social network extracted from Twitter, consisting of about 7.7 million users and their activity on around 3.6 million topics over a month-long period. | Learning User Representations in Online Social Networks using Temporal
Dynamics of Information Diffusion | 9,442 |
In parallel with the rise of various mobile technologies, the mobile social network (MSN) service has brought us into an era of mobile social big data, where people are creating new social data every second and everywhere. It is of vital importance for businesses, government, and institutes to understand how people's behaviors in the online cyberspace can affect the underlying computer network, or their offline behaviors at large. To study this problem, we collect a dataset from WeChat Moments, called WeChatNet, which involves 25,133,330 WeChat users with 246,369,415 records of link reposting on their pages. We revisit three network applications based on data analytics over WeChatNet, i.e., information dissemination in mobile cellular networks, network traffic prediction in backbone networks, and mobile population distribution projection. Meanwhile, we discuss the potential research opportunities for developing new applications using the released dataset. | Mobile Social Big Data: WeChat Moments Dataset, Network Applications,
and Opportunities | 9,443 |
Social balance theory describes allowable and forbidden configurations of the topologies of signed directed social appraisal networks. In this paper, we propose two discrete-time dynamical systems that explain how an appraisal network converges to social balance from an initially unbalanced configuration. These two models are based on two different socio-psychological mechanisms respectively: the homophily mechanism and the influence mechanism. Our main theoretical contribution is a comprehensive analysis of both models in three steps. First, we establish the well-posedness and bounded evolution of the interpersonal appraisals. Second, we fully characterize the set of equilibrium points; for both models, each equilibrium network is composed of an arbitrary number of complete subgraphs satisfying structural balance. Third, we establish the equivalence among three distinct properties: non-vanishing appraisals, convergence to all-to-all appraisal networks, and finite-time achievement of social balance. In addition to the theoretical analysis, Monte Carlo validation illustrates how the non-vanishing appraisal condition holds for generic initial conditions in both models. Moreover, numerical comparison between the two models indicates that the homophily-based model might be a more universal explanation for the formation of social balance. Finally, adopting the homophily-based model, we present numerical results on the mediation and globalization of local conflicts, the competition for allies, and the asymptotic formation of a single faction versus two factions. | Dynamic Social Balance and Convergent Appraisals via Homophily and
Influence Mechanisms | 9,444 |
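To make the homophily mechanism in the entry above concrete, here is a small NumPy sketch of one plausible discrete-time homophily-style appraisal update: each agent moves its appraisal of j toward the agreement between its own appraisal row and j's row, and rows are rescaled to stay bounded. This is an illustration only, not the paper's exact recursion, and the random initialization is arbitrary.

```python
import numpy as np

def homophily_update(X, steps=50):
    """Homophily-style sketch: appraisal of j is revised toward the similarity
    between my appraisal row and j's row, then each row is rescaled so the
    entries stay bounded.  Not the paper's exact model."""
    for _ in range(steps):
        Y = X @ X.T                                        # compare appraisal rows
        X = Y / np.abs(Y).max(axis=1, keepdims=True)       # keep appraisals in [-1, 1]
    return X

rng = np.random.default_rng(0)
n = 8
X0 = rng.uniform(-1, 1, size=(n, n))    # generic, initially unbalanced appraisals
np.fill_diagonal(X0, 1.0)               # positive self-appraisal
Xf = homophily_update(X0)
# A structurally balanced sign pattern splits the nodes into (at most two) factions:
print(np.sign(Xf))
```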
A sequence of social sensors estimates an unknown parameter (modeled as a state of nature) by performing Bayesian Social Learning, and myopically optimizes individual reward functions. The decisions of the social sensors contain quantized information about the underlying state. How should a fusion center dynamically incentivize the social sensors for acquiring information about the underlying state? This paper presents five results. First, sufficient conditions on the model parameters are provided under which the optimal policy for the fusion center has a threshold structure. The optimal policy is determined in closed form, and is such that it switches between two exactly specified incentive policies at the threshold. Second, it is shown that the optimal incentive sequence is a sub-martingale, i.e., the optimal incentives increase on average over time. Third, it is shown that it is possible for the fusion center to learn the true state asymptotically by employing a sub-optimal policy; in other words, controlled information fusion with social sensors can be consistent. Fourth, uniform bounds on the average additional cost incurred by the fusion center for employing a sub-optimal policy are provided. This characterizes the trade-off between the cost of information acquisition and consistency for the fusion center. Finally, when it is sufficient to estimate the state with a degree of confidence, uniform bounds on the budget saved by employing policies that guarantee state estimation in finite time are provided. | Controlled Sequential Information Fusion with Social Sensors | 9,445 |
Following the trend of data trading and data publishing, many online social networks have enabled potentially sensitive data to be exchanged or shared on the web. As a result, users' privacy could be exposed to malicious third parties since they are extremely vulnerable to de-anonymization attacks, i.e., the attacker links the anonymous nodes in the social network to their real identities with the help of background knowledge. Previous work in social network de-anonymization mostly focuses on designing accurate and efficient de-anonymization methods. We study this topic from a different perspective and attempt to investigate the intrinsic relation between the attacker's knowledge and the expected de-anonymization gain. One common intuition is that the more auxiliary information the attacker has, the more accurate de-anonymization becomes. However, their relation is much more sophisticated than that. To simplify the problem, we attempt to quantify background knowledge and de-anonymization gain under several assumptions. Our theoretical analysis and simulations on synthetic and real network data show that more background knowledge may not necessarily lead to more de-anonymization gain in certain cases. Though our analysis is based on a few assumptions, the findings still leave intriguing implications for the attacker to make better use of the background knowledge when performing de-anonymization, and for the data owners to better measure the privacy risk when releasing their data to third parties. | Social Network De-anonymization: More Adversarial Knowledge, More Users
Re-Identified? | 9,446 |
Motivated by the study of controlling (curing) epidemics, we consider the spread of an SI process on a known graph, where we have a limited budget to use to transition infected nodes back to the susceptible state (i.e., to cure nodes). Recent work has demonstrated that under perfect and instantaneous information (which nodes are/are not infected), the budget required for curing a graph precisely depends on a combinatorial property called the CutWidth. We show that this assumption is in fact necessary: even a minor degradation of perfect information, e.g., a diagnostic test that is 99% accurate, drastically alters the landscape. Infections that could previously be cured in sublinear time now may require exponential time, or orderwise larger budget to cure. The crux of the issue comes down to a tension not present in the full information case: if a node is suspected (but not certain) to be infected, do we risk wasting our budget to try to cure an uninfected node, or increase our certainty by longer observation, at the risk that the infection spreads further? Our results present fundamental, algorithm-independent bounds that tradeoff budget required vs. uncertainty. | The Cost of Uncertainty in Curing Epidemics | 9,447 |
Polarization in American politics has been extensively documented and analyzed for decades, and the phenomenon became all the more apparent during the 2016 presidential election, where Trump and Clinton depicted two radically different pictures of America. Inspired by this gaping polarization and the extensive use of Twitter during the 2016 presidential campaign, in this paper we take the first step toward measuring polarization in social media and attempt to predict individuals' Twitter following behavior by analyzing their everyday tweets, profile images and posted pictures. As such, we treat polarization as a classification problem and study to what extent Trump followers and Clinton followers on Twitter can be distinguished, which in turn serves as a metric of polarization in general. We apply an LSTM to process tweet features and extract visual features using the VGG neural network. Integrating these two sets of features boosts the overall performance. We are able to achieve an accuracy of 69%, suggesting that the high degree of polarization recorded in the literature has started to manifest itself in social media as well. | How Polarized Have We Become? A Multimodal Classification of Trump
Followers and Clinton Followers | 9,448 |
In many instances one may want to gain situational awareness in an environment by monitoring the content of local social media users. Often the challenge is how to build a set of users from a target location. Here we introduce a method for building such a set of users by using an \emph{expand-classify} approach which begins with a small set of seed users from the target location and then iteratively collects their neighbors and then classifies their locations. We perform this classification using maximum likelihood estimation on a factor graph model which incorporates features of the user profile and also social network connections. We show that maximum likelihood estimation reduces to solving a minimum cut problem on an appropriately defined graph. We are able to obtain several thousand users within a few hours for many diverse locations using our approach. Using geo-located data, we find that our approach typically achieves good accuracy for population centers with less than 500,000 inhabitants, while for larger cities performance degrades somewhat. We also find that our approach is able to collect many more users with higher accuracy than existing search methods. Finally, we show that by studying the content of location specific users obtained with our approach, we can identify the onset of significant social unrest in locations such as the Philippines. | Building a Location-Based Set of Social Media Users | 9,449 |
This paper considers the problem of randomized influence maximization over a Markovian graph process: given a fixed set of nodes whose connectivity graph is evolving as a Markov chain, estimate the probability distribution (over this fixed set of nodes) that samples a node which will initiate the largest information cascade (in expectation). Further, it is assumed that the sampling process affects the evolution of the graph, i.e. the sampling distribution and the transition probability matrix are functionally dependent. In this setup, recursive stochastic optimization algorithms are presented to estimate the optimal sampling distribution for two cases: 1) the transition probabilities of the graph are unknown, but the graph can be observed perfectly; 2) the transition probabilities of the graph are known, but the graph is observed in noise. These algorithms consist of a neighborhood size estimation algorithm combined with a variance reduction method, a Bayesian filter and a stochastic gradient algorithm. Convergence of the algorithms is established theoretically, and numerical results are provided to illustrate how the algorithms work. | Influence Maximization over Markovian Graphs: A Stochastic Optimization
Approach | 9,450 |
To deal with the sheer volume of information and gain competitive advantage, the news industry has started to explore and invest in news automation. In this paper, we present Reuters Tracer, a system that automates end-to-end news production using Twitter data. It is capable of detecting, classifying, annotating, and disseminating news in real time for Reuters journalists without manual intervention. In contrast to other similar systems, Tracer is topic and domain agnostic. It has a bottom-up approach to news detection, and does not rely on a predefined set of sources or subjects. Instead, it identifies emerging conversations from 12+ million tweets per day and selects those that are news-like. Then, it contextualizes each story by adding a summary and a topic to it, estimating its newsworthiness, veracity, novelty, and scope, and geotags it. Designing algorithms to generate news that meets the standards of Reuters journalists in accuracy and timeliness is quite challenging. But Tracer is able to achieve competitive precision, recall, timeliness, and veracity on news detection and delivery. In this paper, we reveal our key algorithm designs and evaluations that helped us achieve this goal, and lessons learned along the way. | Reuters Tracer: Toward Automated News Production Using Large Scale
Social Media Data | 9,451 |
Social networks are the major routes for most individuals to exchange their opinions about new products, social trends and political issues via their interactions. It is often of significant importance to figure out who initially diffused the information, i.e., to find a rumor source or a trend setter. It is known that such a task is highly challenging and that the source detection probability cannot exceed 31 percent for regular trees if we estimate the source from a given diffusion snapshot alone. In practice, finding the source often entails a process of querying that asks "Are you the rumor source?" or "Who told you the rumor?", which increases the chance of detecting the source. In this paper, we consider two kinds of querying: (a) simple batch querying and (b) interactive querying with direction, under the assumption that queries can be untruthful with some probability. We propose estimation algorithms for those queries, and quantify their detection performance and the amount of extra budget due to untruthfulness, analytically showing that querying significantly improves the detection performance. We perform extensive simulations to validate our theoretical findings over synthetic and real-world social network topologies. | Rumor Source Detection under Querying with Untruthful Answers | 9,452 |
Most mammal species have polygynous mating systems. The majority of human marriage systems over civilized history were also polygynous; however, socially imposed monogamy has gradually come to prevail throughout the world. This is difficult to understand, because those most influential in society would themselves benefit from polygyny. Actually, the puzzle of monogamous marriage can be explained by a simple mechanism, which lies in the sexual selection dynamics of civilized human societies, driven by wealth redistribution. The discussion in this paper is mainly based on the approach of social computing, with a combination of experimental and analytical analysis. | Social Computing Based Analysis on Monogamous Marriage Puzzle of Human | 9,453 |
Network embedding assigns nodes in a network to low-dimensional representations and effectively preserves the network structure. Recently, a significant amount of progress has been made toward this emerging network analysis paradigm. In this survey, we focus on categorizing and then reviewing the current development of network embedding methods, and point out future research directions. We first summarize the motivation of network embedding. We discuss the classical graph embedding algorithms and their relationship with network embedding. Afterwards and primarily, we provide a comprehensive overview of a large number of network embedding methods in a systematic manner, covering the structure- and property-preserving network embedding methods, the network embedding methods with side information and the advanced information preserving network embedding methods. Moreover, several evaluation approaches for network embedding and some useful online resources, including network data sets and software, are reviewed as well. Finally, we discuss the framework of exploiting these network embedding methods to build an effective system and point out some potential future directions. | A Survey on Network Embedding | 9,454 |
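As a concrete instance of the structure-preserving methods surveyed above, the following is a minimal DeepWalk-style sketch: uniform random walks over a graph are treated as sentences and fed to skip-gram. It assumes networkx and gensim >= 4.0 are available, and uses the built-in karate club graph purely as toy data; parameters are illustrative.

```python
import random
import networkx as nx
from gensim.models import Word2Vec   # gensim >= 4.0 assumed

def random_walks(G, num_walks=10, walk_len=20, seed=42):
    """Uniform random walks: the 'sentences' fed to skip-gram (DeepWalk-style)."""
    rng = random.Random(seed)
    walks, nodes = [], list(G.nodes())
    for _ in range(num_walks):
        rng.shuffle(nodes)
        for start in nodes:
            walk = [start]
            while len(walk) < walk_len:
                nbrs = list(G.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append([str(v) for v in walk])
    return walks

G = nx.karate_club_graph()
model = Word2Vec(random_walks(G), vector_size=32, window=5, min_count=1, sg=1, epochs=5)
print(model.wv["0"][:5])   # 32-dim embedding of node 0 (first five coordinates)
```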
Our work considers leveraging crowd signals for detecting fake news and is motivated by tools recently introduced by Facebook that enable users to flag fake news. By aggregating users' flags, our goal is to select a small subset of news every day, send them to an expert (e.g., via a third-party fact-checking organization), and stop the spread of news identified as fake by an expert. The main objective of our work is to minimize the spread of misinformation by stopping the propagation of fake news in the network. It is especially challenging to achieve this objective as it requires detecting fake news with high-confidence as quickly as possible. We show that in order to leverage users' flags efficiently, it is crucial to learn about users' flagging accuracy. We develop a novel algorithm, DETECTIVE, that performs Bayesian inference for detecting fake news and jointly learns about users' flagging accuracy over time. Our algorithm employs posterior sampling to actively trade off exploitation (selecting news that maximize the objective value at a given epoch) and exploration (selecting news that maximize the value of information towards learning about users' flagging accuracy). We demonstrate the effectiveness of our approach via extensive experiments and show the power of leveraging community signals for fake news detection. | Fake News Detection in Social Networks via Crowd Signals | 9,455 |
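The exact DETECTIVE algorithm is not reproduced here, but the flavor of posterior sampling over users' flagging accuracy can be sketched with Beta-Bernoulli posteriors: sample each flagger's accuracy from its posterior, score items by the sampled evidence, send the top items to an expert, and update the posteriors with the verdict. All quantities and the scoring rule below are illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users = 1000
alpha = np.ones(n_users)   # pseudo-counts: user's flag agreed with the expert
beta = np.ones(n_users)    # pseudo-counts: user's flag contradicted the expert

def score_news(flaggers):
    """Posterior-sampling score of one item from the users who flagged it
    (higher = more likely fake); a stand-in for the paper's objective."""
    sampled_acc = rng.beta(alpha[flaggers], beta[flaggers])   # explore/exploit
    return sampled_acc.sum()

def update_users(flaggers, expert_says_fake):
    """After an expert verdict, reward users whose flags agreed with it."""
    if expert_says_fake:
        alpha[flaggers] += 1
    else:
        beta[flaggers] += 1

# Pick the k most suspicious items for today's fact-checking budget (toy data).
candidate_flaggers = {0: np.array([1, 5, 7]), 1: np.array([2]), 2: np.array([3, 4, 8, 9])}
k = 1
chosen = sorted(candidate_flaggers, key=lambda i: -score_news(candidate_flaggers[i]))[:k]
update_users(candidate_flaggers[chosen[0]], expert_says_fake=True)   # simulated verdict
print("sent to expert:", chosen)
```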
On social media, profile information such as a user's name, description and location helps others learn about the user. However, this profile information is not fixed: when something changes in a user's life, the profile information may be changed as well. In this study, we focus on changes in users' profile information and analyze the timing of and reasons for these changes on Twitter. The results indicate that the peak of profile information changes occurs in April among Japanese users, while no such trend is observed for English users throughout the year. Our analysis also shows that English users most frequently change their names on their birthdays, while Japanese users change their names as their Twitter engagement and activities decrease over time. | When Do Users Change Their Profile Information on Twitter? | 9,456 |
Singular Value Decomposition (SVD) is a popular approach in various network applications, such as link prediction and network parameter characterization. Incremental SVD approaches are proposed to process newly changed nodes and edges in dynamic networks. However, incremental SVD approaches inevitably suffer from serious error accumulation due to the approximation made in incremental updates. SVD restart is an effective approach to reset the aggregated error, but when to restart SVD for dynamic networks is not addressed in the literature. In this paper, we propose TIMERS, Theoretically Instructed Maximum-Error-bounded Restart of SVD, a novel approach which optimally sets the restart time in order to reduce error accumulation over time. Specifically, we monitor the margin between the reconstruction loss of incremental updates and the minimum loss of the SVD model. To reduce the complexity of monitoring, we theoretically develop a lower bound of the SVD minimum loss for dynamic networks and use the bound to replace the minimum loss in monitoring. By setting a maximum tolerated error as a threshold, we can trigger SVD restart automatically when the margin exceeds this threshold. We prove that the time complexity of our method is linear with respect to the number of local dynamic changes, and our method is general across different types of dynamic networks. We conduct extensive experiments on several synthetic and real dynamic networks. The experimental results demonstrate that our proposed method significantly outperforms the existing methods, reducing the maximum error for dynamic network reconstruction by 27% to 42% when fixing the number of restarts, and reducing the number of restarts by 25% to 50% when fixing the maximum tolerated error. | TIMERS: Error-Bounded SVD Restart on Dynamic Networks | 9,457 |
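The monitoring-and-restart loop can be sketched as below. On a toy matrix the optimal rank-k loss is computed exactly with a full SVD; TIMERS instead replaces it with a theoretical lower bound, and a real system would also update the factors incrementally rather than leave them stale. Sizes, tolerance and the edge-change pattern are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import svds

def rank_k_loss(A, U, S, Vt):
    """Squared Frobenius reconstruction loss of the maintained rank-k factors."""
    return np.linalg.norm(A - U @ np.diag(S) @ Vt, "fro") ** 2

def optimal_rank_k_loss(A, k):
    """Exact optimal rank-k loss (sum of squared trailing singular values).
    Cheap on a toy matrix; TIMERS bounds this from below theoretically."""
    s = np.linalg.svd(A, compute_uv=False)
    return float((s[k:] ** 2).sum())

rng = np.random.default_rng(1)
n, k, tol = 200, 10, 25.0
A = (rng.random((n, n)) < 0.05).astype(float)   # toy dynamic network
U, S, Vt = svds(A, k=k)                         # initial (re)start

for t in range(20):
    flips = rng.integers(0, n, size=(40, 2))    # simulate edge insertions/deletions
    A[flips[:, 0], flips[:, 1]] = 1 - A[flips[:, 0], flips[:, 1]]
    # A real system would incrementally update U, S, Vt here; keeping them stale
    # plays the role of accumulated approximation error.
    margin = rank_k_loss(A, U, S, Vt) - optimal_rank_k_loss(A, k)
    if margin > tol:
        U, S, Vt = svds(A, k=k)                 # restart resets the accumulated error
        print(f"t={t}: margin {margin:.1f} exceeded tolerance, restarted SVD")
```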
Twitter, a microblogging service, is today's most popular platform for communication in the form of short text messages, called Tweets. Users publish content on Twitter either to share news or to express views in daily conversation. Once published, a tweet is seen not only by the interlocutor(s) but potentially by Twitter's worldwide network of users. Based on the impact of a tweet, measured by its likes, retweets and the percentage increase in followers within a given time window, we compute an attention factor for each tweet of the selected user profiles. This factor is used to select the top 1000 Tweets from each user profile to form a document. Topic modelling is then applied to this document to determine the intent of the user behind the Tweets. After topics are modelled, the similarity is determined between the BBC news data-set containing the modelled topic and the user document under evaluation. Finally, we determine the top words for a user, which enables us to find topics that have garnered attention and have been posted recently. The experiment is performed using more than 1.1M Tweets from around 500 Twitter profiles spanning Politics, Entertainment, Sports etc. and hundreds of BBC news articles. The results show that our analysis is effective in finding topics that can serve as suggestions for users to gain higher popularity in the future. | TweetIT- Analyzing Topics for Twitter Users to garner Maximum Attention | 9,458 |
Network embedding has recently attracted lots of attention in data mining. Existing network embedding methods mainly focus on networks with pairwise relationships. In the real world, however, the relationships among data points could go beyond pairwise, i.e., three or more objects are involved in each relationship represented by a hyperedge, thus forming hyper-networks. These hyper-networks pose great challenges to existing network embedding methods when the hyperedges are indecomposable, that is to say, any subset of nodes in a hyperedge cannot form another hyperedge. These indecomposable hyperedges are especially common in heterogeneous networks. In this paper, we propose a novel Deep Hyper-Network Embedding (DHNE) model to embed hyper-networks with indecomposable hyperedges. More specifically, we theoretically prove that any linear similarity metric in embedding space commonly used in existing methods cannot maintain the indecomposability property in hyper-networks, and thus propose a new deep model to realize a non-linear tuplewise similarity function while preserving both local and global proximities in the formed embedding space. We conduct extensive experiments on four different types of hyper-networks, including a GPS network, an online social network, a drug network and a semantic network. The empirical results demonstrate that our method can significantly and consistently outperform the state-of-the-art algorithms. | Structural Deep Embedding for Hyper-Networks | 9,459 |
People are shifting from traditional news sources to online news at an incredibly fast rate. However, the technology behind online news consumption promotes content that confirms the users' existing point of view. This phenomenon has led to polarization of opinions and intolerance towards opposing views. Thus, a key problem is to model information filter bubbles on social media and design methods to eliminate them. In this paper, we use a machine-learning approach to learn a liberal-conservative ideology space on Twitter, and show how we can use the learned latent space to tackle the filter bubble problem. We model the problem of learning the liberal-conservative ideology space of social media users and media sources as a constrained non-negative matrix-factorization problem. Our model incorporates the social-network structure and content-consumption information in a joint factorization problem with shared latent factors. We validate our model and solution on a real-world Twitter dataset consisting of controversial topics, and show that we are able to separate users by ideology with over 90% purity. When applied to media sources, our approach estimates ideology scores that are highly correlated (Pearson correlation 0.9) with ground-truth ideology scores. Finally, we demonstrate the utility of our model in real-world scenarios, by illustrating how the learned ideology latent space can be used to develop exploratory and interactive interfaces that can help users in diffusing their information filter bubble. | Joint Non-negative Matrix Factorization for Learning Ideological Leaning
on Twitter | 9,460 |
During the 2016 US elections, Twitter experienced unprecedented levels of propaganda and fake news through the collaboration of bots and hired persons, the ramifications of which are still being debated. This work proposes an approach to identify the presence of organized behavior in tweets. The Random Forest, Support Vector Machine, and Logistic Regression algorithms are each used to train a model with a data set of 850 records consisting of 299 features extracted from tweets gathered during the 2016 US presidential election. The features represent user and temporal synchronization characteristics designed to capture coordinated behavior. These models are trained to classify tweet sets among the categories: organic vs organized, political vs non-political, and pro-Trump vs pro-Hillary vs neither. The random forest algorithm performs best, with greater than 95% average accuracy and f-measure scores for each category. The most valuable features for classification are identified as user-based features, with media use and marking tweets as favorites being the most dominant. | Organized Behavior Classification of Tweet Sets using Supervised
Learning Methods | 9,461 |
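A minimal scikit-learn sketch of the supervised setup described in the entry above. The feature matrix is random stand-in data of the same shape the paper reports (850 records, 299 features), and the binary labels, estimator settings and scoring choice are assumptions rather than the authors' exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in features: hypothetical user-based and temporal-synchronization
# measurements (e.g. media use rate, favourite-marking rate, posting burstiness).
X = rng.random((850, 299))
y = rng.integers(0, 2, size=850)        # 0 = organic, 1 = organized (toy labels)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print("5-fold F1:", scores.mean())      # ~0.5 on random data; the paper reports >0.95

clf.fit(X, y)
top = np.argsort(clf.feature_importances_)[::-1][:10]
print("most informative feature indices:", top)   # analogous to the paper's feature ranking
```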
Due to its flexibility in modelling data heterogeneity, the heterogeneous information network (HIN) has been adopted to characterize complex and heterogeneous auxiliary data in recommender systems, an approach called HIN based recommendation. It is challenging to develop effective methods for HIN based recommendation in both the extraction and the exploitation of information from HINs. Most HIN based recommendation methods rely on path based similarity, which cannot fully mine latent structure features of users and items. In this paper, we propose a novel heterogeneous network embedding based approach for HIN based recommendation, called HERec. To embed HINs, we design a meta-path based random walk strategy to generate meaningful node sequences for network embedding. The learned node embeddings are first transformed by a set of fusion functions, and subsequently integrated into an extended matrix factorization (MF) model. The extended MF model together with the fusion functions is jointly optimized for the rating prediction task. Extensive experiments on three real-world datasets demonstrate the effectiveness of the HERec model. Moreover, we show the capability of the HERec model for the cold-start problem, and reveal that the transformed embedding information from HINs can improve the recommendation performance. | Heterogeneous Information Network Embedding for Recommendation | 9,462 |
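The meta-path based walk generator at the heart of the entry above can be sketched in a few lines: walks are constrained to follow a repeating node-type pattern, and the resulting sequences would then be fed to a skip-gram model. The tiny network, node names and the User-Item-User pattern below are hypothetical.

```python
import random

random.seed(0)

# Tiny heterogeneous information network (hypothetical): users 'u*' and items 'm*'.
neighbors = {
    "u1": ["m1", "m2"], "u2": ["m1"], "u3": ["m2", "m3"],
    "m1": ["u1", "u2"], "m2": ["u1", "u3"], "m3": ["u3"],
}
node_type = {n: n[0] for n in neighbors}          # 'u' = user, 'm' = item

def meta_path_walk(start, pattern=("u", "m"), length=9):
    """Random walk constrained to the alternating U-M-U-M... pattern, i.e. the
    User-Item-User meta-path repeated; sequences like this feed skip-gram."""
    walk = [start]
    while len(walk) < length:
        wanted = pattern[len(walk) % len(pattern)]        # node type required next
        cands = [v for v in neighbors[walk[-1]] if node_type[v] == wanted]
        if not cands:
            break
        walk.append(random.choice(cands))
    return walk

print(meta_path_walk("u1"))    # e.g. ['u1', 'm2', 'u3', 'm3', 'u3', ...]
```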
A tennis ranking needs to be understood by all tennis supporters, yet the ATP ranking is exposed to constant complaints from players, even as it allows new players to benefit from a single good tournament in order to start progressing in their careers. Moreover, the ATP ranking is not powerful enough to predict with certainty who will win a match if we rely solely on ranking positions. To address these problems, we propose a new ranking that indicates a player's real chances of victory before the start of a new tournament. Based on the PageRank method, developed by Larry Page and Sergey Brin, we create a new ranking that specifically uses tournament characteristics to generate its data. Using a history of 40,000 matches, we evaluate how the new method performs compared with other existing rankings, in order to analyze whether it really offers an improved and more faithful reflection of player strength. Once we have obtained the ranking, we take a sample match and use the rankings of the players who contest it, together with the characteristics of that match, to indicate the precise probability that the higher-ranked player wins. | "TenisRank": A new ranking of tennis players based on PageRank | 9,463 |
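The basic construction behind a PageRank-style tennis ranking can be sketched with networkx: each match adds a directed edge from loser to winner, so credit flows toward players who beat well-connected opponents. The players, results and tournament weights below are made up, and the paper's actual use of tournament characteristics may differ.

```python
import networkx as nx

# Hypothetical match history: (loser, winner, tournament weight).
matches = [
    ("Player B", "Player A", 2.0),   # A beat B in a high-weight tournament
    ("Player C", "Player A", 1.0),
    ("Player C", "Player B", 1.0),
    ("Player D", "Player C", 0.5),
    ("Player A", "Player D", 0.5),   # an upset: D beat A
]

G = nx.DiGraph()
for loser, winner, w in matches:
    if G.has_edge(loser, winner):
        G[loser][winner]["weight"] += w
    else:
        G.add_edge(loser, winner, weight=w)

rating = nx.pagerank(G, alpha=0.85, weight="weight")
for player, score in sorted(rating.items(), key=lambda kv: -kv[1]):
    print(f"{player}: {score:.3f}")
```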
The cross-site linking function is widely adopted by online social networks (OSNs). This function allows a user to link her account on one OSN to her accounts on other OSNs. Thus, users are able to sign in with the linked accounts, share contents among these accounts and import friends from them. It leads to the service integration of different OSNs. This integration not only makes it convenient for users to manage accounts on different OSNs, but also brings benefits to the OSNs that adopt the cross-site linking function. In this paper, we investigate this usefulness based on users' data collected from a popular OSN called Medium. We conduct a thorough analysis of its social graph, and find that the service integration brought by the cross-site linking function is able to change Medium's social graph structure and attract a large number of new users. However, almost none of the new users would become high PageRank users (PageRank is used to measure a user's influence in an OSN). To solve this problem, we build a machine-learning-based model to predict high PageRank users in Medium based on their Twitter data only. This model achieves a high F1-score of 0.942 and a high area under the curve (AUC) of 0.986. Based on it, we design a system to assist new OSNs to identify and attract high PageRank users from other well-established OSNs through the cross-site linking function. | Understanding Service Integration of Online Social Networks: A
Data-Driven Study | 9,464 |
In this paper, we interpret the community question answering websites on the StackExchange platform as knowledge markets, and analyze how and why these markets can fail at scale. A knowledge market framing allows site operators to reason about market failures, and to design policies to prevent them. Our goal is to provide insights on large-scale knowledge market failures through an interpretable model. We explore a set of interpretable economic production models on a large empirical dataset to analyze the dynamics of content generation in knowledge markets. Amongst these, the Cobb-Douglas model best explains empirical data and provides an intuitive explanation for content generation through concepts of elasticity and diminishing returns. Content generation depends on user participation and also on how specific types of content (e.g. answers) depends on other types (e.g. questions). We show that these factors of content generation have constant elasticity---a percentage increase in any of the inputs leads to a constant percentage increase in the output. Furthermore, markets exhibit diminishing returns---the marginal output decreases as the input is incrementally increased. Knowledge markets also vary on their returns to scale---the increase in output resulting from a proportionate increase in all inputs. Importantly, many knowledge markets exhibit diseconomies of scale---measures of market health (e.g., the percentage of questions with an accepted answer) decrease as a function of number of participants. The implications of our work are two-fold: site operators ought to design incentives as a function of system size (number of participants); the market lens should shed insight into complex dependencies amongst different content types and participant actions in general social networks. | The Size Conundrum: Why Online Knowledge Markets Can Fail at Scale | 9,465 |
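The Cobb-Douglas framing in the entry above amounts to a log-log regression, so elasticities and returns to scale can be estimated with ordinary least squares. A minimal NumPy sketch on made-up monthly figures follows; the numbers and variable choices are purely illustrative.

```python
import numpy as np

# Hypothetical monthly observations for one knowledge market.
questions = np.array([120, 300, 800, 1500, 4000, 9000], dtype=float)
answerers = np.array([ 40, 110, 260,  500, 1200, 2500], dtype=float)
answers   = np.array([150, 380, 900, 1600, 3900, 8200], dtype=float)

# Cobb-Douglas: answers = c * questions^a * answerers^b, which is linear in logs.
X = np.column_stack([np.ones_like(questions), np.log(questions), np.log(answerers)])
coef, *_ = np.linalg.lstsq(X, np.log(answers), rcond=None)
log_c, a, b = coef
print(f"elasticities: questions={a:.2f}, answerers={b:.2f}")
print("returns to scale:", "increasing" if a + b > 1 else "decreasing/constant", f"({a + b:.2f})")
```

Constant elasticity means a one percent increase in an input yields a fixed percentage increase in output; a + b below one is the diseconomies-of-scale regime the paper highlights.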
Given a time-evolving graph, how can we track similarity between nodes in a fast and accurate way, with theoretical guarantees on the convergence and the error? Random Walk with Restart (RWR) is a popular measure to estimate the similarity between nodes and has been exploited in numerous applications. Many real-world graphs are dynamic with frequent insertion/deletion of edges; thus, tracking RWR scores on dynamic graphs in an efficient way has aroused much interest among data mining researchers. Recently, dynamic RWR models based on the propagation of scores across a given graph have been proposed, and have succeeded in outperforming previous other approaches to compute RWR dynamically. However, those models fail to guarantee exactness and convergence time for updating RWR in a generalized form. In this paper, we propose OSP, a fast and accurate algorithm for computing dynamic RWR with insertion/deletion of nodes/edges in a directed/undirected graph. When the graph is updated, OSP first calculates offset scores around the modified edges, propagates the offset scores across the updated graph, and then merges them with the current RWR scores to get updated RWR scores. We prove the exactness of OSP and introduce OSP-T, a version of OSP which regulates a trade-off between accuracy and computation time by using error tolerance {\epsilon}. Given restart probability c, OSP-T guarantees to return RWR scores with O ({\epsilon} /c ) error in O (log ({\epsilon}/2)/log(1-c)) iterations. Through extensive experiments, we show that OSP tracks RWR exactly up to 4605x faster than existing static RWR method on dynamic graphs, and OSP-T requires up to 15x less time with 730x lower L1 norm error and 3.3x lower rank error than other state-of-the-art dynamic RWR methods. | Fast and Accurate Random Walk with Restart on Dynamic Graphs with
Guarantees | 9,466 |
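For reference, below is the standard static Random Walk with Restart computed by power iteration, i.e. the baseline that dynamic methods such as the one above avoid rerunning from scratch. The adjacency matrix, restart probability and tolerance are arbitrary, and the offset-propagation step itself is only indicated in a comment.

```python
import numpy as np

def rwr(A, seed, c=0.15, eps=1e-9, max_iter=1000):
    """Plain power-iteration Random Walk with Restart from a seed node.
    OSP avoids recomputing this after every graph change: it propagates only
    the 'offset' scores caused by the modified edges and merges them in."""
    n = A.shape[0]
    out_deg = A.sum(axis=1)
    P = A / np.where(out_deg[:, None] == 0, 1, out_deg[:, None])   # row-stochastic
    q = np.zeros(n); q[seed] = 1.0
    r = q.copy()
    for _ in range(max_iter):
        r_new = (1 - c) * P.T @ r + c * q
        if np.abs(r_new - r).sum() < eps:
            break
        r = r_new
    return r

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(np.round(rwr(A, seed=0), 3))   # proximity of every node to node 0
```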
Social media has changed the landscape of marketing and consumer research, as the adoption and promotion of businesses is becoming more and more dependent on how customers interact with and feel about the business on platforms like Facebook, Twitter, Yelp etc. Social review websites like Yelp have become an important source of information about different businesses. Social influence on these online platforms can result in individuals adopting or promoting ideas and actions, resulting in information cascades. Research on information cascades has been gaining popularity over the last few years, but most of it has focused on platforms like Twitter and Facebook. Research on the adoption or promotion of products using cascades can help determine important latent patterns of social influence. In this work, we have analyzed the spread of information, i.e. cascades, in Yelp across different cities in Europe and North America. We have enumerated and analyzed different cascade topologies that occur in the Yelp social networks. Some of our significant findings include the presence of a significant number of cascades in Yelp reviews, indicating the importance of social influence, the heavy-tailed distribution of cascades, and the possibility of accurately predicting the size of cascades on the basis of initial reviews. In addition, we have found that the characteristics of the non-root nodes and non-root reviews are a much more important type of feature than the properties of the root nodes and root reviews of the cascade. These findings can help social scientists analyze customer behavior across different cities in a much more systematic way. Furthermore, they can also help the businesses in a city figure out different consumer trends and hence improve their processes and offerings. | Cascading Behavior in Yelp Reviews | 9,467 |
Information flows are the result of constant exchange in Online Social Networks (OSNs). OSN users create and share varying types of information in real time throughout the day. Virality is introduced as a term to describe information that reaches a wide audience within a small time-frame. As a case study, we measure the propagation of information submitted to Reddit, identify different patterns and present a multi-OSN diffusion analysis on Twitter, Facebook, and two hosting domains for images and multimedia, ImgUr and YouTube. Our results indicate that positive content is the most shared and presents the highest virality probability, and that the overall virality probability of user-created information is low. Finally, we underline the problems of limited access to OSN data. Keywords: Online Social Networks, Virality, Diffusion, Viral Content, Reddit, Twitter, Facebook, ImgUr, YouTube | Viral content propagation in Online Social Networks | 9,468 |
Many social media researchers and data scientists collected geo-tagged tweets to conduct spatial analysis or identify spatiotemporal patterns of filtered messages for specific topics or events. This paper provides a systematic view to illustrate the characteristics (data noises, user biases, and system errors) of geo-tagged tweets from the Twitter Streaming API. First, we found that a small percentage (1%) of active Twitter users can create a large portion (16%) of geo-tagged tweets. Second, there is a significant amount (57.3%) of geo-tagged tweets located outside the Twitter Streaming API's bounding box in San Diego. Third, we can detect spam, bot, cyborg tweets (data noises) by examining the "source" metadata field. The portion of data noises in geo-tagged tweets is significant (29.42% in San Diego, CA and 53.47% in Columbus, OH) in our case study. Finally, the majority of geo-tagged tweets are not created by the generic Twitter apps in Android or iPhone devices, but by other platforms, such as Instagram and Foursquare. We recommend a multi-step procedure to remove these noises for the future research projects utilizing geo-tagged tweets. | Identifying Data Noises, User Biases, and System Errors in Geo-tagged
Twitter Messages (Tweets) | 9,469 |
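The noise-removal step recommended in the entry above can start with a simple filter on each tweet's "source" metadata field. The whitelist below is illustrative (the paper's noise categories are spam, bot and cyborg accounts plus third-party apps such as Instagram and Foursquare), and note that raw Twitter API payloads wrap the source value in an HTML anchor tag that would need stripping first.

```python
# Illustrative whitelist of generic Twitter clients; everything else is flagged.
TRUSTED_SOURCES = {"Twitter for iPhone", "Twitter for Android", "Twitter Web Client"}

def split_by_source(tweets):
    """Partition tweets into (kept, noise) based on the 'source' field.
    Assumes the HTML anchor around the source has already been stripped."""
    kept, noise = [], []
    for tw in tweets:
        (kept if tw.get("source") in TRUSTED_SOURCES else noise).append(tw)
    return kept, noise

tweets = [
    {"id": 1, "source": "Twitter for iPhone", "text": "..."},
    {"id": 2, "source": "Instagram", "text": "..."},
    {"id": 3, "source": "SuperSpamBot 3000", "text": "..."},
]
kept, noise = split_by_source(tweets)
print(len(kept), "kept,", len(noise), "flagged as noise or third-party traffic")
```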
The advent of social networks poses severe threats to user privacy, as adversaries can de-anonymize users' identities by mapping them to correlated cross-domain networks. Without a ground-truth mapping, prior literature proposes various cost functions in the hope of measuring the quality of mappings. However, there is generally a lack of rationale behind the cost functions, whose minimizer also remains algorithmically unknown. We jointly tackle the above concerns under a more practical social network model parameterized by overlapping communities, which, neglected by prior art, can serve as side information for de-anonymization. Regarding the unavailability of the ground-truth mapping to adversaries, by virtue of the Minimum Mean Square Error (MMSE), our first contribution is a well-justified cost function minimizing the expected number of mismatched users over all possible true mappings. While proving the NP-hardness of minimizing MMSE, we validly transform it into the weighted-edge matching problem (WEMP), which, as disclosed theoretically, resolves the tension between optimality and complexity: (i) WEMP asymptotically returns a negligible mapping error for large network sizes under mild conditions facilitated by higher overlapping strength; (ii) WEMP can be algorithmically characterized via the convex-concave based de-anonymization algorithm (CBDA), which perfectly finds the optimum of WEMP. Extensive experiments further confirm the effectiveness of CBDA under overlapping communities: on average 90% of users are re-identified in the rare true cross-domain co-author networks when communities overlap densely, and the re-identification ratio improves by roughly 70% compared to non-overlapping cases. | De-anonymizing Social Networks with Overlapping Community Structure | 9,470 |
Community detection is an important information mining task to uncover modular structures in large networks. For increasingly common large network data sets, global community detection is prohibitively expensive, and attention has shifted to methods that mine local communities, i.e. identifying all latent members of a particular community from a few labeled seed members. To address such semi-supervised mining tasks, we systematically develop a local spectral subspace-based community detection method, called LOSP. We define a family of local spectral subspaces based on Krylov subspaces, and seek a sparse indicator for the target community via an $\ell_1$ norm minimization over the Krylov subspace. Variants of LOSP depend on the type of random walk (with different diffusion speeds), the dimension of the local spectral subspace, and the number of diffusion steps. The effectiveness of the proposed LOSP approach is theoretically analyzed based on Rayleigh quotients, and it is experimentally verified on a wide variety of real-world networks across social, production and biological domains, as well as on an extensive set of synthetic LFR benchmark datasets. | Krylov Subspace Approximation for Local Community Detection in Large
Networks | 9,471 |
Hiring the head coach of a college sports team is a vital decision that greatly influences the later development of the team. However, most attention has been focused on each coach's individual features, and a systematic, quantitative analysis of the whole coach hiring market is lacking. In a coach hiring network, the coaches are actually voting with their feet, and it is interesting to analyze which factors affect the "footprint" left by those head coaches. In this paper, we collect more than 12,000 head coach hiring records in two popular sports from the NCAA. Using network-based methods, we build the coach hiring networks of NCAA men's basketball and football. We find that: (1) the coach hiring network shows great inequality in coach production, with a Gini coefficient close to 0.60; (2) coaches prefer to work within the same geographical region and the same division as their alma maters; (3) the coach production rankings we calculate using network-based methods are generally correlated with the authoritative rankings, but also show disagreement in specific time periods. The results provide a novel view and a better understanding of the coach hiring market in the NCAA and shed new light on the coach hiring system. | Inequalities, Preferences and Rankings in US Sports Coach Hiring
Networks | 9,472 |
Sociotechnological and geospatial processes exhibit time varying structure that make insight discovery challenging. To detect abnormal moments in these processes, a definition of `normal' must be established. This paper proposes a new statistical model for such systems, modeled as dynamic networks, to address this challenge. It assumes that vertices fall into one of k types and that the probability of edge formation at a particular time depends on the types of the incident nodes and the current time. The time dependencies are driven by unique seasonal processes, which many systems exhibit (e.g., predictable spikes in geospatial or web traffic each day). The paper defines the model as a generative process and an inference procedure to recover the `normal' seasonal processes from data when they are unknown. An outline of anomaly detection experiments to be completed over Enron emails and New York City taxi trips is presented. | Seasonal Stochastic Blockmodeling for Anomaly Detection in Dynamic
Networks | 9,473 |
The rapid growth of location-based services (LBSs) has greatly enriched people's urban lives and attracted millions of users in recent years. Location-based social networks (LBSNs) allow users to check in at a physical location and share daily tips on points-of-interest (POIs) with their friends anytime and anywhere. Such check-in behavior can make daily real-life experiences spread quickly through the Internet. Moreover, such check-in data in LBSNs can be fully exploited to understand the basic laws of human daily movement and mobility. This paper focuses on reviewing the taxonomy of user modeling for POI recommendations through the data analysis of LBSNs. First, we briefly introduce the structure and data characteristics of LBSNs, and then we present a formalization of user modeling for POI recommendations in LBSNs. Depending on which type of LBSN data is fully utilized in the user modeling approaches for POI recommendations, we divide user modeling algorithms into four categories: pure check-in data-based user modeling, geographical information-based user modeling, spatio-temporal information-based user modeling, and geo-social information-based user modeling. Finally, summarizing the existing works, we point out future challenges and new directions in five possible aspects. | User modeling for point-of-interest recommendations in location-based
social networks: the state-of-the-art | 9,474 |
People can be characterized by their demographic information and personality traits. Characterizing people accurately can help predict their preferences, and aid recommendations and advertising. A growing number of studies infer people's characteristics from behavioral data. However, context factors make behavioral data noisy, making these data harder to use for predictive analytics. In this paper, we demonstrate how to employ causal identification on feature selection and how to predict individuals' characteristics based on these selected features. We use visitors' choice data from a large theme park, combined with personality measurements, to investigate the causal relationship between visitors' characteristics and their choices in the park. We demonstrate the benefit of feature selection based on causal identification in a supervised prediction task for individual characteristics. Based on our evaluation, our models that trained with features selected based on causal identification outperformed existing methods. | Causal Feature Selection for Individual Characteristics Prediction | 9,475 |
Social media is becoming popular for news consumption due to its fast dissemination, easy access, and low cost. However, it also enables the wide propagation of fake news, i.e., news with intentionally false information. Detecting fake news is an important task, which not only ensures that users receive authentic information but also helps maintain a trustworthy news ecosystem. The majority of existing detection algorithms focus on finding clues from news contents, which is generally not effective because fake news is often intentionally written to mislead users by mimicking true news. Therefore, we need to explore auxiliary information to improve detection. The social context during the news dissemination process on social media forms an inherent tri-relationship, the relationship among publishers, news pieces, and users, which has the potential to improve fake news detection. For example, partisan-biased publishers are more likely to publish fake news, and low-credibility users are more likely to share fake news. In this paper, we study the novel problem of exploiting social context for fake news detection. We propose a tri-relationship embedding framework TriFN, which models publisher-news relations and user-news interactions simultaneously for fake news classification. We conduct experiments on two real-world datasets, which demonstrate that the proposed approach significantly outperforms other baseline methods for fake news detection. | Beyond News Contents: The Role of Social Context for Fake News Detection | 9,476 |
Thanks to their name recognition and popularity, celebrities play an important role in American politics. Celebrity endorsements could add to the momentum of a politician's campaign and win the candidate extensive media coverage. There is one caveat though: the political preference of celebrity followers might differ from that of the celebrity. In this paper we explore that possibility. By carefully studying six prominent endorsements to the leading presidential candidates in the 2016 U.S. presidential election and statistically modeling Twitter "follow" behavior, we show (1) followers of all the celebrities with the exception of Lady Gaga are more likely to follow a large number of candidates and (2) the opinion of celebrity followers could systematically differ from that of the celebrity. Our methodology can be generalized to the study of such events as NBA players' refusing to visit the White House and pop singers' meeting with Dalai Lama. | When Celebrities Endorse Politicians: Analyzing the Behavior of
Celebrity Followers in the 2016 U.S. Presidential Election | 9,477 |
This paper proposes an attributed network growth model. Despite the knowledge that individuals use limited resources to form connections to similar others, we lack an understanding of how local and resource-constrained mechanisms explain the emergence of rich structural properties found in real-world networks. We make three contributions. First, we propose a parsimonious and accurate model of attributed network growth that jointly explains the emergence of in-degree distributions, local clustering, clustering-degree relationship and attribute mixing patterns. Second, our model is based on biased random walks and uses local processes to form edges without recourse to global network information. Third, we account for multiple sociological phenomena: bounded rationality, structural constraints, triadic closure, attribute homophily, and preferential attachment. Our experiments indicate that the proposed Attributed Random Walk (ARW) model accurately preserves network structure and attribute mixing patterns of six real-world networks; it improves upon the performance of eight state-of-the-art models by a statistically significant margin of 2.5-10x. | Growing Attributed Networks through Local Processes | 9,478 |
This paper uses vine copula to analyze the multivariate statistical dependence in a massive YouTube dataset consisting of 6 million videos over 25 thousand channels. Specifically we study the statistical dependency of 7 YouTube meta-level metrics: view count, number of likes, number of comments, length of video title, number of subscribers, click rates, and average percentage watching. Dependency parameters such as Kendall's tau and tail dependence coefficients are computed to evaluate the pair-wise dependence of these meta-level metrics. The vine copula model yields several interesting dependency structures. We show that view count and number of likes are in the central position of the dependence structure. Conditioned on these two metrics, the other five meta-level metrics are virtually independent of each other. Also, Sports, Gaming, Fashion, and Comedy videos have dependence structures similar to each other, while the News category exhibits a strong tail dependence. We also study Granger causality effects and upload dynamics and their impact on view count. Our findings provide a useful understanding of user engagement on YouTube. | Dependence Structure Analysis Of Meta-level Metrics in YouTube Videos: A
Vine Copula Approach | 9,479 |
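Before any copula is fit, the pairwise rank dependence the entry above relies on can be summarized with a Kendall's tau matrix; a small SciPy sketch follows. The synthetic metrics below are stand-ins for the paper's seven YouTube meta-level metrics, and fitting the vine copula itself would be a separate step.

```python
import numpy as np
import pandas as pd
from scipy.stats import kendalltau

# Hypothetical per-video metrics loosely mimicking heavy-tailed YouTube data.
rng = np.random.default_rng(0)
base = rng.lognormal(mean=8, sigma=1.5, size=500)
df = pd.DataFrame({
    "views": base,
    "likes": base * rng.uniform(0.01, 0.05, 500),
    "comments": base * rng.uniform(0.001, 0.01, 500),
    "title_len": rng.integers(10, 100, 500),
})

cols = df.columns
tau = pd.DataFrame(np.eye(len(cols)), index=cols, columns=cols)
for i, a in enumerate(cols):
    for b in cols[i + 1:]:
        t, _ = kendalltau(df[a], df[b])
        tau.loc[a, b] = tau.loc[b, a] = t
print(tau.round(2))   # pairwise rank dependence; copula fitting would come next
```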
Influence Maximization (IM) aims to maximize the number of people that become aware of a product by finding the `best' set of `seed' users to initiate the product advertisement. Unlike prior arts on static social networks containing fixed number of users, we undertake the first study of IM in more realistic evolving networks with temporally growing topology. The task of evolving IM ({\bfseries EIM}), however, is far more challenging over static cases in the sense that seed selection should consider its impact on future users and the probabilities that users influence one another also evolve over time. We address the challenges through $\mathbb{EIM}$, a newly proposed bandit-based framework that alternates between seed nodes selection and knowledge (i.e., nodes' growing speed and evolving influences) learning during network evolution. Remarkably, $\mathbb{EIM}$ involves three novel components to handle the uncertainties brought by evolution: | Evolving Influence Maximization in Evolving Networks | 9,480 |
Content polluters, or bots that hijack a conversation for political or advertising purposes are a known problem for event prediction, election forecasting and when distinguishing real news from fake news in social media data. Identifying this type of bot is particularly challenging, with state-of-the-art methods utilising large volumes of network data as features for machine learning models. Such datasets are generally not readily available in typical applications which stream social media data for real-time event prediction. In this work we develop a methodology to detect content polluters in social media datasets that are streamed in real-time. Applying our method to the problem of civil unrest event prediction in Australia, we identify content polluters from individual tweets, without collecting social network or historical data from individual accounts. We identify some peculiar characteristics of these bots in our dataset and propose metrics for identification of such accounts. We then pose some research questions around this type of bot detection, including: how good Twitter is at detecting content polluters and how well state-of-the-art methods perform in detecting bots in our dataset. | Real-time Detection of Content Polluters in Partially Observable Twitter
Networks | 9,481 |
In this dataset paper we describe our work on the collection and analysis of public WhatsApp group data. Our primary goal is to explore the feasibility of collecting and using WhatsApp data for social science research. We therefore present a generalisable data collection methodology, and a publicly available dataset for use by other researchers. To provide context, we perform statistical exploration to allow researchers to understand what public WhatsApp group data can be collected and how this data can be used. Given the widespread use of WhatsApp, our techniques to obtain public data and potential applications are important for the community. | WhatsApp, Doc? A First Look at WhatsApp Public Group Data | 9,482 |
This work is a technical approach to modeling false information nature, design, belief impact and containment in multi-agent networks. We present a Bayesian mathematical model for source information and viewer's belief, and how the former impacts the latter in a media (network) of broadcasters and viewers. Given the proposed model, we study how a particular information (true or false) can be optimally designed into a report, so that on average it conveys the most amount of the original intended information to the viewers of the network. Consequently, the model allows us to study susceptibility of a particular group of viewers to false information, as a function of statistical metrics of the their prior beliefs (e.g. bias, hesitation, open-mindedness, credibility assessment etc.). In addition, based on the same model we can study false information "containment" strategies imposed by network administrators. Specifically, we study a credibility assessment strategy, where every disseminated report must be within a certain distance of the truth. We study the trade-off between false and true information-belief convergence using this scheme which leads to ways for optimally deciding how truth sensitive an information dissemination network should operate. | A Bayesian Model for False Information Belief Impact, Optimal Design,
and Fake News Containment | 9,483 |
It is often said that constraints affect creative production, both in terms of form and quality. Online social media platforms frequently impose constraints on the content that users can produce, limiting the range of possible contributions. Do these restrictions tend to push creators towards producing more or less successful content? How do creators adapt their contributions to fit the limits imposed by social media platforms? To answer these questions, we conduct an observational study of a recent event: on November 7, 2017, Twitter changed the maximum allowable length of a tweet from 140 to 280 characters, thereby significantly altering its signature constraint. In the first study of this switch, we compare tweets with nearly or exactly 140 characters before the change to tweets of the same length posted after the change. This setup enables us to characterize how users alter their tweets to fit the constraint and how this affects their tweets' success. We find that in response to a length constraint, users write more tersely, use more abbreviations and contracted forms, and use fewer definite articles. Also, although in general tweet success increases with length, we find initial evidence that tweets made to fit the 140-character constraint tend to be more successful than similar-length tweets written when the constraint was removed, suggesting that the length constraint improved tweet quality. | How Constraints Affect Content: The Case of Twitter's Switch from 140 to
280 Characters | 9,484 |
Social media is a rich source of user behavior and opinions. Twitter sees nearly 500 million tweets per day from 328 million users. An appropriate machine learning pipeline over this information enables up-to-date and cost-effective data collection for a wide variety of domains such as social science, public health, the wisdom of the crowd, etc. In many of these domains, users' demographic information is key to identifying the segments of the population being studied. For instance: which age groups are observed to abuse which drugs? Which ethnicities are most affected by depression in each location? Twitter in its current state does not require users to provide any demographic information. We propose to create a machine learning system coupled with the DBpedia graph that predicts the most probable age of a Twitter user. In the process of building an age prediction model using social media text and user meta-data, we explore the existing state-of-the-art approaches. Detailing our data collection, feature engineering cycle, model selection and evaluation pipeline, we exhibit the efficacy of our approach by comparing it with the "predict mean" age estimator baseline. | What's my age?: Predicting Twitter User's Age using Influential Friend
Network and DBpedia | 9,485 |
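The "predict mean" baseline mentioned above is straightforward to reproduce; the sketch below, with synthetic features and ages standing in for the tweet text and user metadata (both hypothetical), shows how such a baseline is typically compared against a learned age estimator.

```python
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix (already-vectorized text + profile metadata)
# and known ages for a labeled subset of users.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = 18 + 40 * rng.random(500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

baseline = DummyRegressor(strategy="mean").fit(X_tr, y_tr)   # "predict mean" baseline
model = Ridge(alpha=1.0).fit(X_tr, y_tr)                     # any learned estimator

print("baseline MAE:", mean_absolute_error(y_te, baseline.predict(X_te)))
print("model MAE:   ", mean_absolute_error(y_te, model.predict(X_te)))
```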
The internet has enabled collaborations at a scale never before possible, but the best practices for organizing such large collaborations are still not clear. Wikipedia is a visible and successful example of such a collaboration which might offer insight into what makes large-scale, decentralized collaborations successful. We analyze the relationship between the structural properties of WikiProject coeditor networks and the performance and efficiency of those projects. We confirm the existence of an overall performance-efficiency trade-off, while observing that some projects are higher than others in both performance and efficiency, suggesting the existence of factors correlating positively with both. Namely, we find an association between low-degree coeditor networks and both high performance and high efficiency. We also confirm results seen in previous numerical and small-scale lab studies: higher performance with less skewed node distributions, and higher performance with shorter path lengths. We use agent-based models to explore possible mechanisms for degree-dependent performance and efficiency. We present a novel local-majority learning strategy designed to satisfy properties of real-world collaborations. The local-majority strategy as well as a localized conformity-based strategy both show degree-dependent performance and efficiency, but in opposite directions, suggesting that these factors depend on both network structure and learning strategy. Our results suggest possible benefits to decentralized collaborations made of smaller, more tightly-knit teams, and that these benefits may be modulated by the particular learning strategies in use. | Network Structure, Efficiency, and Performance in WikiProjects | 9,486
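The specific local-majority strategy used in that work is not detailed in the abstract; the sketch below is a generic agent-based version, assuming each agent repeatedly adopts the majority opinion of its neighbors, so that a performance measure (fraction of agents holding the better option) and an efficiency proxy (rounds to convergence) can be compared across low- and high-degree networks.

```python
import random
import networkx as nx

def local_majority_run(G, p_correct=0.55, max_rounds=50, seed=0):
    """Each agent starts with a noisy private guess (1 = 'better' option)
    and then repeatedly adopts the majority opinion among its neighbors."""
    rng = random.Random(seed)
    state = {v: 1 if rng.random() < p_correct else 0 for v in G}
    for rounds in range(1, max_rounds + 1):
        new_state = {}
        for v in G:
            nbrs = list(G[v])
            if not nbrs:
                new_state[v] = state[v]
                continue
            ones = sum(state[u] for u in nbrs)
            if 2 * ones > len(nbrs):
                new_state[v] = 1
            elif 2 * ones < len(nbrs):
                new_state[v] = 0
            else:
                new_state[v] = state[v]        # ties keep the current opinion
        if new_state == state:
            break
        state = new_state
    performance = sum(state.values()) / len(state)   # fraction holding the better option
    return performance, rounds                        # rounds to convergence ~ (in)efficiency

low_deg = nx.random_regular_graph(4, 100, seed=1)
high_deg = nx.random_regular_graph(20, 100, seed=1)
print("low degree: ", local_majority_run(low_deg))
print("high degree:", local_majority_run(high_deg))
```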
As of 2018, YouTube, the major online video sharing website, hosts multiple channels promoting right-wing content. In this paper, we observe issues related to hate, violence and discriminatory bias in a dataset containing more than 7,000 videos and 17 million comments. We investigate similarities and differences between users' comments and video content in a selection of right-wing channels and compare it to a baseline set using a three-layered approach, in which we analyze (a) lexicon, (b) topics and (c) implicit biases present in the texts. Among other results, our analyses show that right-wing channels tend to (a) contain a higher degree of words from "negative" semantic fields, (b) raise more topics related to war and terrorism, and (c) demonstrate more discriminatory bias against Muslims (in videos) and towards LGBT people (in comments). Our findings shed light not only on the collective conduct of the YouTube community promoting and consuming right-wing content, but also on the general behavior of YouTube users. | Analyzing Right-wing YouTube Channels: Hate, Violence and Discrimination | 9,487
The conventional way of summarizing the ratings or review sentiment of customers of an online shopping brand's products is not sufficient to evaluate the financial health of that brand, because it overlooks the social standing and influence of individual customers. In this paper, we propose a tool, named Review Network, for measuring the influence of customers on online merchandise sites such as Amazon.com. Using this measured influence, we propose a method that evaluates the loyalty of a brand's customers based on their ratings and the sentiment of their reviews collected from online merchandise sites. The review network of a brand is built from all reviews of all products of that brand, where nodes are customers and an edge is created if a customer becomes a potential reader of a review written by another customer. The centrality of a customer in that review network represents her influence. Our proposed method, named Social Promoter Score, combines the loyalty and centrality of all customers of a brand. We compare our method with a baseline approach based on the concept of the Net Promoter Score. We apply Social Promoter Score to an Amazon.com review dataset of some well-known brands. Results show that Social Promoter Score predicts the financial health of a brand, in terms of future sales, much better than the baseline method. We notice that, in general, the effects of Social Promoter Score are reflected in product sales within one to five months. | Social Promoter Score (SPS) and Review Network: A Method and a Tool for
Predicting Financial Health of an Online Shopping Brand | 9,488 |
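The abstract does not give the exact Social Promoter Score formula, so the snippet below is only a rough sketch of the general recipe it describes: build a review network, take each reviewer's centrality as influence, and aggregate influence-weighted loyalty at the brand level. The graph, loyalty values, and aggregation rule are all hypothetical.

```python
import networkx as nx

# Hypothetical review network of one brand: an edge (u, v) means customer v
# is a potential reader of a review written by customer u.
G = nx.DiGraph()
G.add_edges_from([("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
                  ("carol", "dave"), ("dave", "alice")])

# Hypothetical loyalty in [-1, 1], e.g. rescaled star ratings / review sentiment.
loyalty = {"alice": 0.8, "bob": -0.4, "carol": 0.5, "dave": 0.1}

# Influence = centrality of the reviewer in the review network.
influence = nx.pagerank(G)

# One plausible brand-level score: centrality-weighted average loyalty,
# loosely analogous to a promoter-minus-detractor aggregate.
sps = sum(influence[c] * loyalty[c] for c in G) / sum(influence.values())
print(round(sps, 3))
```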
The goal of this work is to systematically extract information from hacker forums, whose content is in general unstructured: the text of a post does not necessarily follow any writing rules. By contrast, many security initiatives and commercial entities harness readily available public information, but they seem to focus on structured sources. Here, we focus on the problem of identifying malicious IP addresses among the IP addresses reported in the forums. We develop a method to automate the identification of malicious IP addresses, with the design goal of being independent of external sources. A key novelty is that we use a matrix decomposition method to extract latent features of the behavioral information of the users, which we combine with textual information from the related posts. A key design feature of our technique is that it can readily be applied to forums in different languages, since it does not require a sophisticated Natural Language Processing approach. In particular, our solution only needs a small number of keywords in the new language plus the users' behavior captured by specific features. We also develop a tool to automate data collection from security forums. Using our tool, we collect approximately 600K posts from 3 different forums. Our method exhibits high classification accuracy, while the precision of identifying malicious IPs in posts is greater than 88% in all three forums. We argue that our method can provide significantly more information: we find up to 3 times more potentially malicious IP addresses compared to the reference blacklist VirusTotal. As cyber-wars become more intense, early access to useful information becomes more imperative to remove the hackers' first-mover advantage, and our work is a solid step in this direction. | Mining actionable information from security forums: the case of
malicious IP addresses | 9,489 |
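The exact decomposition and features are not specified in the abstract; the following sketch, with entirely synthetic data, illustrates the general pipeline it describes: factorize a user-behavior matrix to obtain latent behavioral features, concatenate them with a small set of keyword counts, and train a classifier to flag posts reporting malicious IP addresses.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_posts = 300

# Hypothetical behavioral matrix: one row per post's author
# (e.g. posting frequency, thread participation, account age buckets, ...).
behavior = rng.random((n_posts, 12))

# Hypothetical language-specific keyword counts per post (small keyword list).
keyword_counts = rng.integers(0, 5, size=(n_posts, 8)).astype(float)

# Hypothetical labels: does the post report a malicious IP address?
labels = rng.integers(0, 2, size=n_posts)

# Latent behavioral features via non-negative matrix factorization.
latent = NMF(n_components=5, init="nndsvda", random_state=0).fit_transform(behavior)

X = np.hstack([latent, keyword_counts])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```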
Emojis have been widely used in textual communications as a new way to convey nonverbal cues. An interesting observation is the variety of emoji usage patterns among different users. In this paper, we investigate the correlation between user personality traits and their emoji usage patterns, particularly overall amounts and specific preferences. To achieve this goal, we build a large Twitter dataset which includes 352,245 users and over 1.13 billion tweets associated with calculated personality traits and emoji usage patterns. Our correlation and emoji prediction results provide insights into how diverse personalities lead to varied emoji usage patterns, as well as into the potential of personality traits for emoji recommendation tasks. | Mining the Relationship between Emoji Usage Patterns and Personality | 9,490
In recent years, Twitter has seen a proliferation of automated accounts or bots that send spam, offer clickbait, compromise security using malware, and attempt to skew public opinion. Previous research estimates that around 9% to 17% of Twitter accounts are bots, contributing between 16% and 56% of tweets on the medium. This paper introduces an unsupervised approach to detect Twitter spam campaigns in real time. The bot groups we detect tweet duplicate content with shortened embedded URLs over extended periods of time. Our experiments with the detection protocol reveal that bots consistently account for 10% to 50% of tweets generated from 7 popular URL shortening services on Twitter. More importantly, we discover that bots using shortened URLs are connected to large-scale spam campaigns that control thousands of domains. There appear to be two distinct mechanisms used to control bot groups, and we investigate both in this paper. Our detection system runs 24/7, actively collects bots involved in spam campaigns, and adds them to an evolving database of malicious bots. We make our database of detected bots available for query through a REST API so others can filter tweets from malicious bots to get high-quality Twitter datasets for analysis. | An Unsupervised Approach to Detect Spam Campaigns that Use Botnets on
Twitter | 9,491 |
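The full detection protocol is not described in the abstract; a minimal version of its core idea, assuming only that campaign bots post near-identical text containing shortened URLs, could look like the sketch below (the shortener list and threshold are illustrative).

```python
import re
from collections import defaultdict

SHORTENER = re.compile(r"https?://(bit\.ly|t\.co|goo\.gl|tinyurl\.com|ow\.ly)/\S+")

def normalize(text):
    """Lowercase and mask shortened URLs so near-identical spam payloads collide."""
    return re.sub(r"\s+", " ", SHORTENER.sub("<url>", text.lower())).strip()

def duplicate_campaigns(tweets, min_accounts=3):
    """tweets: iterable of (user_id, text). Returns candidate spam campaigns:
    identical payloads with a shortened URL posted by many distinct accounts."""
    groups = defaultdict(set)
    for user, text in tweets:
        if SHORTENER.search(text):
            groups[normalize(text)].add(user)
    return {payload: users for payload, users in groups.items()
            if len(users) >= min_accounts}

sample = [("u1", "Lose weight fast!! http://bit.ly/abc"),
          ("u2", "Lose weight fast!! http://bit.ly/xyz"),
          ("u3", "lose weight fast!!  http://bit.ly/qqq"),
          ("u4", "Just enjoying my coffee")]
print(duplicate_campaigns(sample))
```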
Analysis and visualization of an information network can be better facilitated using an appropriate embedding of the network. Network embedding learns a compact low-dimensional vector representation for each node of the network and uses this lower-dimensional representation for different network analysis tasks. A majority of current embedding algorithms consider only the structure of the network. However, in most practical applications some content is associated with each node, which can help to understand the underlying semantics of the network. It is not straightforward to integrate the content of each node into the current state-of-the-art network embedding methods. In this paper, we propose a nonnegative matrix factorization based optimization framework, namely FSCNMF, which considers both the network structure and the content of the nodes while learning a lower-dimensional representation of each node in the network. Our approach systematically regularizes structure based on content and vice versa, to exploit the consistency between structure and content to the best possible extent. We further extend the basic FSCNMF to an advanced method, namely FSCNMF++, to capture the higher-order proximities in the network. We conduct experiments on real-world information networks for different types of machine learning applications such as node clustering, visualization, and multi-class classification. The results show that our method can represent the network significantly better than the state-of-the-art algorithms and improve performance across all the applications that we consider. | FSCNMF: Fusing Structure and Content via Non-negative Matrix
Factorization for Embedding Information Networks | 9,492 |
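The actual FSCNMF objective and update rules are not reproduced here; the following is a simplified coupled-factorization sketch in the same spirit, assuming the adjacency matrix and a node-content matrix are each factorized non-negatively while their node factors are pulled toward each other, optimized with plain projected gradient descent.

```python
import numpy as np

def fscnmf_sketch(A, C, k=8, alpha=1.0, lr=5e-4, iters=300, seed=0):
    """Coupled non-negative factorizations of structure (A ~ B1 @ B2) and
    content (C ~ U1 @ U2), with the node factors B1 and U1 regularized toward
    each other. Projected gradient descent; B1 is returned as the embedding."""
    rng = np.random.default_rng(seed)
    n, d = A.shape[0], C.shape[1]
    B1, B2 = rng.random((n, k)), rng.random((k, n))
    U1, U2 = rng.random((n, k)), rng.random((k, d))
    for _ in range(iters):
        Ra, Rc = A - B1 @ B2, C - U1 @ U2            # structure / content residuals
        gB1 = -2 * Ra @ B2.T + 2 * alpha * (B1 - U1)
        gB2 = -2 * B1.T @ Ra
        gU1 = -2 * Rc @ U2.T - 2 * alpha * (B1 - U1)
        gU2 = -2 * U1.T @ Rc
        B1 = np.clip(B1 - lr * gB1, 0, None)          # project back onto nonnegatives
        B2 = np.clip(B2 - lr * gB2, 0, None)
        U1 = np.clip(U1 - lr * gU1, 0, None)
        U2 = np.clip(U2 - lr * gU2, 0, None)
    return B1

# Tiny example: 20-node graph with random binary adjacency and 5-dim node content.
rng = np.random.default_rng(1)
A = (rng.random((20, 20)) < 0.15).astype(float)
C = rng.random((20, 5))
print(fscnmf_sketch(A, C).shape)   # (20, 8)
```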
For any company, multiple channels are available for reaching a population in order to market its products. Some of the most well-known channels are (a) mass media advertisement, (b) recommendations using social advertisement, and (c) viral marketing using social networks. The company would want to maximize its reach while also accounting for the simultaneous marketing of competing products, where the product marketing campaigns may not be independent. In this direction, we propose and analyze a multi-featured generalization of the classical linear threshold model. We then develop a framework for integrating the considered marketing channels into the social network, and an approach for allocating budget among these channels. | An Integrated Framework for Competitive Multi-channel Marketing of
Multi-featured Products | 9,493 |
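For reference, the classical (single-product, single-channel) linear threshold diffusion that the proposed framework generalizes can be simulated in a few lines; the multi-featured, competitive extension of the paper is not reproduced here.

```python
import random
import networkx as nx

def linear_threshold(G, seeds, seed=0):
    """Classical linear threshold diffusion: node v activates once the total
    weight of its active in-neighbors exceeds its random threshold."""
    rng = random.Random(seed)
    threshold = {v: rng.random() for v in G}
    # Each incoming edge of v gets weight 1/in-degree, so incoming weights sum to 1.
    weight = {v: 1.0 / max(G.in_degree(v), 1) for v in G}
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in G:
            if v in active:
                continue
            pressure = sum(weight[v] for u in G.predecessors(v) if u in active)
            if pressure >= threshold[v]:
                active.add(v)
                changed = True
    return active

G = nx.gnp_random_graph(100, 0.05, seed=2, directed=True)
print("final adoption:", len(linear_threshold(G, seeds=[0, 1, 2])))
```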
Novelty is a key ingredient of innovation, but quantifying it is difficult. This is especially true for visual work like graphic design. Using designs shared on an online social network of professional digital designers, we measure visual novelty using statistical learning methods to compare an image's features with those of images that have been created before. We then relate social network position to the novelty of the designers' images. We find that on this professional platform, users with dense local networks tend to produce more novel but generally less successful images, with important exceptions. Namely, users making novel images while embedded in cohesive local networks are more successful. | And Now for Something Completely Different: Visual Novelty in an Online
Network of Designers | 9,494 |
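The paper's exact novelty measure is not given in the abstract; one common way to operationalize "compare an image's features with those of images created before" is a nearest-neighbor distance in a learned feature space, as in the hypothetical sketch below (the random feature vectors stand in for, e.g., activations of a pretrained image model).

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def novelty_scores(prior_features, new_features, k=5):
    """Novelty of each new image = mean distance to its k nearest neighbors
    among the features of images created before it."""
    nn = NearestNeighbors(n_neighbors=k).fit(prior_features)
    dist, _ = nn.kneighbors(new_features)
    return dist.mean(axis=1)

# Hypothetical feature vectors, one row per previously posted design image.
rng = np.random.default_rng(0)
earlier_images = rng.normal(size=(1000, 64))
derivative_image = earlier_images[10] + 0.01 * rng.normal(size=64)   # close to a past design
unusual_image = rng.normal(loc=3.0, size=64)                         # far from past designs

scores = novelty_scores(earlier_images, np.vstack([derivative_image, unusual_image]))
print("derivative:", round(scores[0], 2), " unusual:", round(scores[1], 2))
```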
The temporal dynamics of a complex system such as a social network or a communication network can be studied by understanding the patterns of link appearance and disappearance over time. A critical task toward this understanding is to predict the link state of the network at a future time given a collection of link states at earlier time points. In existing literature, this task is known as link prediction in dynamic networks. Solving this task is more difficult than its counterpart in static networks because an effective feature representation of node-pair instances in the case of dynamic networks is hard to obtain. To overcome this problem, we propose a novel method for metric embedding of node-pair instances of a dynamic network. The proposed method models the metric embedding task as an optimal coding problem where the objective is to minimize the reconstruction error, and it solves this optimization task using a gradient descent method. We validate the effectiveness of the learned feature representation by utilizing it for link prediction in various real-life dynamic networks. Specifically, we show that our proposed link prediction model, which uses the extracted feature representation for the training instances, outperforms several existing methods that use well-known link prediction features. | DyLink2Vec: Effective Feature Representation for Link Prediction in
Dynamic Networks | 9,495 |
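The authors' optimal-coding objective is not reproduced here; as a stand-in, the sketch below compresses each node pair's history of link states with a linear code chosen to minimize reconstruction error (PCA) and feeds the resulting representation to a logistic-regression link predictor. All data are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_pairs, n_snapshots = 2000, 9

# Hypothetical node-pair histories: link state (0/1) of each pair over the
# first `n_snapshots` time steps; the task is to predict the state at t+1.
history = (rng.random((n_pairs, n_snapshots)) < 0.2).astype(float)
# Toy ground truth: pairs that were linked often recently tend to relink.
future = (history[:, -3:].mean(axis=1) + 0.1 * rng.random(n_pairs) > 0.35).astype(int)

# Low-dimensional code for each node-pair instance, chosen to minimize the
# reconstruction error of its history (a linear stand-in for the coding step).
codes = PCA(n_components=4, random_state=0).fit_transform(history)

clf = LogisticRegression().fit(codes[:1500], future[:1500])
print("held-out accuracy:", round(clf.score(codes[1500:], future[1500:]), 3))
```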
Many Web platforms rely on user collaboration to generate high-quality content: Wiki, Q&A communities, etc. Understanding and modeling the different collaborative behaviors is therefore critical. However, collaboration patterns are difficult to capture when the relationships between users are not directly observable, since they need to be inferred from the user actions. In this work, we propose a solution to this problem by adopting a systemic view of collaboration. Rather than modeling the users as independent actors in the system, we capture their coordinated actions with embedding methods which can, in turn, identify shared objectives and predict future user actions. To validate our approach, we perform a study on a dataset comprising more than 16M user actions, recorded on the online collaborative sandbox Reddit r/place. Participants had access to a drawing canvas where they could change the color of one pixel at every fixed time interval. Users were not grouped in teams nor were given any specific goals, yet they organized themselves into a cohesive social fabric and collaborated in the creation of a multitude of artworks. Our contribution in this paper is threefold: i) we perform an in-depth analysis of the Reddit r/place collaborative sandbox, extracting insights about its evolution over time; ii) we propose a predictive method that captures the latent structure of the emergent collaborative efforts; and iii) we show that our method provides an interpretable representation of the social structure. | Latent Structure in Collaboration: the Case of Reddit r/place | 9,496
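The embedding method used for r/place is not specified in the abstract; one simple way to embed users from raw pixel-placement logs is to build a user co-activity matrix (users acting on the same canvas tile) and factorize it, as in the hypothetical sketch below.

```python
import numpy as np
from collections import defaultdict
from scipy.sparse import coo_matrix
from sklearn.decomposition import TruncatedSVD

def embed_users(actions, n_users, dim=16):
    """actions: list of (timestamp, user_id, tile_id) pixel placements.
    Users who repeatedly act on the same tiles end up with similar vectors."""
    per_tile = defaultdict(list)
    for _, user, tile in actions:
        per_tile[tile].append(user)
    rows, cols, vals = [], [], []
    for users in per_tile.values():                 # co-activity counts per tile
        for u in users:
            for v in users:
                if u != v:
                    rows.append(u); cols.append(v); vals.append(1.0)
    co = coo_matrix((vals, (rows, cols)), shape=(n_users, n_users)).tocsr()
    return TruncatedSVD(n_components=dim, random_state=0).fit_transform(co)

# Tiny synthetic log: users 0-4 repeatedly work on tile "A", users 5-9 on tile "B".
log = [(t, u, "A") for t, u in enumerate(range(5))] * 3 + \
      [(t, u, "B") for t, u in enumerate(range(5, 10))] * 3
vecs = embed_users(log, n_users=10, dim=4)
print(vecs.shape)
```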
We study the problem of optimally investing in nodes of a social network in a competitive setting, where two camps aim to maximize adoption of their opinions by the population. In particular, we consider the possibility of campaigning in multiple phases, where the final opinion of a node in a phase acts as its initial biased opinion for the following phase. Using an extension of the popular DeGroot-Friedkin model, we formulate the utility functions of the camps, and show that they involve what can be interpreted as multiphase Katz centrality. Focusing on two phases, we analytically derive Nash equilibrium investment strategies, and the extent of loss that a camp would incur if it acted myopically. Our simulation study affirms that nodes attributing higher weight to initial biases necessitate higher investment in the first phase, so as to influence these biases for the terminal phase. We then study the setting in which a camp's influence on a node depends on its initial bias. For a single camp, we present a polynomial-time algorithm for determining an optimal way to split the budget between the two phases. For competing camps, we show the existence of Nash equilibria under reasonable assumptions, and that they can be computed in polynomial time. | Optimal Multiphase Investment Strategies for Influencing Opinions in a
Social Network | 9,497 |
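As a worked reference point, single-phase Katz centrality, whose multiphase analogue appears in the camps' utility functions, can be computed directly from its defining linear system; the proportional-to-centrality allocation at the end is only a naive illustration, not the equilibrium strategy derived in the paper.

```python
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)
n = A.shape[0]

alpha = 0.8 / np.max(np.abs(np.linalg.eigvals(A)))   # keep alpha below 1/spectral radius
beta = np.ones(n)

# Katz centrality: x = (I - alpha * A)^{-1} beta
katz = np.linalg.solve(np.eye(n) - alpha * A, beta)

# Naive single-phase heuristic: spend a fixed budget proportionally to centrality.
budget = 10.0
investment = budget * katz / katz.sum()
print(np.argsort(-investment)[:5])   # five most attractive nodes under this heuristic
```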
Harassing and hateful speech in online spaces has become a common problem for platform maintainers and their users. The toxicity created by such content can discourage user participation and engagement. Therefore, diminishing hateful and harmful content is both crucial for and a common goal of platform managers. Over the last year, Reddit, a major online platform, enacted a policy of banning sub-communities (subreddits) that it deems harassing, with the goal of diminishing such activities. We studied the effects of banning the largest hateful subreddit (r/fatpeoplehate or FPH) on the users and other subreddits that were associated with it. We found that, while a number of outcomes were possible, in this case the subreddit ban led to a sustained reduction in the interaction of its members (FPH users) with the Reddit platform. We also found that the many counter-actions taken by FPH users were short-lived and promptly neutralized by both Reddit administrators and the admins of individual subreddits. Our findings show that forum banning can be an effective means of diminishing objectionable content. Moreover, our detailed analysis of the post-banning behavior of FPH users highlights a number of the behavioral patterns that banning can create. | The Aftermath of Disbanding an Online Hateful Community | 9,498
According to the Centers for Disease Control and Prevention, in the United States hundreds of thousands of people initiate smoking each year, and millions live with smoking-related diseases. Many tobacco users discuss their habits and preferences on social media. This work conceptualizes a framework for targeted health interventions to inform tobacco users about the consequences of tobacco use. We designed a Twitter bot named Notobot (short for No-Tobacco Bot) that leverages machine learning to identify users posting pro-tobacco tweets and select individualized interventions to address their interest in tobacco use. We searched the Twitter feed for tobacco-related keywords and phrases, and trained a convolutional neural network using over 4,000 tweets manually labeled as either pro-tobacco or not pro-tobacco. This model achieves a 90% recall rate on the training set and 74% on test data. Users posting pro-tobacco tweets are matched with former smokers with similar interests who posted anti-tobacco tweets. Algorithmic matching, based on the power of peer influence, allows for the systematic delivery of personalized interventions based on real anti-tobacco tweets from former smokers. Experimental evaluation suggests that our system would perform well if deployed. This research offers opportunities for public health researchers to increase health awareness at scale. Future work entails deploying the fully operational Notobot system in a controlled experiment within a public health campaign. | Social Bots for Online Public Health Interventions | 9,499
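The Notobot architecture is not specified beyond "a convolutional neural network trained on manually labeled tweets"; the sketch below is a minimal 1-D convolutional text classifier of that kind in Keras, with a handful of made-up tweets standing in for the labeled data.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical labeled tweets: 1 = pro-tobacco, 0 = not pro-tobacco.
texts = ["nothing beats a smoke after coffee",
         "quit smoking two years ago, best choice",
         "vape clouds all day",
         "great run this morning"]
labels = [1.0, 0.0, 1.0, 0.0]

vectorize = layers.TextVectorization(max_tokens=20000, output_sequence_length=30)
vectorize.adapt(tf.constant(texts))

model = tf.keras.Sequential([
    vectorize,                                         # raw strings -> token ids
    layers.Embedding(input_dim=20000, output_dim=64),
    layers.Conv1D(filters=128, kernel_size=3, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),             # probability of pro-tobacco
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Recall()])
model.fit(tf.constant(texts), tf.constant(labels), epochs=3, verbose=0)
print(model.predict(tf.constant(["craving a cigarette right now"]), verbose=0))
```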