text | source | __index_level_0__
|---|---|---|
This work is organized as follows. In the first section we review the prior work and describe how we obtained our data. Next, we look at address reuse in the Bitcoin network and show that a large portion of users reuse their addresses, which enables us to cluster addresses and attribute them to single users. We then categorize the nodes based on their role in the network as a customer or seller. Finally, we study node and network performance. | Predicting User Performance and Bitcoin Price Using Block Chain
Transaction Network | 9,500 |
In this work, we extend a previous work where we proposed a suitable state model built from a Karhunen-Loeve Transformation to build a new decision process from which we can extract useful knowledge and information about the identified underlying sub-communities of an initial network. The aim of the method is to build a framework for multi-level knowledge retrieval. Besides the capacity of the methodology to reduce the high dimensionality of the data, the new detection scheme is able to extract, from the sub-communities, the dense sub-groups through the definition and formulation of new quantities related to the notions of energy and co-energy. The energy of a node is defined as the rate of its participation in the set of activities, while the co-energy defines the rate of interaction/link between two nodes. These two important features are used to make each link weighted and bounded, so that we are able to perform a thorough refinement of the sub-community discovery. This study allows us to perform a multi-level analysis by extracting information either per-link or per-intra-subcommunity. As an improvement of this work, we define the notion of pivot to designate the node(s) with the greatest influence in the network. We propose a thorough tool based on the transformation of a suitable probabilistic model into a possibilistic model to extract these pivots. | A Robust Process to Identify Pivots inside Sub-communities In Social
Networks | 9,501 |
Research articles produced through international collaboration are more highly cited than other work, but are they also more novel? Using measures developed by Uzzi et al. (2013), and replicated by Boyack and Klavans (2014), this article tests for novelty and conventionality in international research collaboration. Scholars have found that coauthored articles are more novel and have suggested that diverse groups have a greater chance of producing creative work. As such, we expected to find that international collaboration tends to produce more novel research. Using data from Web of Science and Scopus in 2005, we failed to show that international collaboration tends to produce more novel articles. In fact, international collaboration appears to produce less novel and more conventional knowledge combinations. Transaction costs and communication barriers to international collaboration may suppress novelty. Higher citations to international work may be explained by an audience effect, where more authors from more countries result in greater access to a larger citing community. The findings are consistent with explanations of growth in international collaboration that posit a social dynamic of preferential attachment based upon reputation. | International Research Collaboration: Novelty, Conventionality, and
Atypicality in Knowledge Recombination | 9,502 |
We present a network-based recommender system for live shows (concerts, theater, circus, etc.) that finds a set of people likely to be interested in a given new show. We combine collaborative and content-based filtering to take advantage of users' past activity and of the features of the new show. Indeed, as the show is new, we cannot rely on collaborative filtering alone. To solve this cold-start problem, we perform network alignment and insert the new show in a way consistent with collaborative filtering. We refine the obtained similarities using spreading in the network. We illustrate the performance of our system on a large-scale real-world dataset. | Propagation of content similarity through a collaborative network for
live show recommendation | 9,503 |
Imbalanced data widely exists in many high-impact applications. An example is in air traffic control, where we aim to identify the leading indicators for each type of accident cause from historical records. Among all three types of accident causes, historical records with 'personnel issues' far outnumber the other two types ('aircraft issues' and 'environmental issues') combined. Thus, the resulting dataset is highly imbalanced, and can be naturally modeled as a network. Up until now, most existing work on imbalanced data analysis has focused on the classification setting, and very little is devoted to learning node representations from imbalanced networks. To address this problem, in this paper, we propose the Vertex-Diminished Random Walk (VDRW) for imbalanced network analysis. The key idea is to encourage the random particle to walk within the same class by adjusting the transition probabilities at each step. It resembles the existing Vertex Reinforced Random Walk in terms of the dynamic nature of the transition probabilities, as well as some convergence properties. However, it is more suitable for analyzing imbalanced networks as it leads to more separable node representations in the embedding space. Then, based on VDRW, we propose a semi-supervised network representation learning framework named ImVerde for imbalanced networks, in which context sampling uses VDRW and the label information to create node-context pairs, and balanced-batch sampling adopts a simple under-sampling method to balance these pairs across classes. Experimental results demonstrate that ImVerde based on VDRW outperforms state-of-the-art algorithms for learning network representations from imbalanced data. | ImVerde: Vertex-Diminished Random Walk for Learning Network
Representation from Imbalanced Data | 9,504 |
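The mechanism described above, a walk whose transition probabilities are dynamically diminished rather than reinforced, can be illustrated with a short sketch. This is a minimal interpretation, assuming a per-visit decay factor `alpha` and a plain adjacency-list graph; the paper's exact update rule (and ImVerde's use of label information) may differ.

```python
import random
from collections import defaultdict

def vdrw_walk(start, adj, length=20, alpha=0.9):
    """Vertex-diminished random walk (illustrative sketch).

    Each time a vertex is visited, its future weight is multiplied by
    alpha < 1, steering the walk away from frequently seen vertices --
    the opposite of vertex *reinforcement*. alpha is an assumption.
    """
    visits = defaultdict(int)
    walk = [start]
    visits[start] += 1
    for _ in range(length):
        nbrs = adj[walk[-1]]
        weights = [alpha ** visits[n] for n in nbrs]
        total = sum(weights)
        r, acc = random.random() * total, 0.0
        nxt = nbrs[-1]
        for n, w in zip(nbrs, weights):
            acc += w
            if r <= acc:
                nxt = n
                break
        walk.append(nxt)
        visits[nxt] += 1
    return walk

# toy graph: two clusters joined by one bridge edge
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(vdrw_walk(0, adj))
```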
We present a study that examines how a social media activism campaign aimed at improving gender diversity within engineering gained and maintained momentum in its early period. We examined over 50,000 Tweets posted over the first ~75 days of the #ILookLikeAnEngineer campaign and found that diverse participation - of types of users - increased activity at crucial moments. We categorize these triggers into four types: 1) Event-Driven: Alignment of the campaign with offline events related to the issue (Diversity SFO, Disrupt, etc.); 2) Media-Driven: News coverage of the events in the media (TechCrunch, CNN, BBC, etc.); 3) Industry-Driven: Web participation in the campaign by large organizations (Microsoft, Tesla, GE, Cisco, etc.); and 4) Personality-Driven: Alignment of the events with popular and/or known personalities (e.g. Isis Anchalee; Michelle Sun; Ada Lovelace.) This study illustrates how one mechanism - triggering - supports connective action in a social media campaign. | How Diverse Users and Activities Trigger Connective Action via Social
Media: Lessons from the Twitter Hashtag Campaign #ILookLikeAnEngineer | 9,505 |
Adversaries leverage social network friend relationships to collect sensitive data from users and target them with abuse that includes fake news, cyberbullying, malware, and propaganda. Case in point, 71 out of 80 user study participants had at least 1 Facebook friend with whom they never interact, either in Facebook or in real life, or whom they believe is likely to abuse their posted photos or status updates, or post offensive, false or malicious content. We introduce AbuSniff, a system that identifies Facebook friends perceived as strangers or abusive, and protects the user by unfriending, unfollowing, or restricting the access to information for such friends. We develop a questionnaire to detect perceived strangers and friend abuse. We introduce mutual Facebook activity features and show that they can train supervised learning algorithms to predict questionnaire responses. We have evaluated AbuSniff through several user studies with a total of 263 participants from 25 countries. After answering the questionnaire, participants agreed to unfollow and restrict abusers in 91.6% and 90.9% of the cases respectively, and sandbox or unfriend non-abusive strangers in 92.45% of the cases. Without answering the questionnaire, participants agreed to take the AbuSniff suggested action against friends predicted to be strangers or abusive in 78.2% of the cases. AbuSniff increased the participants' self-reported willingness to reject invitations from strangers and abusers, their awareness of friend abuse implications, and their perceived protection from friend abuse. | AbuSniff: Automatic Detection and Defenses Against Abusive Facebook
Friends | 9,506 |
Social media for news consumption is becoming increasingly popular due to its easy access, fast dissemination, and low cost. However, social media also enable the wide propagation of "fake news", i.e., news with intentionally false information. Fake news on social media poses significant negative societal effects, and also presents unique challenges. To tackle the challenges, many existing works exploit various features, from a network perspective, to detect and mitigate fake news. In essence, the news dissemination ecosystem on social media involves three dimensions, i.e., a content dimension, a social dimension, and a temporal dimension. In this chapter, we review network properties for studying fake news, introduce popular network types, and show how these networks can be used to detect and mitigate fake news on social media. | Studying Fake News via Network Analysis: Detection and Mitigation | 9,507 |
Users of social networks often focus on specific areas of that network, leading to the well-known "filter bubble" effect. Connecting people to a new area of the network in a way that will cause them to become active in that area could help alleviate this effect and improve social welfare. Here we present a preliminary analysis of network referrals, that is, attempts by users to connect peers to other areas of the network. We classify these referrals by their efficiency, i.e., the likelihood that a referral will result in a user becoming active in the new area of the network. We show that by using features describing the past experience of the referring author and the content of their messages, we are able to predict whether a referral will be effective, reaching an AUC of 0.87 for those users most experienced in writing efficient referrals. Our results represent a first step towards algorithmically constructing efficient referrals with the goal of mitigating the "filter bubble" effect pervasive in online social networks. | Characterizing Efficient Referrals in Social Networks | 9,508 |
The right to protest is perceived as one of the primary civil rights. Citizens participate in mass demonstrations to express themselves and exercise their democratic rights. However, because of the large number of participants, protests may lead to violence and destruction, and hence can be costly. Thus, it is important to predict such demonstrations in advance to safeguard against such damages. Recent research has shown that about 75 percent of protests that are regarded as legal are planned in advance. Twitter, the prominent micro-blogging website, has been used as a tool by protestors for planning, organizing, and announcing many of the recent protests worldwide, such as those that led to the Arab Spring, the Britain riots, and those against Mr. Trump after the presidential election in the U.S. In this paper, we aim to predict protests by means of machine learning algorithms. In particular, we consider the case of protests against the then-president-elect Mr. Trump after the results of the presidential election were announced in November 2016. We first identify the hashtags calling for demonstration from Trending Topics on Twitter, and download the corresponding tweets. We then apply four machine learning algorithms to make predictions. Our findings indicate that Twitter can be used as a powerful tool for predicting future protests with an average prediction accuracy of over 75 percent (up to 100 percent). We further validate our model by predicting the protests held in U.S. airports after President Trump's executive order banning citizens of seven Muslim countries from entering the U.S. An important contribution of our study is the inclusion of event-specific features for prediction purposes, which helps to achieve high levels of accuracy. | Twitter Reveals: Using Twitter Analytics to Predict Public Protests | 9,509 |
Recently, many online social networks, such as MySpace, Orkut, and Friendster, have faced inactivity decay of their members, which contributed to the collapse of these networks. The reasons, mechanics, and prevention mechanisms of such inactivity decay are not fully understood. In this work, we analyze decayed and alive sub-websites from the StackExchange platform. The analysis mainly focuses on the inactivity cascades that occur among the members of these communities. We provide measures to understand the decay process and statistical analysis to extract the patterns that accompany the inactivity decay. Additionally, we predict cascade size and cascade virality using machine learning. The results of this work include a statistically significant difference in the decay patterns between the decayed and the alive sub-websites. These patterns are mainly: cascade size, cascade virality, cascade duration, and cascade similarity. Additionally, the contributed prediction framework showed satisfactory prediction results compared to a baseline predictor. Supported by empirical evidence, the main findings of this work are: (1) the decay process is not governed by only one network measure; it is better described using multiple measures; (2) the expert members of the StackExchange sub-websites were mainly responsible for the activity or inactivity of the StackExchange sub-websites; (3) the Statistics sub-website is going through decay dynamics that may lead to it becoming fully decayed; and (4) decayed sub-websites were originally less resilient to inactivity decay, unlike the alive sub-websites. | Postmortem Analysis of Decayed Online Social Communities: Cascade
Pattern Analysis and Prediction | 9,510 |
Online social networks (OSNs) are a popular and rapid information propagation medium on the web, where millions of new connections, either positive such as acquaintance or negative such as animosity, are established every day around the world. The negative links (or harmful connections) are mostly established by fake profiles, as these are created with ill aims. Detecting negative (or suspicious) links among online users can aid in the mitigation of fake profiles in OSNs. A modified clustering coefficient formula, named the Mutual Clustering Coefficient and represented by M_cc, is introduced to quantitatively measure the connectivity between the mutual friends of two connected users in a group. In this paper, we present a classification system based on the mutual clustering coefficient and the profile information of users to detect suspicious links within user communities. Profile information helps us find the similarity between users, and different similarity measures are employed to calculate the profile similarity of a connected user pair. Experimental results demonstrate that four basic and easily available features, work (w), education (e), home_town (ht) and current_city (cc), along with M_cc, play a vital role in designing a successful classification system for the detection of suspicious links. | Mutual Clustering Coefficient-based Suspicious-link Detection approach
for Online Social Networks | 9,511 |
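The abstract does not spell out the M_cc formula, so the sketch below encodes one natural reading, the fraction of realized edges among the mutual friends of a connected pair, purely as an illustrative assumption; the paper's exact definition may differ.

```python
import networkx as nx

def mutual_clustering_coefficient(G, u, v):
    """Illustrative M_cc for an edge (u, v): the fraction of possible
    edges that actually exist among the mutual friends of u and v.
    This captures the stated idea of measuring connectivity between
    mutual friends of two connected users; the paper's formula may differ.
    """
    mutual = set(G[u]) & set(G[v])
    k = len(mutual)
    if k < 2:
        return 0.0
    possible = k * (k - 1) / 2
    actual = sum(1 for a in mutual for b in mutual
                 if a < b and G.has_edge(a, b))
    return actual / possible

G = nx.karate_club_graph()
print(f"M_cc(0,1) = {mutual_clustering_coefficient(G, 0, 1):.3f}")
```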
A growing body of evidence has shown that incorporating behavioral economics principles into the design of financial incentive programs helps improve their cost-effectiveness, promote individuals' short-term engagement, and increase compliance in health behavior interventions. Yet, their effects on long-term engagement have not been fully examined. In study designs where repeated administration of incentives is required to ensure the regularity of behaviors, the effectiveness of subsequent incentives may decrease as a result of the law of diminishing marginal utility. In this paper, we introduce random-loss incentive -- a new financial incentive based on loss aversion and unpredictability principles -- to address the problem of individuals' growing insensitivity to repeated interventions over time. We evaluate the new incentive design by conducting a randomized controlled trial to measure the influences of random losses on participants' dietary self-tracking and self-reporting compliance using a mobile web application called Eat & Tell. The results show that random losses are significantly more effective than fixed losses in encouraging long-term engagement. | Eat & Tell: A Randomized Trial of Random-Loss Incentive to Increase
Dietary Self-Tracking Compliance | 9,512 |
The roles of different nodes within a network are often understood through centrality analysis, which aims to quantify the capacity of a node to influence, or be influenced by, other nodes via its connection topology. Many different centrality measures have been proposed, but the degree to which they offer unique information, and thus whether it is advantageous to use multiple centrality measures to define node roles, is unclear. Here we calculate correlations between 17 different centrality measures across 212 diverse real-world networks, examine how these correlations relate to variations in network density and global topology, and investigate whether nodes can be clustered into distinct classes according to their centrality profiles. We find that centrality measures are generally positively correlated with each other, that the strength of these correlations varies across networks, and that network modularity plays a key role in driving these cross-network variations. Data-driven clustering of nodes based on centrality profiles can distinguish different roles, including topological cores of highly central nodes and peripheries of less central nodes. Our findings illustrate how network topology shapes the pattern of correlations between centrality measures and demonstrate how a comparative approach to network centrality can inform the interpretation of nodal roles in complex networks. | Consistency and differences between centrality measures across distinct
classes of networks | 9,513 |
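A scaled-down version of this analysis is easy to reproduce: compute several centrality measures on a network and correlate the resulting node scores. The sketch below uses 4 measures on one small graph (the study uses 17 measures across 212 networks) with networkx and scipy; the choice of measures and graph here is illustrative.

```python
import networkx as nx
import numpy as np
from scipy.stats import spearmanr

G = nx.karate_club_graph()
measures = {
    "degree": nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
}
nodes = list(G.nodes())
# one row of centrality scores per measure, aligned on the same node order
profiles = np.array([[m[n] for n in nodes] for m in measures.values()])

names = list(measures)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        rho, _ = spearmanr(profiles[i], profiles[j])
        print(f"{names[i]:12s} vs {names[j]:12s}: rho = {rho:.2f}")
```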
While graph-based collaborative filtering recommender systems have been introduced several years ago, there are still several shortcomings to deal with, the temporal information being one of the most important. The new link stream paradigm aims at extending graphs to correctly model the graph dynamics without losing crucial information. We investigate the impact of such link stream features on recommender systems by designing features that capture the intrinsic structure and dynamics of the data. We show that such features encode a fine-grained and subtle description of the underlying recommender system. Focusing on a traditional recommender system context, rating prediction on the MovieLens20M dataset, we input these features along with some content-based ones into a gradient boosting machine (XGBoost) and show that it significantly outperforms a purely content-based solution. These encouraging results call for further exploration of this original modelling and its integration into complete state-of-the-art recommender system algorithms. Link streams and graphs, as natural visualizations of recommender systems, can offer more interpretability at a time when algorithm transparency is an increasingly important topic of discussion. We also hope to spark interesting discussions in the community about the links between link streams and tensor factorization methods: indeed, they are two sides of the same object. | Movie rating prediction using content-based and link stream features | 9,514 |
In this paper we propose a new concept to prioritize the importance of a link in a directed network graph based on an ideal flow distribution. An ideal flow is the infinite limit of the relative aggregated count of random-walk agents' trajectories on a network graph, distributed over space and time. The standard ideal flow, which is uniformly distributed over space and time, maximizes the entropy of network utilization. We show that the relative flow distribution formed by the simulated trajectories of random-walk agents converges to stationary values. This implies that the ideal flow matrix depends only on the network structure. The ideal flow matrix is invariant to scalar multiplication and, remarkably, it is always premagic. We demonstrate ideal flow on a real-world network by fitting it to the Sioux Falls transportation network. | Ideal Relative Flow Distribution on Directed Network | 9,515 |
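The convergence claim can be checked empirically in a few lines: walk an agent over a directed graph, aggregate traversal counts per link, and normalize. A minimal sketch, assuming a single agent and a fixed step budget (both illustrative choices):

```python
import random
from collections import defaultdict

def relative_flow(adj, steps=200_000, seed=42):
    """Estimate the relative link-flow distribution of a random walk:
    aggregate traversal counts per directed link, then normalize.
    As steps grow, the distribution approaches stationary values.
    """
    rng = random.Random(seed)
    counts = defaultdict(int)
    node = next(iter(adj))
    for _ in range(steps):
        nxt = rng.choice(adj[node])
        counts[(node, nxt)] += 1
        node = nxt
    total = sum(counts.values())
    return {link: c / total for link, c in counts.items()}

# small strongly connected directed network
adj = {0: [1, 2], 1: [2], 2: [0, 3], 3: [0]}
for link, f in sorted(relative_flow(adj).items()):
    print(link, round(f, 4))
```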
Network embedding, which aims to learn low-dimensional representations of nodes, has been used for various graph related tasks including visualization, link prediction and node classification. Most existing embedding methods rely solely on network structure. However, in practice we often have auxiliary information about the nodes and/or their interactions, e.g., content of scientific papers in co-authorship networks, or topics of communication in Twitter mention networks. Here we propose a novel embedding method that uses both network structure and edge attributes to learn better network representations. Our method jointly minimizes the reconstruction error for higher-order node neighborhood, social roles and edge attributes using a deep architecture that can adequately capture highly non-linear interactions. We demonstrate the efficacy of our model over existing state-of-the-art methods on a variety of real-world networks including collaboration networks, and social networks. We also observe that using edge attributes to inform network embedding yields better performance in downstream tasks such as link prediction and node classification. | Capturing Edge Attributes via Network Embedding | 9,516 |
Many musicians, from up-and-comers to established artists, rely heavily on performing live to promote and disseminate their music. To advertise live shows, artists often use concert discovery platforms that make it easier for their fans to track tour dates. In this paper, we ask whether digital traces of live performances generated on those platforms can be used to understand career trajectories of artists. First, we present a new dataset we constructed by cross-referencing data from such platforms. We then demonstrate how this dataset can be used to mine and predict important career milestones for the musicians, such as signing by a major music label, or performing at a certain venue. Finally, we perform a temporal analysis of the bipartite artist-venue graph, and demonstrate that high centrality on this graph is correlated with success. | Mining and Forecasting Career Trajectories of Music Artists | 9,517 |
In this paper, we study a problem of detecting the source of diffused information by querying individuals, given a sample snapshot of the information diffusion graph, where two queries are asked: {\em (i)} whether the respondent is the source or not, and {\em (ii)} if not, which neighbor spread the information to the respondent. We consider the case when respondents may not always be truthful and some cost is incurred for each query. Our goal is to quantify the necessary and sufficient budgets to achieve the detection probability $1-\delta$ for any given $0<\delta<1.$ To this end, we study two types of algorithms: adaptive and non-adaptive ones, corresponding to whether or not we adaptively select the next respondents based on the answers of the previous respondents. We first provide the information-theoretic lower bounds for the necessary budgets in both algorithm types. In terms of the sufficient budgets, we propose two practical estimation algorithms, one non-adaptive and one adaptive, and for each algorithm, we quantitatively analyze the budget which ensures $1-\delta$ detection accuracy. This theoretical analysis not only quantifies the budgets needed by practical estimation algorithms achieving a given target detection accuracy in finding the diffusion source, but also enables us to quantitatively characterize the amount of extra budget required in the non-adaptive type of estimation, referred to as the {\em adaptivity gap}. We validate our theoretical findings over synthetic and real-world social network topologies. | Necessary and Sufficient Budgets in Information Source Finding with
Querying: Adaptivity Gap | 9,518 |
Web 2.0 helps to expand the range and depth of conversation on many issues and facilitates the formation of online communities. Online communities draw various individuals together based on their common opinions on a core set of issues. Most existing community detection methods merely focus on discovering communities without providing any insight regarding the collective opinions of community members and the motives behind the formation of communities. Several efforts have been made to tackle this problem by presenting a set of keywords as a community profile. However, they neglect the positions of community members towards keywords, which play an important role in understanding communities in the highly polarized atmosphere of social media. To this end, we present a sentiment-driven community profiling and detection framework which aims to provide community profiles presenting the positive and negative collective opinions of community members separately. Our framework initially extracts key expressions in users' messages as representatives of issues and then identifies users' positive/negative attitudes towards these key expressions. Next, it uncovers a low-dimensional latent space in order to cluster users according to their opinions and social interactions (i.e., retweets). We demonstrate the effectiveness of our framework through quantitative and qualitative evaluations. | Sentiment-driven Community Profiling and Detection on Social Media | 9,519 |
Heterogeneous networks present not only a challenge of heterogeneity in the types of nodes and relations, but also in the attributes and content associated with the nodes. While recent works have looked at representation learning on homogeneous and heterogeneous networks, there is no work that has collectively addressed the following challenges: (a) the heterogeneous structural information of the network consisting of multiple types of nodes and relations; (b) the unstructured semantic content (e.g., text) associated with nodes; and (c) online updates due to incoming new nodes in a growing network. We address these challenges by developing a Content-Aware Representation Learning model (CARL). CARL performs joint optimization of heterogeneous SkipGram and deep semantic encoding for capturing both the heterogeneous structural closeness and the unstructured semantic relations among all nodes, as a function of node content, that exist in the network. Furthermore, an additional online update module is proposed for efficiently learning representations of incoming nodes. Extensive experiments demonstrate that CARL outperforms state-of-the-art baselines in various heterogeneous network mining tasks, such as link prediction, document retrieval, node recommendation and relevance search. We also demonstrate the effectiveness of CARL's online update module through a category visualization study. | CARL: Content-Aware Representation Learning for Heterogeneous Networks | 9,520 |
People increasingly use microblogging platforms such as Twitter during natural disasters and emergencies. Research studies have revealed the usefulness of the data available on Twitter for several disaster response tasks. However, making sense of social media data is a challenging task due to several reasons, such as the limitations of available tools for analyzing high-volume and high-velocity data streams. This work presents an extensive multidimensional analysis of textual and multimedia content from millions of tweets shared on Twitter during three disaster events. Specifically, we employ various Artificial Intelligence techniques from the Natural Language Processing and Computer Vision fields, which exploit different machine learning algorithms to process the data generated during the disaster events. Our study reveals the distributions of various types of useful information that can inform crisis managers and responders as well as facilitate the development of future automated systems for disaster management. | A Twitter Tale of Three Hurricanes: Harvey, Irma, and Maria | 9,521 |
How intrusive does a life-saving user-monitoring application really need to be? While most previous research has focused on analyzing the mental state of users from social media and smartphones, there is little effort towards protecting user privacy in these analyses. A challenge in analyzing user behaviors is that not only is the data multi-dimensional, with a myriad of user activities, but these activities also occur at varying temporal rates. The overarching question of our work is: Given a set of sensitive user features, what is the minimum amount of information required to group users with similar behavior? Furthermore, does this user behavior correlate with their mental state? Towards answering those questions, our contributions are twofold: we introduce the concept of privacy surfaces that combine sensitive user data at different levels of intrusiveness. As our second contribution, we introduce MIMiS, an unsupervised privacy-aware framework that clusters users in a given privacy surface configuration into homogeneous groups with respect to their temporal signature. In addition, we explore the trade-off between intrusiveness and prediction accuracy. MIMiS employs multi-set decomposition in order to deal with incompatible temporal granularities in user activities. We extensively evaluate MIMiS on real data. Across a variety of privacy surfaces, MIMiS identified groups that are highly homogeneous with respect to self-reported mental health scores. Finally, we conduct an in-depth exploration of the discovered clusters, identifying groups whose behavior is consistent with academic deadlines. | MIMiS: Minimally Intrusive Mining of Smartphone User Behaviors | 9,522 |
We analyse a huge and very precise trace of contact data collected by a network of sensors during 6 months on the entire population of a rehabilitation hospital. We investigate both the topological structure of the average daily link stream of contacts in the hospital and the temporal structure of the evolution of these contacts hour by hour. Our main results are to unveil striking properties of these two structures in the considered hospital, and to present a methodology that can be used for analysing any link stream where nodes are classified into groups. | The Link Stream of Contacts in a Whole Hospital | 9,523 |
The DeGroot model of naive social learning assumes that agents only communicate scalar opinions. In practice, agents communicate not only their opinions, but their confidence in such opinions. We propose a model that captures this aspect of communication by incorporating signal informativeness into the naive social learning scenario. Our proposed model captures aspects of both Bayesian and naive learning. Agents in our model combine their neighbors' beliefs using Bayes' rule, but the agents naively assume that their neighbors' beliefs are independent. Depending on the initial beliefs, agents in our model may not reach a consensus, but we show that the agents will reach a consensus under mild continuity and boundedness assumptions on initial beliefs. This eventual consensus can be explicitly computed in terms of each agent's centrality and signal informativeness, allowing joint effects to be precisely understood. We apply our theory to adoption of new technology. In contrast to Banerjee et al. [2018], we show that information about a new technology can be seeded initially in a tightly clustered group without information loss, but only if agents can expressively communicate their beliefs. | Naive Bayesian Learning in Social Networks | 9,524 |
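The update rule has a simple closed form when beliefs are Gaussian: multiplying neighbors' belief densities as if they were independent means precisions add and means combine precision-weighted. A minimal sketch under that Gaussian assumption (the paper's model is more general; here "signal informativeness" is simply the precision each agent reports):

```python
def naive_bayes_update(means, precisions, neighbors):
    """One round of naive Bayesian pooling with Gaussian beliefs:
    each agent combines its own belief with its neighbors' as if
    independent, so precisions add and means average by precision.
    """
    new_m, new_p = means.copy(), precisions.copy()
    for i, nbrs in neighbors.items():
        group = list(nbrs) + [i]
        p = sum(precisions[j] for j in group)
        m = sum(precisions[j] * means[j] for j in group) / p
        new_m[i], new_p[i] = m, p
    return new_m, new_p

# line network 0 - 1 - 2; agent 2 is the most confident
neighbors = {0: [1], 1: [0, 2], 2: [1]}
means = {0: 0.0, 1: 1.0, 2: 4.0}
precisions = {0: 1.0, 1: 0.5, 2: 2.0}
for _ in range(5):
    means, precisions = naive_bayes_update(means, precisions, neighbors)
print({i: round(m, 3) for i, m in means.items()})  # means pulled toward agent 2
```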
Many dynamic networks coming from real-world contexts are link streams, i.e. a finite collection of triplets $(u,v,t)$ where $u$ and $v$ are two nodes having a link between them at time $t$. A very large number of studies on these objects start by aggregating the data in disjoint time windows of length $\Delta$ in order to obtain a series of graphs on which are made all subsequent analyses. Here we are concerned with the impact of the chosen $\Delta$ on the obtained graph series. We address the fundamental question of knowing whether a series of graphs formed using a given $\Delta$ faithfully describes the original link stream. We answer the question by showing that such dynamic networks exhibit a threshold for $\Delta$, which we call the \emph{saturation scale}, beyond which the properties of propagation of the link stream are altered, while they are mostly preserved before. We design an automatic method to determine the saturation scale of any link stream, which we apply and validate on several real-world datasets. | Non-Altering Time Scales for Aggregation of Dynamic Networks into Series
of Graphs | 9,525 |
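The preprocessing step in question is simple to state in code: bucket the $(u,v,t)$ triplets into disjoint windows of length $\Delta$ and build one graph per window. A minimal sketch using networkx (the toy stream and $\Delta$ value are illustrative):

```python
import networkx as nx

def aggregate_link_stream(stream, delta):
    """Aggregate a link stream (list of (u, v, t) triplets) into a
    series of graphs over disjoint windows of length delta -- the
    standard preprocessing whose faithfulness the paper questions.
    """
    t0 = min(t for _, _, t in stream)
    graphs = {}
    for u, v, t in stream:
        w = int((t - t0) // delta)  # window index
        graphs.setdefault(w, nx.Graph()).add_edge(u, v)
    return [graphs[w] for w in sorted(graphs)]

stream = [(0, 1, 0.5), (1, 2, 1.2), (0, 2, 2.7), (1, 2, 3.1), (0, 1, 3.9)]
for i, g in enumerate(aggregate_link_stream(stream, delta=2.0)):
    print(f"window {i}: {sorted(g.edges())}")
```

Varying `delta` and comparing the resulting series is exactly the kind of experiment the saturation-scale analysis formalizes.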
Most of the existing multi-relational network embedding methods, e.g., TransE, are formulated to preserve pair-wise connectivity structures in the networks. With the observations that the significant triangular and parallelogram connectivity structures found in many real multi-relational networks are often ignored, and that a hard constraint commonly adopted by most of the network embedding methods is inaccurate by design, we propose a novel representation learning model for multi-relational networks which can alleviate both fundamental limitations. Scalable learning algorithms are derived using the stochastic gradient descent algorithm and negative sampling. Extensive experiments on real multi-relational network datasets of WordNet and Freebase demonstrate the efficacy of the proposed model when compared with the state-of-the-art embedding methods. | A Structural Representation Learning for Multi-relational Networks | 9,526 |
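For context, the pair-wise baseline the abstract refers to is TransE, which scores a triple (h, r, t) by how well the relation vector translates the head embedding to the tail embedding, trained with a margin loss over negative samples. A minimal numpy sketch of that baseline (random vectors stand in for learned embeddings; this is the method being critiqued, not the proposed model):

```python
import numpy as np

def transe_score(h, r, t):
    """TransE energy ||h + r - t||: low when relation r translates
    head embedding h onto tail embedding t."""
    return np.linalg.norm(h + r - t)

def margin_loss(pos, neg, margin=1.0):
    """Margin-based ranking loss used with negative sampling."""
    return max(0.0, margin + pos - neg)

rng = np.random.default_rng(0)
dim = 8
h, r, t = rng.normal(size=dim), rng.normal(size=dim), rng.normal(size=dim)
t_corrupt = rng.normal(size=dim)  # negative sample: corrupted tail
pos, neg = transe_score(h, r, t), transe_score(h, r, t_corrupt)
print(f"loss = {margin_loss(pos, neg):.3f}")
```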
In this paper, we address the problem of personalized next Point-of-Interest (POI) recommendation, which has become an important and very challenging task for location-based social networks (LBSNs), but is not well studied yet. With the conjecture that, under different contextual scenarios, humans exhibit distinct mobility patterns, we attempt here to jointly model next POI recommendation under the influence of the user's latent behavior pattern. We propose to adopt a third-rank tensor to model the successive check-in behaviors. By integrating categorical influence into mobility patterns and aggregating the user's spatial preference on a POI, the proposed model deals with the next new POI recommendation problem by nature. By incorporating a softmax function to fuse the personalized Markov chain with the latent pattern, we furnish a Bayesian Personalized Ranking (BPR) approach and derive the optimization criterion accordingly. Expectation Maximization (EM) is then used to estimate the model parameters. We further develop a personalized model by taking into account personalized mobility patterns under the contextual scenario to improve the recommendation performance. Extensive experiments on two large-scale LBSN datasets demonstrate the significant improvements of our model over several state-of-the-art methods. | Personalized Next Point-of-Interest Recommendation via Latent Behavior
Patterns Inference | 9,527 |
Although social networking has become a remarkable feature of the Web, full interoperability has not arrived. This work explores the five main paradigms of interoperability across social networking sites, corresponding to the layers in which we can find interoperability. Building on those, a novel analytical framework for SNS interoperability is introduced. Seven representative SNS interoperability technologies are compared using the proposed framework. The analysis exposes an overwhelming disparity and fragmentation in the solutions for tackling the same problems. Although there are a few solutions where consensus has been reached and which are widely adopted (e.g. in object IDs), there are multiple central issues that are still far from being widely standardized (e.g. in profile representation). In addition, several areas have been identified where there is clear room for improvement, such as privacy controls or data synchronization. | Understanding Federation: An Analytical Framework for the
Interoperability of Social Networking Sites | 9,528 |
From artificial intelligence to network security to hardware design, it is well-known that computing research drives many important technological and societal advancements. However, less is known about the long-term career paths of the people behind these innovations. What do their careers reveal about the evolution of computing research? Which institutions were and are the most important in this field, and for what reasons? Can insights into computing career trajectories help predict employer retention? In this paper we analyze several decades of post-PhD computing careers using a large new dataset rich with professional information, and propose a versatile career network model, R^3, that captures temporal career dynamics. With R^3 we track important organizations in computing research history, analyze career movement between industry, academia, and government, and build a powerful predictive model for individual career transitions. Our study, the first of its kind, is a starting point for understanding computing research careers, and may inform employer recruitment and retention mechanisms at a time when the demand for specialized computational expertise far exceeds supply. | Career Transitions and Trajectories: A Case Study in Computing | 9,529 |
Attributed network data is becoming increasingly common across fields, as we are often equipped with information about nodes in addition to their pairwise connectivity patterns. This extra information can manifest as a classification, or as a multidimensional vector of features. Recently developed methods that seek to extend community detection approaches to attributed networks have explored how to most effectively combine connectivity and attribute information to identify quality communities. These methods often rely on some assumption of the dependency relationships between attributes and connectivity. In this work, we seek to develop a statistical test to assess whether node attributes align with network connectivity. The objective is to quantitatively evaluate whether nodes with similar connectivity patterns also have similar attributes. To address this problem, we use a node sampling and label propagation approach. We apply our method to several synthetic examples that explore how network structure and attribute characteristics affect the empirical p-value computed by our method. Finally, we apply the test to a network generated from a single-cell mass cytometry (CyTOF) dataset and show that our test can identify markers associated with distinct subpopulations of single cells. | Testing Alignment of Node Attributes with Network Structure Through
Label Propagation | 9,530 |
Eigenvalues of a graph are of high interest in graph analytics for Big Data due to their relevance to many important properties of the graph including network resilience, community detection and the speed of viral propagation. Accurate computation of eigenvalues of extremely large graphs is usually not feasible due to the prohibitive computational and storage costs and also because full access to many social network graphs is often restricted to most researchers. In this paper, we present a series of new sampling algorithms which solve both of the above-mentioned problems and estimate the two largest eigenvalues of a large graph efficiently and with high accuracy. Unlike previous methods which try to extract a subgraph with the most influential nodes, our algorithms sample only a small portion of the large graph via a simple random walk, and arrive at estimates of the two largest eigenvalues by estimating the number of closed walks of a certain length. Our experimental results using real graphs show that our algorithms are substantially faster while also achieving significantly better accuracy on most graphs than the current state-of-the-art algorithms. | Closed Walk Sampler: An Efficient Method for Estimating Eigenvalues of
Large Graphs | 9,531 |
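The estimator rests on the identity that the number of closed walks of length $k$ equals $\mathrm{tr}(A^k) = \sum_i \lambda_i^k$, which is dominated by the largest eigenvalue as $k$ grows. The sketch below verifies this identity by direct matrix powers on a small graph; the paper's contribution is estimating those walk counts from a short random walk without full access to $A$, which this sketch does not reproduce.

```python
import networkx as nx
import numpy as np

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)
lam1 = np.linalg.eigvalsh(A)[-1]  # exact largest eigenvalue

# closed walks of length k = tr(A^k), so (tr A^k)^(1/k) -> lambda_1
for k in (4, 8, 12):
    closed_walks = np.trace(np.linalg.matrix_power(A, k))
    estimate = closed_walks ** (1.0 / k)
    print(f"k={k:2d}: (tr A^k)^(1/k) = {estimate:.3f}  vs  lambda_1 = {lam1:.3f}")
```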
The advent of the WWW changed the way we can produce and access information. Recent studies showed that users tend to select information that is consistent with their system of beliefs, forming polarized groups of like-minded people around shared narratives where dissenting information is ignored. In this environment, users cooperate to frame and reinforce their shared narrative, making any attempt at debunking ineffective. Such a configuration occurs even in the consumption of news online, and considering that 63% of users access news directly from social media, one hypothesis is that more polarization allows for further spreading of misinformation. Along this path, we focus on the polarization of users around news outlets on Facebook in different European countries (Italy, France, Spain and Germany). First, we compare the pages' posting behavior and the users' interacting patterns across countries and observe different posting, liking and commenting rates. Second, we explore the tendency of users to interact with different pages (i.e., selective exposure) and the emergence of polarized communities generated around specific pages. Then, we introduce a new metric -- i.e., polarization rank -- to measure the polarization of communities for each country. We find that Italy is the most polarized country, followed by France, Germany and lastly Spain. Finally, we present a variation of the Bounded Confidence Model to simulate the emergence of these communities by considering the users' engagement and trust in the news. Our findings suggest that trust in the information broadcaster plays a pivotal role against the polarization of users online. | Polarization Rank: A Study on European News Consumption on Facebook | 9,532 |
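The simulation builds on bounded confidence dynamics, where two agents move their opinions closer only if they already differ by less than a confidence bound. A minimal sketch of the classic Deffuant-style version follows; the paper uses a variation that additionally weighs engagement and trust, which this sketch omits.

```python
import random

def bounded_confidence(opinions, epsilon=0.2, mu=0.5, steps=20_000, seed=1):
    """Deffuant-style bounded confidence dynamics: a random pair of
    agents compromises (by step mu) only when their opinions differ
    by less than epsilon, so polarized clusters emerge.
    """
    rng = random.Random(seed)
    x = opinions[:]
    n = len(x)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j and abs(x[i] - x[j]) < epsilon:
            x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])
    return x

random.seed(0)
final = bounded_confidence([random.random() for _ in range(200)])
print(sorted(set(round(v, 1) for v in final)))  # opinions collapse into a few clusters
```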
This paper explores the use of language models to predict 20 human traits from users' Facebook status updates. The data was collected by the myPersonality project, and includes user statuses along with their personality, gender, political identification, religion, race, satisfaction with life, IQ, self-disclosure, fair-mindedness, and belief in astrology. A single interpretable model meets state of the art results for well-studied tasks such as predicting gender and personality; and sets the standard on other traits such as IQ, sensational interests, political identity, and satisfaction with life. Additionally, highly weighted words are published for each trait. These lists are valuable for creating hypotheses about human behavior, as well as for understanding what information a model is extracting. Using performance and extracted features we analyze models built on social media. The real world problems we explore include gendered classification bias and Cambridge Analytica's use of psychographic models. | Inferring Human Traits From Facebook Statuses | 9,533 |
Automobile insurance fraud is one of the main challenges for insurance companies. This form of fraud is performed either opportunistically or professionally, the latter occurring through group cooperation and leading to greater financial losses, while most methods presented thus far are unsuited for flagging these groups. This article puts forward a new approach for the identification, representation, and analysis of organized fraudulent groups in automobile insurance by focusing on structural aspects of networks, and on cycles in particular, that indicate the occurrence of potential fraud. Suspicious groups are detected by applying cycle detection algorithms (using both DFS and BFS trees); afterward, the probability of being fraudulent is investigated for suspicious components to reveal the fraudulent groups with the maximum likelihood, and their reviews are prioritized. Actual data from the Iran Insurance Company is used to evaluate the provided approach. As a result, the detection of cycles is not only more efficient and accurate, but also less time-consuming in comparison with previous methods for finding such groups. | The detection of professional fraud in automobile insurance using social
network analysis | 9,534 |
2018 started with massive protests in Iran, bringing back impressions of the so-called "Arab Spring" and its revolutionary impact on the Maghreb states, Syria and Egypt. Many reports and scientific examinations considered online social networks (OSNs) such as Twitter or Facebook to play a critical role in the opinion making of the people behind those protests. Beside that, there is also evidence of directed manipulation of opinion with the help of social bots and fake accounts. So it is natural to ask whether there is an attempt to manipulate the opinion-making process related to the Iranian protests in OSNs by employing social bots, and how such manipulations affect the discourse as a whole. Based on a sample of ca. 900,000 Tweets relating to the topic "Iran", we show that there are Twitter profiles that have to be considered social bot accounts. Using text mining methods, we show that these social bots are responsible for negative sentiment in the debate. We thereby illustrate a detectable effect of social bots on political discussions on Twitter. | Effects of Social Bots in the Iran-Debate on Twitter | 9,535 |
Networks are everywhere, and their many types, including social networks, the Internet, food webs, etc., have been studied for the last few decades. However, among real-world networks it is hard to find examples that are easily comparable, i.e. that have the same density or even the same number of nodes and edges. We propose a flexible and extensible NetSim framework to understand how properties of different types of networks change with a varying number of edges and vertices. Our approach enables the simulation of three classical network models (random, small-world and scale-free) with easily adjustable model parameters and network size. To be able to compare different networks, for a single experimental setup we kept the number of edges and vertices fixed across the models. To understand how network characteristics change depending on the number of nodes and edges, we ran over 30,000 simulations and analysed characteristics that cannot be derived analytically. Two of the main findings from the analysis are that the average shortest path does not change with the density of the scale-free network but does change for small-world and random networks, and that mean betweenness centrality of the scale-free network differs markedly from that of random and small-world networks. | NetSim -- The framework for complex network generator | 9,536 |
Comparison among graphs is ubiquitous in graph analytics. However, it is a hard task in terms of the expressiveness of the employed similarity measure and the efficiency of its computation. Ideally, graph comparison should be invariant to the order of nodes and the sizes of compared graphs, adaptive to the scale of graph patterns, and scalable. Unfortunately, these properties have not been addressed together. Graph comparisons still rely on direct approaches, graph kernels, or representation-based methods, which are all inefficient and impractical for large graph collections. In this paper, we propose the Network Laplacian Spectral Descriptor (NetLSD): the first, to our knowledge, permutation- and size-invariant, scale-adaptive, and efficiently computable graph representation method that allows for straightforward comparisons of large graphs. NetLSD extracts a compact signature that inherits the formal properties of the Laplacian spectrum, specifically its heat or wave kernel; thus, it hears the shape of a graph. Our evaluation on a variety of real-world graphs demonstrates that it outperforms previous works in both expressiveness and efficiency. | NetLSD: Hearing the Shape of a Graph | 9,537 |
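The heat-trace signature at the core of NetLSD is directly computable for small graphs from the Laplacian spectrum: $h(t) = \sum_i e^{-t\lambda_i}$, evaluated on a grid of scales $t$ (small $t$ probes local structure, large $t$ global structure). A minimal sketch using a full eigendecomposition; the paper adds approximations for scalability, and its normalization details are omitted here.

```python
import networkx as nx
import numpy as np

def netlsd_heat_signature(G, times):
    """Heat-trace signature underlying NetLSD: h(t) = sum_i exp(-t*lambda_i)
    over the eigenvalues of the normalized Laplacian. It is invariant to
    node permutation by construction.
    """
    lam = np.linalg.eigvalsh(nx.normalized_laplacian_matrix(G).toarray())
    return np.array([np.exp(-t * lam).sum() for t in times])

times = np.logspace(-2, 2, 10)
sig_ring = netlsd_heat_signature(nx.cycle_graph(20), times)
sig_star = netlsd_heat_signature(nx.star_graph(19), times)
print("L2 distance between signatures:",
      round(float(np.linalg.norm(sig_ring - sig_star)), 3))
```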
We present a probabilistic model for learning from dynamic relational data, wherein the observed interactions among networked nodes are modeled via the Bernoulli Poisson link function, and the underlying network structure is characterized by nonnegative latent node-group memberships, which are assumed to be gamma distributed. The latent memberships evolve according to Markov processes. The optimal number of latent groups can be determined by the data itself. The computational complexity of our method scales with the number of non-zero links, which makes it scalable to large sparse dynamic relational data. We present batch and online Gibbs sampling algorithms to perform model inference. Finally, we demonstrate the model's performance on both synthetic and real-world datasets compared to state-of-the-art methods. | A Poisson Gamma Probabilistic Model for Latent Node-group Memberships in
Dynamic Networks | 9,538 |
Embedding large graphs in low dimensional spaces has recently attracted significant interest due to its wide applications such as graph visualization, link prediction and node classification. Existing methods focus on computing the embedding for static graphs. However, many graphs in practical applications are dynamic and evolve constantly over time. Naively applying existing embedding algorithms to each snapshot of dynamic graphs independently usually leads to unsatisfactory performance in terms of stability, flexibility and efficiency. In this work, we present an efficient algorithm DynGEM based on recent advances in deep autoencoders for graph embeddings, to address this problem. The major advantages of DynGEM include: (1) the embedding is stable over time, (2) it can handle growing dynamic graphs, and (3) it has better running time than using static embedding methods on each snapshot of a dynamic graph. We test DynGEM on a variety of tasks including graph visualization, graph reconstruction, link prediction and anomaly detection (on both synthetic and real datasets). Experimental results demonstrate the superior stability and scalability of our approach. | DynGEM: Deep Embedding Method for Dynamic Graphs | 9,539 |
The richness of definitions and features of the community-detection problem has led to an impressive body of literature. In fact, many community-detection methods and surveys have been introduced in recent years. The goal here is to present the state of the art of the most mature research in this area. We will therefore concentrate on non-overlapping community detection with the basic graph model. In this chapter we will give an overview of the most influential approaches to community detection that encompass most of the main methods and techniques. A special focus will also be given to community evaluation. | Non-overlapping community detection | 9,540 |
The dynamic monitoring of commuting flows is crucial for improving transit systems in fast-developing cities around the world. However, existing methodologies for inferring commuting origins and destinations have to rely either on large-scale survey data, which is inherently expensive to implement, or on Call Detail Records with ad-hoc heuristic assignment rules based on the frequency of appearance at given locations. In this paper, we propose a novel method to accurately infer the origins and destinations of commuting flows based on individuals' spatial-temporal patterns inferred from Call Detail Records. Our project significantly improves the accuracy of the heuristic assignment rules popularly adopted in the literature. Starting with historical data on geo-temporal travel patterns for a panel of individuals, we create, for each person-location pair, a vector of probability distributions capturing the likelihood that the person will appear in that location at a given time of day. Stacked in this way, the matrix of historical geo-temporal data enables us to apply eigendecomposition and use unsupervised machine learning techniques to extract commonalities across locations for different groups of travelers, which ultimately allows us to make inferences and create labels, such as home and work, for specific locations. Testing the methodology on real-world data with known location labels shows that our method identifies home and workplaces with significant accuracy, improving upon the most commonly used methods in the literature by 79% and 34%, respectively. Most importantly, our methodology does not bear any significant computational burden and is easily scalable and easily extended to other real-world data with historical tracking. | Profiling presence patterns and segmenting user locations from cell
phone data | 9,541 |
This work extends the personalized PageRank model invented by Brin and Page to a family of PageRank models with various damping schemes. The goal of the increased model variety is to capture or recognize a larger number of types of network activities, phenomena and propagation patterns. The response of the PageRank distribution to variation in the damping mechanism is then characterized analytically, and further estimated quantitatively on 6 large real-world link graphs. The study leads to new observations and empirical findings. It is found that, for each model, the difference in how the PageRank vector responds to parameter variation across the 6 graphs is smaller than the difference among the 3 particular models used in the study on each of the graphs. This suggests the utility of model variety for differentiating network activities and propagation patterns. The quantitative analysis of the damping mechanisms over multiple damping models and parameters is facilitated by a highly efficient algorithm, which calculates all PageRank vectors at once via a commonly shared, spectrally invariant subspace. The spectral space is found to be of low dimension for each of the real-world graphs. | Damping Effect on PageRank Distribution | 9,542 |
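The baseline member of this model family is the classic constant-damping PageRank, computed by power iteration; sweeping the damping factor d already shows how strongly the damping mechanism reshapes the distribution. A minimal sketch (the toy graph and tolerance are illustrative, and the paper's other damping schemes are not reproduced):

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-10):
    """Constant-damping PageRank via power iteration: with probability d
    follow a random out-link, with probability 1-d teleport uniformly.
    """
    n = len(adj)
    P = np.zeros((n, n))
    for i, nbrs in adj.items():
        for j in nbrs:
            P[j, i] = 1.0 / len(nbrs)  # column-stochastic transition matrix
    v = np.full(n, 1.0 / n)
    while True:
        v_next = d * P @ v + (1 - d) / n
        if np.abs(v_next - v).sum() < tol:
            return v_next
        v = v_next

adj = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
for d in (0.5, 0.85, 0.99):
    print(d, np.round(pagerank(adj, d), 3))
```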
Political trolls initiate online discord not only for the lulz (laughs) but also for ideological reasons, such as promoting their desired political candidates. Political troll groups recently gained the spotlight because they were considered central in helping Donald Trump win the 2016 US presidential election, which involved difficult mass mobilizations. Political trolls face unique challenges as they must build their own communities while simultaneously disrupting others. However, little is known about how political trolls mobilize sufficient participation to suddenly become problems for others. We performed a quantitative longitudinal analysis of more than 16 million comments from one of the most popular and disruptive political trolling communities, the subreddit /r/The_Donald (T_D). We use T_D as a lens to understand participation and collective action within these deviant spaces. Specifically, we first study the characteristics of the most active participants to uncover what might drive their sustained participation. Next, we investigate how these active individuals mobilize their community to action. Through our analysis, we uncover that the most active participants employed distinct discursive strategies to mobilize participation, and deployed technical tools like bots to create a shared identity and sustain engagement. We conclude by providing data-backed design implications for designers of civic media. | Mobilizing the Trump Train: Understanding Collective Action in a
Political Trolling Community | 9,543 |
This paper focuses on the problem of scoring and ranking influential users of Instagram, a visual content sharing online social network (OSN). Instagram is the second largest OSN in the world, with 700 million active Instagram accounts, 32% of all worldwide Internet users. Among these millions of users, photos shared by more influential users are viewed by more users than posts shared by less influential counterparts. This raises the question of how to identify those influential Instagram users. In our work, we discuss the lack of relevant tools and the insufficiency of existing metrics for influence measurement, focusing on a network-oblivious approach, and show that the graph-based approach used in other OSNs is a poor fit for Instagram. In our study, we consider user statistics, some of which are more intuitive than others, and several regression models to measure users' influence. | Measuring Influence on Instagram: a Network-oblivious Approach | 9,544 |
Estimating the revenue and business demand of a newly opened venue is paramount, as these early stages often involve critical decisions such as first rounds of staffing and resource allocation. Traditionally, this estimation has been performed through coarse-grained measures such as observing numbers in local venues or venues at similar places (e.g., coffee shops around another station in the same city). The advent of crowdsourced data from devices and services carried by individuals on a daily basis has opened up the possibility of performing better predictions of temporal visitation patterns for locations and venues. In this paper, using mobility data from Foursquare, a location-centric platform, we treat venue categories as proxies for urban activities and analyze how they become popular over time. The main contribution of this work is a prediction framework able to use characteristic temporal signatures of places together with k-nearest neighbor metrics capturing similarities among urban regions, to forecast weekly popularity dynamics of a new venue establishment in a city neighborhood. We further show how we are able to forecast the popularity of a new venue one month after its opening by using locality and temporal similarity as features. For the evaluation of our approach we focus on London. We show that temporally similar areas of the city can be successfully used as inputs for predictions of the visit patterns of new venues, with an improvement of 41% compared to a random selection of wards as a training set for the prediction task. We apply these concepts of temporally similar areas and locality to real-time predictions for new venues and show that these features can effectively be used to predict the future trends of a venue. Our findings have the potential to impact the design of location-based technologies and decisions made by new business owners. | Predicting the temporal activity patterns of new venues | 9,545
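A hedged sketch of the kind of k-nearest-neighbour forecast described above: predict a new venue's weekly signature from the most temporally similar areas. The data layout and the cosine-similarity choice are assumptions, not the paper's exact pipeline.

```python
# Hedged sketch: forecast a new venue's weekly check-in pattern by averaging
# the signatures of its k most temporally similar areas (synthetic data).
import numpy as np

def knn_forecast(area_signatures, target_signature, k=3):
    """area_signatures: dict area -> weekly vector (e.g. 7x24 flattened).
    target_signature: observed vector for the new venue's area."""
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    ranked = sorted(area_signatures.items(),
                    key=lambda kv: cosine(kv[1], target_signature),
                    reverse=True)
    neighbours = [vec for _, vec in ranked[:k]]
    return np.mean(neighbours, axis=0)  # predicted weekly popularity curve

rng = np.random.default_rng(0)
areas = {f"ward_{i}": rng.random(168) for i in range(10)}  # 7 days x 24 hours
prediction = knn_forecast(areas, rng.random(168), k=3)
print(prediction[:5])
```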
Commenting platforms, such as Disqus, have emerged as a major online communication medium with millions of users and posts. Their popularity has also attracted parasitic and malicious behaviors, such as trolling and spamming. There has been relatively little research on modeling and safeguarding these platforms. As our key contribution, we develop a systematic approach to detect malicious users on commenting platforms, focusing on having: (a) interpretable, and (b) fine-grained classification of malice. Our work has two key novelties: (a) we propose two classification methods, one of which follows a two-stage approach that first maps observable features to behaviors and then maps these behaviors to user roles, and (b) we use a comprehensive set of 73 features that span four dimensions of information. We use 7 million comments over a 9-month period, and we show that our classification methods can distinguish between benign and malicious roles (spammers, trollers, and fanatics) with a 0.904 AUC. Our work is a solid step towards ensuring that commenting platforms are a safe and pleasant medium for the exchange of ideas. | TrollSpot: Detecting misbehavior in commenting platforms | 9,546
The power of the press to shape the informational landscape of a population is unparalleled, even now in the era of democratic access to all information outlets. However, it is known that news outlets (particularly more traditional ones) tend to discriminate in whom they want to reach and whom to leave aside. In this work, we attempt to shed some light on the audience-targeting patterns of newspapers, using the Chilean media ecosystem. First, we use the gravity model to analyze geography as a factor in explaining audience reachability. This shows that some newspapers are indeed driven by geographical factors (mostly local news outlets) but others are not (national-distribution outlets). For those which are not, we use a regression model to study the influence of socioeconomic and political characteristics on news outlet adoption. We conclude that larger, national-distribution news outlets indeed target populations based on these factors, rather than on geography or immediacy. | Understanding News Outlets' Audience-Targeting Patterns | 9,547
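For context, a generic gravity-model specification of the kind referenced above; the masses, distance decay and exponents shown are illustrative, since the paper's exact functional form is not given here.

```latex
% Generic gravity model of audience flow between regions i and j:
%   F_ij : flow (e.g., readership), m_i, m_j : region "masses"
%   (e.g., population or audience size), d_ij : distance,
%   k, alpha, gamma, beta : fitted constants/exponents (illustrative).
F_{ij} = k \, \frac{m_i^{\alpha} \, m_j^{\gamma}}{d_{ij}^{\beta}}
```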
Twitter has arguably been the most popular among the data sources that form the basis of so-called altmetrics. Tweets to scholarly documents have been heralded both as early indicators of citations and as measures of societal impact. This chapter provides an overview of Twitter activity as the basis for scholarly metrics from a critical point of view and describes both the potential and the limitations of scholarly Twitter metrics. By reviewing the literature on Twitter in scholarly communication and analyzing 24 million tweets linking to scholarly documents, it aims to provide a basic understanding of what tweets can and cannot measure in the context of research evaluation. Going beyond the limited explanatory power of low correlations between tweets and citations, this chapter considers what types of scholarly documents are popular on Twitter, and how, when and by whom they are diffused, in order to understand what tweets to scholarly documents measure. Although this chapter is not able to solve the problems associated with the creation of meaningful metrics from social media, it highlights particular issues and aims to provide the basis for advanced scholarly Twitter metrics. | Scholarly Twitter metrics | 9,548
Predictive analysis of social media data has attracted considerable attention from the research community as well as the business world because of the essential and actionable information it can provide. Over the years, extensive experimentation and analysis for insights have been carried out using Twitter data in various domains such as healthcare, public health, politics, social sciences, and demographics. In this chapter, we discuss techniques, approaches and state-of-the-art applications of predictive analysis of Twitter data. Specifically, we present fine-grained analysis involving aspects such as sentiment, emotion, and the use of domain knowledge in the coarse-grained analysis of Twitter data for making decisions and taking actions, and relate a few success stories. | Predictive Analysis on Twitter: Techniques and Applications | 9,549 |
Rumor detection on microblogging platforms such as Sina Weibo is a crucial issue. Most existing rumor detection algorithms require a lot of propagation data for model training, so they do not achieve good detection accuracy at the early stage after a rumor message is posted. In this paper, we propose a gradient tree boosting (GTB) approach to rumor detection, based on which a rumor detection algorithm is developed. At the same time, the GTB-based approach makes it easy to conduct feature selection, and a feature selection algorithm is developed. Experiments on a widely used Sina Weibo dataset show that the proposed detection algorithm outperforms state-of-the-art detection algorithms; moreover, it has the highest detection accuracy at the early stage. This work appears to be the first to use a GTB-based approach for rumor detection, and the results suggest that it may be a promising one. | A Gradient Tree Boosting based Approach to Rumor Detecting on Sina Weibo | 9,550
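A hedged sketch of a GTB rumor classifier using scikit-learn's GradientBoostingClassifier as a generic stand-in; the features and labels are synthetic placeholders, not the paper's Weibo features.

```python
# Hedged sketch: gradient tree boosting for rumor classification, with
# feature importances supporting the feature-selection step described above.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.random((1000, 12))          # e.g. user, content, propagation features
y = rng.integers(0, 2, 1000)        # 1 = rumor, 0 = non-rumor (synthetic)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1, max_depth=3)
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
print("top features:", np.argsort(clf.feature_importances_)[::-1][:5])
```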
In this work, we investigate the structure and evolution of a peer-to-peer (P2P) payment application. A unique aspect of the network under consideration is that the edges among nodes represent financial transactions among individuals who shared an offline social interaction. Our dataset comes from Venmo, the most popular P2P mobile payment service. We present a series of static and dynamic measurements that summarize the key aspects of any social network, namely the degree distribution, density and connectivity. We find that the degree distributions do not follow a power-law distribution, confirming previous studies that real-world social networks are rarely scale-free. The giant component of Venmo is eventually composed of 99.9% of all nodes, and its clustering coefficient reaches 0.2. Last, we examine the "topological" version of the small-world hypothesis and find that Venmo users are separated by a mean of 5.9 steps and a median of 6 steps. | The Structure and Evolution of an Offline Peer-to-Peer Financial Network | 9,551 |
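A minimal networkx sketch showing how the summary statistics reported above (giant component share, clustering, mean shortest path) are typically computed; the synthetic graph stands in for the Venmo data, so the printed values will not match the paper's.

```python
# Minimal sketch of the kinds of static network measurements described above.
import networkx as nx

G = nx.barabasi_albert_graph(1000, 3, seed=1)  # stand-in transaction graph
giant = G.subgraph(max(nx.connected_components(G), key=len))
print("giant component share:", giant.number_of_nodes() / G.number_of_nodes())
print("avg clustering:", nx.average_clustering(G))
print("mean shortest path:", nx.average_shortest_path_length(giant))
```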
With 93% of the pro-marijuana population in the US favoring the legalization of medical marijuana, high expectations of greater returns for marijuana stocks, and the public actively sharing information about the medical, recreational and business aspects of marijuana, it is no surprise that marijuana culture is thriving on Twitter. After the legalization of marijuana for recreational and medical purposes in 29 states, there has been a dramatic increase in the volume of drug-related communication on Twitter. Specifically, Twitter accounts have been established for promotional and informational purposes, some prominent among them being American Ganja, Medical Marijuana Exchange, and Cannabis Now. Identification and characterization of different user types can allow us to conduct more fine-grained spatiotemporal analysis to identify dominant or emerging topics in the echo chambers of marijuana-related communities on Twitter. In this research, we mainly focus on classifying Twitter accounts created and run by ordinary users, retailers, and informed agencies. Classifying user accounts by type can enable better capturing and highlighting of aspects such as trending topics, business profiling of marijuana companies, and state-specific marijuana policymaking. Furthermore, type-based analysis can provide a more profound understanding and reliable assessment of the implications of marijuana-related communications. We developed a comprehensive approach to classifying users by their types on Twitter through contextualization of their marijuana-related conversations. We accomplished this using a compositional multiview embedding synthesized from People, Content, and Network views, achieving an 8% improvement over the empirical baseline. | "What's ur type?" Contextualized Classification of User Types in
Marijuana-related Communications using Compositional Multiview Embedding | 9,552 |
Usage of emoji on social media platforms has seen a rapid increase over the last few years. A majority of social media posts are laden with emoji, and users often use more than one emoji in a single post to express their emotions and to emphasize certain words in a message. Utilizing emoji co-occurrence can be helpful for understanding how emoji are used in social media posts and their meanings in context. In this paper, we investigate whether emoji co-occurrences can be used as a feature to learn emoji embeddings, which can be used in many downstream applications such as sentiment analysis and emotion identification in social media text. We utilize 147 million tweets containing emoji and build an emoji co-occurrence network. Then, we train a network embedding model to embed emoji into a low-dimensional vector space. We evaluate our embeddings using sentiment analysis and emoji similarity experiments, and experimental results show that our embeddings outperform the current state-of-the-art results for sentiment analysis tasks. | Learning Emoji Embeddings using Emoji Co-occurrence Network Graph | 9,553
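A hedged sketch of the pipeline described above: build an emoji co-occurrence graph from tweets and learn embeddings via truncated random walks fed to word2vec. The walk scheme and hyperparameters are assumptions, not the paper's exact model.

```python
# Hedged sketch: emoji co-occurrence graph + random-walk network embedding.
import itertools, random
import networkx as nx
from gensim.models import Word2Vec

tweets = [["😂", "❤️", "🔥"], ["😂", "❤️"], ["🔥", "🎉"], ["❤️", "🎉", "😂"]]

# Edge weight = number of tweets in which both emoji appear.
G = nx.Graph()
for t in tweets:
    for a, b in itertools.combinations(set(t), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

def random_walks(graph, num_walks=10, length=5, seed=0):
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for node in graph.nodes:
            walk = [node]
            for _ in range(length - 1):
                walk.append(rng.choice(list(graph.neighbors(walk[-1]))))
            walks.append(walk)
    return walks

# Treat walks as "sentences" of emoji and embed with skip-gram.
model = Word2Vec(random_walks(G), vector_size=16, window=3, min_count=1, sg=1)
print(model.wv.most_similar("😂"))
```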
The Internet has become a fundamental resource for activism as it facilitates political mobilization at a global scale. Petition platforms are a clear example of how thousands of people around the world can contribute to social change. Avaaz.org, with a presence in over 200 countries, is one of the most popular platforms of this type. However, little research has focused on this platform, probably due to a lack of available data. In this work we retrieved more than 350K petitions, standardized their field values, and added new information using language detection and named-entity recognition. To motivate future research with this unique repository of global protest, we present a first exploration of the dataset. In particular, we examine how social media campaigning is related to the success of petitions, as well as some geographic and linguistic findings about the worldwide community of Avaaz.org. We conclude with example research questions that could be addressed with our dataset. | Online Petitioning Through Data Exploration and What We Found There: A
Dataset of Petitions from Avaaz.org | 9,554 |
Several centrality measures have been formulated to quantify the notion of 'importance' of actors in social networks. Current measures scrutinize either the local or the global connectivity of nodes and have been found to be inadequate for social networks. Ignoring hierarchy and community structure, which are inherent in all human social networks, is the primary cause of this inadequacy. The positional hierarchy and embeddedness of an actor in the community are intuitively crucial determinants of his importance. The theory of social capital asserts that an actor's importance is derived from his position in the network hierarchy as well as from the potential to mobilize resources through intra-community (bonding) and inter-community (bridging) ties. Inspired by this idea, we propose a novel centrality measure SC (Social Centrality) for actors in social networks. Our measure accounts for i) an individual's propensity to socialize, and ii) his connections within and outside the community. These two factors are suitably aggregated to produce the social centrality score. Comparative analysis of the SC measure with classical and recent centrality measures using large public networks shows that it consistently produces a more realistic ranking of nodes. The inference is based on the available ground truth for each tested network. Extensive analysis of the rankings delivered by the SC measure and mapping to known facts in well-studied networks justifies its effectiveness in diverse social networks. Scalability evaluation of the SC measure justifies its efficacy for real-world large networks. | Social Centrality using Network Hierarchy and Community Structure | 9,555
Twitter has increasingly become a popular platform to share news and user opinion. A tweet is considered important if it receives a high number of affirmative reactions from other Twitter users via retweets. Retweet count is thus considered a surrogate measure for positive crowd-sourced reactions: a high number of retweets of a tweet aids in making its topic trending. This in turn bolsters the social reputation of the author of the tweet. Since the social reputation/impact of users/tweets influences many decisions (such as promoting brands, advertisement, etc.), several blackmarket syndicates have been actively engaged in producing fake retweets in a collusive manner. Users who want to boost the impact of their tweets approach the blackmarket services, and gain retweets for their own tweets by either paying money (Premium Services) or by retweeting other customers' tweets. Thus they become customers of blackmarket syndicates and engage in fake activities. Interestingly, these customers are neither bots, nor even fake users; they are usually normal human beings who exhibit a mix of organic and inorganic retweeting activities, and there is no synchronicity across their behaviors. In this paper, we make a first attempt to investigate such blackmarket customers engaged in producing fake retweets. We collected and annotated a novel dataset comprising customers of many blackmarket services and show how their social behavior differs from that of genuine users. We then use state-of-the-art supervised models to detect three types of customers (bots, promotional, normal) and genuine users. We achieve a Macro F1-score of 0.87 with SVM, significantly outperforming four other baselines. We further design a browser extension, SCoRe, which, given the link of a tweet, spots its fake retweeters in real-time. We also collected users' feedback on the performance of SCoRe and obtained 85% accuracy. | Retweet Us, We Will Retweet You: Spotting Collusive Retweeters Involved
in Blackmarket Services | 9,556 |
The LinkedIn Salary product was launched in late 2016 with the goal of providing insights on compensation distribution to job seekers, so that they can make more informed decisions when discovering and assessing career opportunities. The compensation insights are provided based on data collected from LinkedIn members and aggregated in a privacy-preserving manner. Given the simultaneous desire for computing robust, reliable insights and for having insights to satisfy as many job seekers as possible, a key challenge is to reliably infer the insights at the company level when there is limited or no data at all. We propose a two-step framework that utilizes a novel, semantic representation of companies (Company2vec) and a Bayesian statistical model to address this problem. Our approach makes use of the rich information present in the LinkedIn Economic Graph, and in particular, uses the intuition that two companies are likely to be similar if employees are very likely to transition from one company to the other and vice versa. We compute embeddings for companies by analyzing the LinkedIn members' company transition data using machine learning algorithms, then compute pairwise similarities between companies based on these embeddings, and finally incorporate company similarities in the form of peer company groups as part of the proposed Bayesian statistical model to predict insights at the company level. We perform extensive validation using several different evaluation techniques, and show that we can significantly increase the coverage of insights while, in fact, even improving the quality of the obtained insights. For example, we were able to compute salary insights for 35 times as many title-region-company combinations in the U.S. as compared to previous work, corresponding to 4.9 times as many monthly active users. Finally, we highlight the lessons learned from deployment of our system. | How LinkedIn Economic Graph Bonds Information and Product: Applications
in LinkedIn Salary | 9,557 |
During elections, political parties communicate political information to people through social media. Followers receive the information, but can users who are not followers (political detachment users) receive it? We focus on political detachment users, who do not follow any political party, and tackle the following research question: do political detachment users receive diverse political information during the election period? The results indicate that the answer is no: we determined that political detachment users receive information from only a few political parties. | Do Political Detachment Users Receive Various Political Information on
Social Media? | 9,558 |
Social Live Stream Services (SLSS) exploit a new level of social interaction. One of the main challenges in these services is how to detect and prevent deviant behaviors that violate community guidelines. In this work, we focus on adult content production and consumption in two widely used SLSS, namely Live.me and Loops Live, which have millions of users producing massive amounts of video content on a daily basis. We use a pre-trained deep learning model to identify broadcasters of adult content. Our results indicate that moderation systems in place are highly ineffective in suspending the accounts of such users. We create two large datasets by crawling the social graphs of these platforms, which we analyze to identify characterizing traits of adult content producers and consumers, and discover interesting patterns of relationships among them, evident in both networks. | Adult content in Social Live Streaming Services: Characterizing deviant
users and relationships | 9,559 |
Cryptocurrencies have recently experienced a new wave of price volatility and interest; activity within social media communities relating to cryptocurrencies has increased significantly. There is currently limited documented knowledge of factors which could indicate future price movements. This paper aims to decipher relationships between cryptocurrency price changes and topic discussion on social media to provide, among other things, an understanding of which topics are indicative of future price movements. To achieve this, a well-known dynamic topic modelling approach is applied to social media communication to retrieve information about the temporal occurrence of various topics. A Hawkes model is then applied to find interactions between topics and cryptocurrency prices. The results show that particular topics tend to precede certain types of price movements, for example the discussion of 'risk and investment vs trading' being indicative of price falls, the discussion of 'substantial price movements' being indicative of volatility, and the discussion of 'fundamental cryptocurrency value' by technical communities being indicative of price rises. The knowledge of topic relationships gained here could be built into a real-time system, providing trading or alerting signals. | Mutual-Excitation of Cryptocurrency Market Returns and Social Media
Topics | 9,560 |
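For context, a standard multivariate Hawkes conditional intensity with an exponential kernel, the usual form behind topic-price mutual-excitation analyses like the one above; the paper's exact kernel and parameterization may differ.

```latex
% Multivariate Hawkes conditional intensity with exponential kernel:
% events in stream j (e.g., a topic burst) raise the instantaneous rate
% lambda_i(t) of stream i (e.g., a price-return event); alpha_ij is the
% excitation strength, beta the decay rate (kernel choice is illustrative).
\lambda_i(t) \;=\; \mu_i \;+\; \sum_{j} \sum_{t_k^{j} < t} \alpha_{ij} \, e^{-\beta \left(t - t_k^{j}\right)}
```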
Online social networks (OSN) are one of the most popular forms of modern communication, and among the best known is Facebook. Information about the connections between users on an OSN is often very scarce: it is only known whether users are connected, while the intensity of the connection is unknown. The aim of the research described here was to determine and quantify friendship intensity between OSN users based on an analysis of their interactions. We built a mathematical model that uses the supervised machine learning algorithm Random Forest, experimentally determined importances of communication parameters, and coefficients for every interaction parameter based on the answers to a survey. Taking user opinion into consideration while designing a model for the calculation of friendship intensity is a novel approach compared with previous research in the literature. The accuracy of the proposed model was verified on the task of determining the better friend in a given pair. | Determination of Friendship Intensity between Online Social Network
Users Based on Their Interaction | 9,561 |
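A hedged sketch of the supervised step described above: a Random Forest mapping interaction features to a friendship-intensity score. The feature list and the survey-derived coefficients are simulated placeholders, not the study's actual values.

```python
# Hedged sketch: Random Forest regression of friendship intensity from
# interaction features, then comparing a pair to pick the "better friend".
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
# Columns: e.g. comments, likes, tags, shared photos, wall posts (assumed).
X = rng.random((500, 5))
survey_weights = np.array([0.3, 0.2, 0.2, 0.2, 0.1])  # placeholder coefficients
y = X @ survey_weights + rng.normal(0, 0.05, 500)     # intensity labels

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

pair = rng.random((2, 5))        # interaction features of two candidate friends
scores = model.predict(pair)
print("better friend:", int(np.argmax(scores)), scores)
```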
Online social networks (OSNs) are abused by cyber criminals for various malicious activities. One of the most effective approaches for detecting malicious activity in OSNs involves the use of social network honeypots - artificial profiles that are deliberately planted within OSNs in order to attract abusers. Honeypot profiles have been used in detecting spammers, potential cyber attackers, and advanced attackers. Therefore, there is a growing need for the ability to reliably generate realistic artificial honeypot profiles in OSNs. In this research we present 'ProfileGen' - a method for the automated generation of profiles for professional social networks, giving particular attention to producing realistic education and employment records. 'ProfileGen' creates honeypot profiles that are similar to actual data by extrapolating the characteristics and properties of real data items. Evaluation by 70 domain experts confirms the method's ability to generate realistic artificial profiles that are indistinguishable from real profiles, demonstrating that our method can be applied to generate realistic artificial profiles for a wide range of applications. | Generation of Automatic and Realistic Artificial Profiles | 9,562 |
For urban governments, introducing policies has long been adopted as a main approach to instigate regeneration processes and to promote social mixing and vitality within the city. However, due to the absence of large fine-grained datasets, the effects of these policies have historically been hard to evaluate. In this research, we illustrate how a combination of large-scale datasets, the Index of Deprivation and Foursquare data (an online geo-social network service), could be used to investigate the impact of the 2012 Olympic Games on the regeneration of East London neighbourhoods. We study and quantify both the physical and socio-economic aspects of this, and our empirical findings suggest that the target areas did indeed undergo regeneration in some respects after the Olympic project. In general, the growth rate of Foursquare venue density in the Olympic host boroughs has been higher than the city's average level since the preparation period of the Games and up to two years after the event. Furthermore, the deprivation levels in East London boroughs also saw improvements in various aspects after the Olympic Games. One negative outcome we notice is that housing affordability became even more of an issue in East London areas as the regeneration gradually unfolded. | Evaluating the impact of the 2012 Olympic Games policy on the
regeneration of East London using spatio-temporal big data | 9,563 |
The ever-increasing amount of multimedia content on modern social media platforms is valuable in many applications. However, the openness and convenience of social media also foster many online rumors. Without verification, these rumors can reach thousands of users immediately and cause serious damage. Many efforts have been made to defeat online rumors automatically by mining the rich content provided on the open network with machine learning techniques. Most rumor detection methods can be categorized into three paradigms: hand-crafted-feature-based classification approaches, propagation-based approaches, and neural network approaches. In this survey, we introduce a formal definition of rumor in comparison with other definitions used in the literature. We summarize the studies of automatic rumor detection so far and present details of the three paradigms of rumor detection. We also give an introduction to existing datasets for rumor detection which would benefit future research in this area. We conclude with suggestions for future rumor detection on microblogs. | Automatic Rumor Detection on Microblogs: A Survey | 9,564
Community detection on social media has attracted considerable attention for many years. However, existing methods do not reveal the relations between communities. Communities can form alliances or engage in antagonisms due to various factors, e.g., shared or conflicting goals and values. Uncovering such relations can provide better insights to understand communities and the structure of social media. According to social science findings, the attitudes that members from different communities express towards each other are largely shaped by their community membership. Hence, we hypothesize that inter-community attitudes expressed among users in social media have the potential to reflect their inter-community relations. Therefore, we first validate this hypothesis in the context of social media. Then, inspired by the hypothesis, we develop a framework to detect communities and their relations by jointly modeling users' attitudes and social interactions. We present experimental results using three real-world social media datasets to demonstrate the efficacy of our framework. | Detecting Antagonistic and Allied Communities on Social Media | 9,565 |
In recent years, we have witnessed the rapid adoption of social media platforms, such as Twitter, Facebook and YouTube, and their use as part of the everyday life of billions of people worldwide. Given people's habit of using these platforms to share thoughts, daily activities and experiences, it is not surprising that the amount of user generated content has reached unprecedented levels, with a substantial part of that content being related to real-world events, i.e. actions or occurrences taking place at a certain time and location. Given the key role of events in our life, the task of annotating and organizing social media content around them is of crucial importance for ensuring real-time and future access to multimedia content about an event of interest. In this chapter, we present several research efforts from recent years that tackle two main problems: a) event detection and b) event-based media retrieval and summarization. Given archived collections or live streams of social media items, the purpose of event detection methods is to identify previously unknown events in the form of sets of items that describe them. In general, the events could be of any type, but there are also approaches aiming at events of a specific type. Given a target event, the goal of event summarization is first to identify relevant content and then to represent it in a concise way, selecting the most appealing and representative content. | Event Detection and Retrieval on Social Media | 9,566
The moderation of content in many social media systems, such as Twitter and Facebook, motivated the emergence of a new social network system that promotes free speech, named Gab. Soon after its launch, Gab was removed from the Google Play Store for violating the company's hate speech policy and was rejected by Apple for similar reasons. In this paper we characterize Gab, aiming to understand who the users who joined it are and what kind of content they share in this system. Our findings show that Gab is a very politically oriented system that hosts banned users from other social networks, some of them due to possible cases of hate speech and association with extremism. We provide the first measurement of news dissemination inside a right-leaning echo chamber, investigating a social media platform where readers are rarely exposed to content that cuts across ideological lines, but rather are fed with content that reinforces their current political or social views. | Inside the Right-Leaning Echo Chambers: Characterizing Gab, an
Unmoderated Social System | 9,567 |
This paper explores the relations between social ties and cultural constructs in small groups. The analysis uses cross-sectional data comprising both social networks within three art groups and semantic networks based on verbal expressions of their members. We examine how the positions of actors in the intragroup social networks associate with the properties of the cultural constructs they create jointly with other group members, accounting for the different roles actors play in collective culture construction. We find that social popularity rather hinders the sharing of cultural concepts, while those individuals who socially bridge their groups come to share many concepts with others. Moreover, focusing and, especially, integration of cultural constructs, rather than their mere thickness, accompany intense interactions between the leaders and the followers. | Social Networks and Construction of Culture: A Socio-Semantic Analysis
of Art Groups | 9,568 |
Communication devices produce digital traces for their users, whether voluntarily or not. This type of collective data can give powerful indications that affect the design and development of urban systems. In this study, mobile phone data collected during the Armada event are investigated. Analyzing mobile phone traces gives conceptual views of individuals' densities and their mobility patterns in the urban city. Geo-visualization and statistical techniques have been used for understanding human mobility collectively and individually. The key parameters considered are inter-event times, travel distances (displacements) and the radius of gyration. They have been analyzed and simulated on a computing platform that integrates various applications for large database management, visualization, analysis and simulation. Accordingly, a general law for population movement patterns has been extracted. The study outcomes reveal both individuals' densities from a static perspective and individuals' mobility from a dynamic perspective, at multiple levels of abstraction (macroscopic, mesoscopic, microscopic). | Adaptive modeling of urban dynamics during ephemeral event via mobile
phone traces | 9,569 |
The research objectives are to explore the characteristics of human mobility patterns and subsequently to model them mathematically, depending on inter-event time and traveled-distance parameters, using CDRs (Call Detail Records). The observations are obtained from the Armada festival in France. Understanding, modelling and simulating human mobility among urban regions is an appealing approach due to its importance in rescue situations for various events, whether indoor (e.g., evacuation of buildings) or outdoor (e.g., public assemblies and community evacuations during emergency situations); moreover, it serves urban planning and smart cities. | Human Mobility Patterns Modelling using CDRs | 9,570
Society's reliance on social media as a primary source of news has spawned a renewed focus on the spread of misinformation. In this work, we identify the differences in how social media accounts identified as bots react to news sources of varying credibility, regardless of the veracity of the content those sources have shared. We analyze bot and human responses annotated using a fine-grained model that labels responses as being an answer, appreciation, agreement, disagreement, an elaboration, humor, or a negative reaction. We present key findings of our analysis into the prevalence of bots, the variety and speed of bot and human reactions, and the disparity in authorship of reaction tweets between these two sub-populations. We observe that bots are responsible for 9-15% of the reactions to sources of any given type but comprise only 7-10% of accounts responsible for reaction-tweets; trusted news sources have the highest proportion of humans who reacted; bots respond with significantly shorter delays than humans when posting answer-reactions in response to sources identified as propaganda. Finally, we report significantly different inequality levels in reaction rates for accounts identified as bots vs not. | How Humans versus Bots React to Deceptive and Trusted News Sources: A
Case Study of Active Users | 9,571 |
Communication devices (mobile networks, social media platforms) produce digital traces for their users, whether voluntarily or not. This type of collective data can give powerful indications of their effect on the design and development of urban systems. Modeling techniques can be used to understand the collective human behavior of an urban city. In this study, the most important feature of human mobility is considered: the radius of gyration. This parameter measures how far, and how frequently, individuals move within a specific observed region. | Human Trajectories Characteristics | 9,572
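For reference, the standard definition of the radius of gyration used in the mobility literature, consistent with the description above:

```latex
% Radius of gyration of a trajectory of N recorded positions r_1..r_N,
% with r_cm the trajectory's centre of mass; larger r_g = wider roaming.
r_g = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left\lVert \vec{r}_i - \vec{r}_{\mathrm{cm}} \right\rVert^{2}},
\qquad
\vec{r}_{\mathrm{cm}} = \frac{1}{N} \sum_{i=1}^{N} \vec{r}_i
```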
Interactional synchrony refers to how the speech or behavior of two or more people involved in a conversation become more finely synchronized with each other, such that they can appear to behave almost in direct response to one another. Studies have shown that interactional synchrony is a hallmark of relationships and is produced as a result of rapport. In this work, we use computer vision based methods to extract nonverbal cues, specifically from the face, and develop a model to measure interactional synchrony based on those cues. This paper illustrates a novel method of constructing a dynamic deep neural architecture, specifically made up of intermediary long short-term memory networks (LSTMs), useful for learning and predicting the extent of synchrony between two or more processes by emulating the nonlinear dependencies between them. On a synthetic dataset, where pairs of sequences were generated from a Gaussian process with known covariates, the architecture could successfully determine the covariance values of the generating process within an error of 0.5% when tested on 100 pairs of interacting signals. On a real-life dataset involving groups of three people, the model successfully estimated the extent of synchrony of each group on a scale of 1 to 5, with an overall mean prediction error of 2.96% under 5-fold validation, compared to 26.1% on the random permutations serving as the control baseline. | Computational Social Dynamics: Analyzing the Face-level Interactions in
a Group | 9,573 |
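A hedged PyTorch sketch of the general idea above: encode two interacting nonverbal-cue streams with LSTMs and regress a synchrony score from the joint hidden states. Layer sizes and the fusion rule are illustrative, not the published architecture.

```python
# Hedged sketch: paired LSTM encoders with a regression head for synchrony.
import torch
import torch.nn as nn

class SynchronyNet(nn.Module):
    def __init__(self, feat_dim=8, hidden=32):
        super().__init__()
        self.enc_a = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.enc_b = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, 1)   # regress a synchrony score

    def forward(self, seq_a, seq_b):
        _, (h_a, _) = self.enc_a(seq_a)        # final hidden states
        _, (h_b, _) = self.enc_b(seq_b)
        joint = torch.cat([h_a[-1], h_b[-1]], dim=-1)
        return self.head(joint).squeeze(-1)

model = SynchronyNet()
a, b = torch.randn(4, 100, 8), torch.randn(4, 100, 8)  # nonverbal cue sequences
print(model(a, b).shape)  # per-pair synchrony estimates: torch.Size([4])
```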
The proliferation of information disseminated by public/social media has made decision-making highly challenging due to the wide availability of noisy, uncertain, or unverified information. Although the issue of uncertainty in information has been studied for several decades, little work has investigated how noisy (or uncertain) or valuable (or credible) information can be formulated into people's opinions, modeling uncertainty both in the quantity and quality of evidence leading to a specific opinion. In this work, we model and analyze an opinion and information model using Subjective Logic, where the initial set of evidence is mixed with different types of evidence (i.e., pro vs. con or noisy vs. valuable) which is incorporated into the opinions of original propagators, who propagate information over a network. With the help of an extensive simulation study, we examine how different ratios of information types and agents' prior beliefs or topic competence affect the overall information diffusion. Based on our findings, agents' high uncertainty is not necessarily always bad for making a right decision, as long as they are competent enough to at least avoid bias towards false information (e.g., remaining neutral between two extremes). | Is Uncertainty Always Bad?: Effect of Topic Competence on Uncertain
Opinions | 9,574 |
On June 24, 2018, Turkey held a historical election, transforming its parliamentary system to a presidential one. One of the main questions for Turkish voters was whether to start this new political era with reelecting its long-time political leader Recep Tayyip Erdogan or not. In this paper, we analyzed 108M tweets posted in the two months leading to the election to understand the groups that supported or opposed Erdogan's reelection. We examined the most distinguishing hashtags and retweeted accounts for both groups. Our findings indicate strong polarization between both groups as they differ in terms of ideology, news sources they follow, and preferred TV entertainment. | Devam vs. Tamam: 2018 Turkish Elections | 9,575 |
According to tastes, a person may show preference for a given category of content to a greater or lesser extent. However, quantifying people's amount of interest in a certain topic is a challenging task, especially considering the massive digital information they are exposed to. For example, in the context of Twitter, a user may, in line with his/her preferences, tweet and retweet more about technology than sports and not share any music-related content. The problem we address in this paper is the identification of users' implicit topic preferences by analyzing the content categories they tend to post on Twitter. Our proposal is significant given that modeling their multi-topic profile may be useful for finding patterns or associations between preferences for categories, discovering trending topics, and clustering similar users to generate better group recommendations of content. In the present work, we propose a method based on the Mixed Gaussian Model to extract the multidimensional preference representation for 399 Ecuadorian tweeters concerning twenty-two different topics (or dimensions), identified by manually categorizing 68,186 tweets. Our experimental findings indicate that the proposed approach is effective at detecting the topic interests of users. | What kind of content are you prone to tweet? Multi-topic Preference
Model for Tweeters | 9,576 |
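A hedged sketch of fitting a Gaussian mixture over users' per-topic posting proportions to obtain multi-topic preference profiles; the component count and diagonal covariance are assumptions, not the paper's settings.

```python
# Hedged sketch: Gaussian mixture over per-topic posting proportions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
n_users, n_topics = 399, 22
# Each row: fraction of a user's tweets falling in each topic category.
X = rng.dirichlet(np.ones(n_topics), size=n_users)

gmm = GaussianMixture(n_components=5, covariance_type="diag", random_state=0)
labels = gmm.fit_predict(X)
print("cluster sizes:", np.bincount(labels))
print("responsibilities of user 0:", gmm.predict_proba(X[:1]).round(3))
```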
Facebook's News Feed personalization algorithm has a significant impact, on a daily basis, on the lifestyle, mood and opinions of millions of Internet users. Nonetheless, the behavior of such algorithms usually lacks transparency, motivating measurements, modeling and analysis in order to understand and improve their properties. In this paper, we propose a reproducible methodology encompassing measurements and an analytical model to capture the visibility of publishers over a News Feed. First, measurements are used to parameterize and to validate the expressive power of the proposed model. Then, we conduct a what-if analysis to assess the visibility bias incurred by users against a baseline derived from the model. Our results indicate that a significant bias exists and that it is more prominent at the top position of the News Feed. In addition, we found that the bias is non-negligible even for users that are deliberately set as neutral with respect to their political views. | Biases in the Facebook News Feed: a Case Study on the Italian Elections | 9,577
Citation network analysis has become one of methods to study how scientific knowledge flows from one domain to another. Health informatics is a multidisciplinary field that includes social science, software engineering, behavioral science, medical science and others. In this study, we perform an analysis of citation statistics from health informatics journals using data set extracted from CrossRef. For each health informatics journal, we extract the number of citations from/to studies related to computer science, medicine/clinical medicine and other fields, including the number of self-citations from the health informatics journal. With a similar number of articles used in our analysis, we show that the Journal of the American Medical Informatics Association (JAMIA) has more in-citations than the Journal of Medical Internet Research (JMIR); while JMIR has a higher number of out-citations and self-citations. We also show that JMIR cites more articles from health informatics journals and medicine related journals. In addition, the Journal of Medical Systems (JMS) cites more articles from computer science journals compared with other health informatics journals included in our analysis. | Characterizing health informatics journals by subject-level
dependencies: a citation network analysis | 9,578 |
Ensemble learning for anomaly detection in data structured as complex networks has barely been studied, due to the inconsistent performance of individual network characteristics and the lack of an inherent objective function. In this paper, we propose IFSAD, a new two-phase ensemble method for anomaly detection based on intuitionistic fuzzy sets, and apply it to the abnormal-behavior detection problem in temporal complex networks. First, it constructs the intuitionistic fuzzy set (IFS) of each single network characteristic, which quantifies the degree of membership, non-membership and hesitation of that characteristic with respect to the defined linguistic variables, so that even weakly informative or noisy characteristics become part of the detection. To build an objective intuitionistic fuzzy relationship, we propose a Gaussian-distribution-based membership function which gives a variable hesitation degree. Then, for the fusion of multiple network characteristics, the intuitionistic fuzzy weighted geometric operator is adopted to fuse multiple IFSs and to avoid inconsistency among the characteristics. The score and precision functions are then used to sort the fused IFSs. Finally, we carried out extensive experiments on several complex network datasets for anomaly detection, and the results demonstrate the superiority of our method over state-of-the-art approaches, validating its effectiveness. | Anomaly Detection of Complex Networks Based on Intuitionistic Fuzzy Set
Ensemble | 9,579 |
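A hedged sketch of the intuitionistic fuzzy set idea above for a single network characteristic: a Gaussian membership function, a variable hesitation degree, and a non-membership that absorbs the remainder so that the three degrees sum to one. The concrete functional forms here are illustrative, not the paper's exact ones.

```python
# Hedged sketch: intuitionistic fuzzy degrees for one network characteristic.
import numpy as np

def ifs(x, mu, sigma, hesitation_cap=0.2):
    membership = np.exp(-0.5 * ((x - mu) / sigma) ** 2)   # Gaussian membership
    hesitation = hesitation_cap * (1 - membership)        # variable hesitation
    non_membership = 1 - membership - hesitation          # degrees sum to 1
    return membership, non_membership, hesitation

x = np.array([0.1, 0.5, 0.9])            # observed characteristic values
m, n, h = ifs(x, mu=0.5, sigma=0.15)
print("membership:", m.round(3))
print("non-membership:", n.round(3))
print("hesitation:", h.round(3))
```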
Analyzing job hopping behavior is important for understanding the job preferences and career progression of working individuals. When analyzed at the workforce population level, job hop analysis helps to gain insights into talent flow among different jobs and organizations. Traditionally, surveys are conducted on job seekers and employers to study job hop behavior. Beyond surveys, job hop behavior can also be studied in a highly scalable and timely manner using a data driven approach, in response to the fast-changing job landscape. Fortunately, the advent of online professional networks (OPNs) has made it possible to perform a large-scale analysis of talent flow. In this paper, we present a new data analytics framework to analyze the talent flow patterns of close to 1 million working professionals from three different countries/regions using their publicly-accessible profiles in an established OPN. As OPN data are originally generated for professional networking applications, our proposed framework re-purposes the same data for a different analytics task. Prior to performing job hop analysis, we devise a job title normalization procedure to mitigate the amount of noise in the OPN data. We then devise several metrics to measure the amount of work experience required to take up a job, to determine the existence duration of a job (also known as the job age), and to capture the correlation between the above metrics and the propensity of hopping. We also study how job hop behavior is related to job promotion/demotion. Lastly, we perform connectivity analysis at the job and organization levels to derive insights on talent flow as well as job and organizational competitiveness. | Talent Flow Analytics in Online Professional Network | 9,580
We consider the problem of obtaining unbiased estimates of group properties in social networks when using a classifier for node labels. Inference for this problem is complicated by two factors: the network is not known and must be crawled, and even high-performance classifiers provide biased estimates of group proportions. We propose and evaluate AdjustedWalk for addressing this problem. This is a three-step procedure which entails: 1) walking the graph starting from an arbitrary node; 2) learning a classifier on the nodes in the walk; and 3) applying a post-hoc adjustment to classification labels. The walk step provides the information necessary to make inferences over the nodes and edges, while the adjustment step corrects for classifier bias in estimating group proportions. This process provides de-biased estimates at the cost of additional variance. We evaluate AdjustedWalk on four tasks: the proportion of nodes belonging to a minority group, the proportion of the minority group among high degree nodes, the proportion of within-group edges, and Coleman's homophily index. Simulated and empirical graphs show that this procedure performs well compared to optimal baselines in a variety of circumstances, while indicating that variance increases can be large for low-recall classifiers. | Estimating group properties in online social networks with a classifier | 9,581
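A hedged sketch of step 3, the post-hoc adjustment: the classic Rogan-Gladen correction for classifier bias in prevalence estimation, shown here as a plausible instance since the paper's exact adjustment is not reproduced above.

```python
# Hedged sketch: de-biasing a classifier's estimated group proportion given
# its sensitivity and specificity (Rogan-Gladen correction; the paper's
# adjustment may differ).
def adjust_proportion(p_observed, sensitivity, specificity):
    denom = sensitivity + specificity - 1
    if denom <= 0:
        raise ValueError("classifier must beat chance")
    p = (p_observed + specificity - 1) / denom
    return min(max(p, 0.0), 1.0)   # clamp to a valid proportion

# A classifier labelling 30% of crawled nodes as minority, with 85%
# sensitivity and 90% specificity, implies a de-biased estimate of:
print(adjust_proportion(0.30, 0.85, 0.90))  # -> ~0.267
```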
Studies of affect labeling, i.e. putting your feelings into words, indicate that it can attenuate positive and negative emotions. Here we track the evolution of individual emotions for tens of thousands of Twitter users by analyzing the emotional content of their tweets before and after they explicitly report having a strong emotion. Our results reveal how emotions and their expression evolve at the temporal resolution of one minute. While the expression of positive emotions is preceded by a short but steep increase in positive valence and followed by short decay to normal levels, negative emotions build up more slowly, followed by a sharp reversal to previous levels, matching earlier findings of the attenuating effects of affect labeling. We estimate that positive and negative emotions last approximately 1.25 and 1.5 hours from onset to evanescence. A separate analysis for male and female subjects is suggestive of possible gender-specific differences in emotional dynamics. | Does putting your emotions into words make you feel better? Measuring
the minute-scale dynamics of emotions from online data | 9,582 |
Users online tend to acquire information adhering to their system of beliefs and to ignore dissenting information. Such dynamics might affect page popularity. In this paper we introduce an algorithm, which we call PopRank, to assess both the impact of Facebook pages and users' engagement on the basis of their mutual interactions. The ideas behind PopRank are that i) high impact pages attract many users with low engagement, which means that they receive comments from users who rarely comment, and ii) high engagement users interact with high impact pages, that is, they mostly comment on pages with high popularity. The resulting ranking of pages can predict the number of comments a page will receive and the number of its posts. Page impact turns out to be slightly dependent on pages' informative content (e.g., science vs conspiracy) but independent of users' polarization. | PopRank: Ranking pages' impact and users' engagement on Facebook | 9,583
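A hedged sketch of a mutually reinforcing impact/engagement iteration in the spirit of ideas i) and ii) above; the update rules and normalization are guesses for illustration, not the published PopRank algorithm.

```python
# Hedged sketch: HITS-like mutual reinforcement between page impact and
# user engagement on a user-page comment matrix (synthetic data).
import numpy as np

C = np.array([[1, 0, 1],     # C[u, p] = 1 if user u commented on page p
              [1, 1, 0],
              [0, 1, 1],
              [1, 0, 0]], dtype=float)

impact = np.ones(C.shape[1])
engagement = np.ones(C.shape[0])
for _ in range(50):
    # Pages gain impact from commenters weighted inversely by engagement.
    impact = C.T @ (1.0 / engagement)
    impact /= impact.sum()
    # Users gain engagement from the impact of the pages they comment on.
    engagement = C @ impact
    engagement /= engagement.sum()

print("page impact:", impact.round(3))
print("user engagement:", engagement.round(3))
```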
The widespread use of big social data has pointed the research community in several significant directions. In particular, the notion of social trust has attracted a great deal of attention from information processors (computer scientists) and information consumers (formal organizations). This is evident in various applications such as recommendation systems, viral marketing and expertise retrieval. Hence, it is essential to have frameworks that can temporally measure users' credibility in all domains categorised under big social data. This paper presents CredSaT (Credibility incorporating Semantic analysis and Temporal factor): a fine-grained user-credibility analysis framework for big social data. A novel metric that includes both new and current features, as well as the temporal factor, is harnessed to establish the credibility ranking of users. Experiments on a real-world dataset demonstrate the effectiveness and applicability of our model in identifying highly trustworthy domain-based users. Further, CredSaT shows the capacity to capture spammers and other anomalous users. | CredSaT: Credibility Ranking of Users in Big Social Data incorporating
Semantic Analysis and Temporal Factor | 9,584 |
Network embedding methods aim at learning low-dimensional latent representation of nodes in a network. These representations can be used as features for a wide range of tasks on graphs such as classification, clustering, link prediction, and visualization. In this survey, we give an overview of network embeddings by summarizing and categorizing recent advancements in this research field. We first discuss the desirable properties of network embeddings and briefly introduce the history of network embedding algorithms. Then, we discuss network embedding methods under different scenarios, such as supervised versus unsupervised learning, learning embeddings for homogeneous networks versus for heterogeneous networks, etc. We further demonstrate the applications of network embeddings, and conclude the survey with future work in this area. | A Tutorial on Network Embeddings | 9,585 |
The Internet provides students with a unique opportunity to connect and maintain social ties with peers from other schools, irrespective of how far they are from each other. However, little is known about the real structure of such online relationships. In this paper, we investigate the structure of interschool friendship on a popular social networking site. We use data from 36,951 students from 590 schools of a large European city. We find that the probability of a friendship tie between students from neighboring schools is high and that it decreases with the distance between schools following a power law. We also find that students are more likely to be connected if the educational outcomes of their schools are similar. We show that this fact is not a consequence of residential segregation. While high- and low-performing schools are evenly distributed across the city, this is not the case for the digital space, where schools turn out to be segregated by educational outcomes. There is no significant correlation between the educational outcomes of a school and its geographical neighbors; however, there is a strong correlation between the educational outcomes of a school and its digital neighbors. These results challenge the common assumption that the Internet is a borderless space, and may have important implications for the understanding of educational inequality in the digital age. | Schools are segregated by educational outcomes in the digital space | 9,586
Anti-social behaviors in social media can happen both at user and community levels. While a great deal of attention is on the individual as an 'aggressor,' the banning of entire Reddit subcommunities (i.e., subreddits) demonstrates that this is a multi-layer concern. Existing research on inter-community conflict has largely focused on specific subcommunities or ideological opponents. However, antagonistic behaviors may be more pervasive and integrate into the broader network. In this work, we study the landscape of conflicts among subreddits by deriving higher-level (community) behaviors from the way individuals are sanctioned and rewarded. By constructing a conflict network, we characterize different patterns in subreddit-to-subreddit conflicts as well as communities of 'co-targeted' subreddits. By analyzing the dynamics of these interactions, we also observe that the conflict focus shifts over time. | Extracting Inter-community Conflicts in Reddit | 9,587 |
Hate content in social media is ever-increasing. Although Facebook, Twitter, and Google have attempted several steps to tackle hateful content, they have mostly been unsuccessful. Counterspeech is seen as an effective way of tackling online hate without any harm to freedom of speech. Thus, an alternative strategy for these platforms could be to promote counterspeech as a defense against hate content. However, in order to have a successful promotion of such counterspeech, one has to have a deep understanding of its dynamics in the online world. A lack of carefully curated data largely inhibits such understanding. In this paper, we create and release the first ever dataset for counterspeech using comments from YouTube. The data contains 13,924 manually annotated comments where the labels indicate whether a comment is counterspeech or not. This data allows us to perform a rigorous measurement study characterizing the linguistic structure of counterspeech for the first time. This analysis results in various interesting insights, such as: counterspeech comments receive many more likes than non-counterspeech comments; for certain communities, the majority of non-counterspeech comments tend to be hate speech; the different types of counterspeech are not all equally effective; and the language choice of users posting counterspeech is largely different from those posting non-counterspeech, as revealed by a detailed psycholinguistic analysis. Finally, we build a set of machine learning models that are able to automatically detect counterspeech in YouTube videos with an F1-score of 0.71. We also build multilabel models that can detect different types of counterspeech in a comment with an F1-score of 0.60. | Thou shalt not hate: Countering Online Hate Speech | 9,588
Sampling a network is an important prerequisite for unsupervised network embedding, and random walks have been widely used for sampling in previous studies. Since random walk based sampling tends to traverse adjacent neighbors, it may not be suitable for heterogeneous networks, because in heterogeneous networks two adjacent nodes often belong to different types. Therefore, this paper proposes a K-hop random walk based sampling approach which includes a node in the sample list only if it is separated by K hops from the source node. We exploit the samples generated using the K-hop random walker for network embedding with a skip-gram model (word2vec). Thereafter, the performance of the network embedding is evaluated on a co-authorship prediction task in the heterogeneous DBLP network. We compare the efficacy of the network embedding exploiting the proposed sampling approach with recently proposed best performing network embedding models, namely Metapath2vec and Node2vec. It is evident that the proposed sampling approach yields better quality embeddings and outperforms the baselines in the majority of cases. | Network Sampling Using K-hop Random Walks for Heterogeneous Network
Embedding | 9,589 |
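A hedged sketch of K-hop random-walk sampling: walk normally but record only every K-th visited node, so consecutive sampled nodes are K hops apart along the walk. The paper's exact sampling rule may differ in details; the resulting sequences would then feed a skip-gram model.

```python
# Hedged sketch: K-hop random walk that records every K-th visited node.
import random
import networkx as nx

def k_hop_walk(graph, source, walk_length=40, k=2, seed=0):
    rng = random.Random(seed)
    current, samples = source, [source]
    for step in range(1, walk_length):
        nbrs = list(graph.neighbors(current))
        if not nbrs:
            break
        current = rng.choice(nbrs)
        if step % k == 0:          # keep nodes separated by k hops
            samples.append(current)
    return samples

G = nx.karate_club_graph()
print(k_hop_walk(G, source=0, walk_length=20, k=2))
```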
In recent times, applications of network embedding for mining real-world information networks have been widely reported in the literature. The majority of information networks are heterogeneous in nature. Meta-path is one of the popularly used approaches for generating embeddings in heterogeneous networks. As a meta-path guides the models towards a specific sub-structure, it tends to lose some heterogeneous characteristics inherently present in the underlying network. In this paper, we systematically study the effects of different meta-paths using different state-of-the-art network embedding methods (Metapath2vec, Node2vec, and VERSE) on the DBLP bibliographic network, and evaluate the performance of the embeddings on two applications (co-authorship prediction and author research area classification). From various experimental observations, it is evident that embeddings based on different meta-paths perform differently across tasks. This shows that meta-paths are task-dependent and cannot be generalized across tasks. We further observe that the embedding obtained after considering all the node and relation types in the bibliographic network outperforms its meta-path based counterparts. | On Applying Meta-path for Network Embedding in Mining Heterogeneous DBLP
Network | 9,590 |
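For contrast with the K-hop sampler above, here is a minimal sketch of a meta-path-constrained walk of the kind Metapath2vec uses, where each step must land on a node whose type matches the next symbol of the meta-path (e.g. "APA" alternates Author and Paper). The `ntype` attribute, the toy graph, and the function name are illustrative assumptions.

```python
import random
import networkx as nx

def metapath_walk(graph, start, metapath, walk_length):
    """Walk in which step i must land on a node whose 'ntype' attribute
    equals metapath[i % (len(metapath) - 1)], so 'APA' alternates
    Author -> Paper -> Author."""
    assert graph.nodes[start]["ntype"] == metapath[0]
    walk = [start]
    for i in range(1, walk_length):
        wanted = metapath[i % (len(metapath) - 1)]
        candidates = [n for n in graph.neighbors(walk[-1])
                      if graph.nodes[n]["ntype"] == wanted]
        if not candidates:
            break   # no neighbor of the required type
        walk.append(random.choice(candidates))
    return walk

G = nx.Graph()
G.add_nodes_from(["a1", "a2"], ntype="A")   # authors
G.add_nodes_from(["p1", "p2"], ntype="P")   # papers
G.add_edges_from([("a1", "p1"), ("a2", "p1"), ("a2", "p2")])
print(metapath_walk(G, "a1", metapath="APA", walk_length=7))
```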
Given a social network with diffusion probabilities as edge weights and an integer k, which k nodes should be chosen for the initial injection of information to maximize influence in the network? This problem is known as Target Set Selection in a social network (TSS Problem) and, more popularly, the Social Influence Maximization Problem (SIM Problem). It has been an active area of research in the computational social network analysis domain for roughly the past decade and a half. Due to its practical importance in various domains, such as viral marketing, targeted advertising, and personalized recommendation, the problem has been studied in different variants, and different solution methodologies have been proposed over the years. Hence, there is a need for an organized and comprehensive review of this topic. This paper presents a survey of the progress in and around the TSS Problem, and concludes by discussing current research trends and future research directions. | A Survey on Influence Maximization in a Social Network | 9,591
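The canonical baseline covered by such surveys is the greedy algorithm of Kempe, Kleinberg, and Tardos, which repeatedly adds the node whose inclusion yields the largest Monte-Carlo-estimated spread under the Independent Cascade model. Below is a minimal, unoptimized sketch; the uniform activation probability `p`, the toy graph, and the function names are assumptions for illustration.

```python
import random

def simulate_ic(graph, seeds, p=0.1):
    """One Monte Carlo run of the Independent Cascade model: each newly
    activated node gets one chance to activate each inactive neighbor
    with probability p. Returns the final spread size."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and random.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(graph, k, runs=200):
    """Greedily add the node whose inclusion yields the largest
    Monte-Carlo-estimated expected spread."""
    seeds = set()
    for _ in range(k):
        best, best_spread = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            spread = sum(simulate_ic(graph, seeds | {v}) for _ in range(runs)) / runs
            if spread > best_spread:
                best, best_spread = v, spread
        seeds.add(best)
    return seeds

graph = {0: [1, 2], 1: [2, 3], 2: [3], 3: [0], 4: [0]}  # toy adjacency list
print(greedy_im(graph, k=2))
```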
Convolutional neural networks (CNNs) have demonstrated great power in representation learning on regular grid data such as images and video. Recently, increasing attention has been paid to generalizing CNNs to graph or network data, which is highly irregular. Some works focus on graph-level representation learning while others aim to learn node-level representations. These methods have been shown to boost the performance of many graph-level tasks, such as graph classification, and node-level tasks, such as node classification. Most of these methods have been designed for single-dimensional graphs, where a pair of nodes can only be connected by one type of relation. However, many real-world graphs have multiple types of relations and can naturally be modeled as multi-dimensional graphs, with each type of relation as a dimension. Multi-dimensional graphs bring about richer interactions between dimensions, which poses tremendous challenges to graph convolutional neural networks designed for single-dimensional graphs. In this paper, we study the problem of graph convolutional networks for multi-dimensional graphs and propose a multi-dimensional convolutional neural network model, mGCN, aiming to capture rich information when learning node-level representations for multi-dimensional graphs. Comprehensive experiments on real-world multi-dimensional graphs demonstrate the effectiveness of the proposed framework. | Multi-dimensional Graph Convolutional Networks | 9,592
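A minimal numpy sketch of the core idea of dimension-wise graph convolution: propagate features separately along each relation dimension with its own weights, then combine. mGCN itself combines dimensions with learned attention; the mean pooling, shapes, and names here are simplifying assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def multi_dim_gcn_layer(adjs, X, Ws, W_self):
    """Propagate node features separately along each relation dimension
    with its own weight matrix, then average the per-dimension outputs
    and add a self-transformation. (mGCN combines dimensions with
    learned attention; mean pooling here is a simplification.)"""
    outs = []
    for A, W in zip(adjs, Ws):
        deg = A.sum(axis=1, keepdims=True) + 1e-9   # avoid division by zero
        outs.append((A / deg) @ X @ W)              # row-normalized propagation
    return relu(np.mean(outs, axis=0) + X @ W_self)

rng = np.random.default_rng(0)
n, f, h = 5, 4, 3                                      # nodes, in-dim, out-dim
adjs = [rng.integers(0, 2, (n, n)) for _ in range(2)]  # two relation dimensions
Ws = [rng.normal(size=(f, h)) for _ in range(2)]
X = rng.normal(size=(n, f))
print(multi_dim_gcn_layer(adjs, X, Ws, rng.normal(size=(f, h))).shape)  # (5, 3)
```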
This study provides a methodological framework for automatically classifying tweets according to variables of the Theory of Planned Behavior (TPB). We present a sequential automated text-analysis process that combines supervised and unsupervised approaches to detect one of the TPB variables in each tweet. We applied Latent Dirichlet Allocation (LDA) and Nearest Neighbor classification, and then assessed the "typicality" of newly labeled tweets in order to predict the classification boundary. Furthermore, this study reports findings from a content analysis of suicide-related tweets which identifies traits of the information environment on Twitter. Consistent with the extant literature on suicide coverage, the findings demonstrate that tweets often contain information that prompts perceived behavioral control over committing suicide, while rarely providing information that deters suicide. We conclude by highlighting implications for methodological advances and theory-driven empirical studies. | Theory-Driven Automated Content Analysis of Suicidal Tweets : Using
Typicality-Based Classification for LDA Dataset | 9,593 |
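A minimal sketch of an LDA-then-nearest-neighbor pipeline like the one described above, using scikit-learn. The seed tweets, labels, and parameters are placeholders, and the paper's typicality-based boundary-estimation step is omitted.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.neighbors import KNeighborsClassifier

# Tiny illustrative seed set; labels stand in for TPB variables such as
# attitude, subjective norm, and perceived behavioral control.
seed_tweets = ["I feel strongly that this choice is mine alone",
               "everyone I know disapproves of this behavior",
               "nothing could stop me from going through with it",
               "my attitude toward this has completely changed"]
seed_labels = ["control", "norm", "control", "attitude"]

vec = CountVectorizer()
counts = vec.fit_transform(seed_tweets)

# Represent each tweet by its LDA topic distribution...
lda = LatentDirichletAllocation(n_components=3, random_state=0)
topics = lda.fit_transform(counts)

# ...then label unseen tweets via their nearest labeled neighbor.
knn = KNeighborsClassifier(n_neighbors=1).fit(topics, seed_labels)
new = lda.transform(vec.transform(["no one can talk me out of this"]))
print(knn.predict(new))
```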
Targeted advertising is meant to improve the efficiency of matching advertisers to their customers. However, it can also be abused by malicious advertisers to efficiently reach people susceptible to false stories, stoke grievances, and incite social conflict. Since targeted ads are not seen by non-targeted and non-vulnerable people, malicious ads are likely to go unreported and their effects undetected. This work examines a specific case of malicious advertising, exploring the extent to which political ads from the Russia-linked Internet Research Agency (IRA), run prior to the 2016 U.S. elections, exploited Facebook's targeted advertising infrastructure to efficiently target ads on divisive or polarizing topics (e.g., immigration, race-based policing) at vulnerable sub-populations. In particular, we do the following: (a) We conduct U.S. census-representative surveys to characterize how users with different political ideologies report, approve, and perceive truth in the content of the IRA ads. Our surveys show that many ads are "divisive": they elicit very different reactions from people belonging to different socially salient groups. (b) We characterize how these divisive ads are targeted at sub-populations that feel particularly aggrieved by the status quo. Our findings support existing calls for greater transparency in the content and targeting of political ads. (c) We focus in particular on how the Facebook ad API facilitates such targeting. We show how the enormous amount of personal data Facebook aggregates about users and makes available to advertisers enables such malicious targeting. | On Microtargeting Socially Divisive Ads: A Case Study of Russia-Linked
Ad Campaigns on Facebook | 9,594 |
This is an approach to detecting a subset of bots on Twitter that is, at best, under-researched. The approach is generic enough to be adaptable to most, if not all, social networks. The subset of bots it focuses on comprises those that can evade most, if not all, current detection methods, simply because they have little to no information associated with them that can be analyzed to make a determination. Although any account on any social media site inherently has some information associated with it, it is very easy to blend in with the majority of users who are simply "lurkers" - those who only consume content but do not contribute. How can you determine whether an account is a bot if it doesn't do anything? By the time such bots act, it is too late to detect them. The only solution is a real-time, or near-real-time, detection algorithm. | Botnet Campaign Detection on Twitter | 9,595
Today's social media platforms enable both authentic and fake news to spread very quickly. Some approaches have been proposed to automatically detect such "fake" news based on content, but it is difficult to agree on universal criteria of authenticity (which can be bypassed by adversaries once known). Besides, it is obviously impossible to have each news item checked by a human. In this paper, we propose a mechanism to limit the spread of fake news which is not based on content. It can be implemented as a plugin on a social media platform. The principle is as follows: a team of fact-checkers reviews a small number of news items (the most popular ones), which enables an estimation of each user's inclination to share fake news items. Then, using a Bayesian approach, we estimate the trustworthiness of future news items, and treat accordingly those that pass a certain "untrustworthiness" threshold. We evaluate the effectiveness and overhead of this technique on a large Twitter graph. We show that having a few thousand users exposed to a given news item enables a very precise estimation of its reliability. We thus identify more than 99% of fake news items with no false positives. The performance impact is very small: the induced overhead on the 90th-percentile latency is less than 3%, and less than 8% on the throughput of user operations. | Limiting the Spread of Fake News on Social Media Platforms by Evaluating
Users' Trustworthiness | 9,596 |
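A hypothetical sketch of the Bayesian idea described above: model each user's inclination to share fake news as a Beta-distributed parameter updated from fact-checked items, then score a new item from its sharers' posteriors. The class, the averaging rule, and the threshold are stand-ins for the paper's actual estimator.

```python
from dataclasses import dataclass

@dataclass
class User:
    fake_shares: int = 0   # fact-checked items shared that turned out fake
    real_shares: int = 0   # fact-checked items shared that were genuine

    def p_fake(self, alpha=1.0, beta=1.0):
        """Posterior mean of Beta(alpha + fake_shares, beta + real_shares)."""
        return (alpha + self.fake_shares) / (
            alpha + beta + self.fake_shares + self.real_shares)

def untrustworthiness(sharers, threshold=0.6):
    """Score a new item by its sharers' posterior inclinations and flag it
    if the score passes the threshold (a stand-in for the paper's test)."""
    score = sum(u.p_fake() for u in sharers) / len(sharers)
    return score, score > threshold

mostly_gullible = [User(fake_shares=8, real_shares=2) for _ in range(1000)]
print(untrustworthiness(mostly_gullible))   # (0.75, True)
```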
We study the problem of real-time data capture on social media. Due to the limitations imposed by those media, as well as the very large amount of information, it is impossible to collect all the data produced by social networks such as Twitter. Therefore, to gather enough relevant information related to a predefined need, it is necessary to focus on a subset of the information sources. In this work, we focus on user-centered data capture and consider each account of a social network as a source that can be listened to at each iteration of a data capture process, in order to collect the corresponding produced content. This process, whose aim is to maximize the quality of the information gathered, is constrained by the number of users that can be monitored simultaneously. The problem of selecting a subset of accounts to listen to over time is a sequential decision problem under constraints, which we formalize as a bandit problem with multiple selections. We therefore propose several bandit models to identify the most relevant users in real time. First, we study the case of the stochastic bandit, in which each user corresponds to a stationary distribution. Then, we introduce two contextual bandit models, one stationary and the other non-stationary, in which the utility of each user can be estimated by assuming some underlying structure in the reward space. The first approach introduces the notion of a profile, which corresponds to the average behavior of a user. The second approach takes into account the activity of a user in order to predict their future behavior. Finally, we are interested in models able to tackle complex temporal dependencies between users, using a latent space within which the information transits from one iteration to the next. Each of the proposed approaches is validated on both artificial and real datasets. | Bandit algorithms for real-time data capture on large social medias | 9,597
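A minimal sketch of the stochastic-bandit setting described above, assuming a UCB1-style index adapted to selecting k sources per round. The function names, reward model, and parameters are illustrative assumptions; the paper's contextual and non-stationary models are not shown.

```python
import math
import random

def ucb_multiple_play(n_users, k, horizon, reward_fn):
    """UCB1 adapted to multiple plays: each round, listen to the k users
    with the highest upper confidence bounds on expected content quality."""
    counts, means = [0] * n_users, [0.0] * n_users
    for t in range(1, horizon + 1):
        ucb = [means[i] + math.sqrt(2 * math.log(t) / counts[i])
               if counts[i] else float("inf") for i in range(n_users)]
        chosen = sorted(range(n_users), key=lambda i: ucb[i], reverse=True)[:k]
        for i in chosen:
            r = reward_fn(i)                      # observed content quality
            counts[i] += 1
            means[i] += (r - means[i]) / counts[i]
    return means

# Toy setting: user i produces relevant content with probability q[i].
q = [0.1, 0.5, 0.3, 0.8, 0.2]
est = ucb_multiple_play(len(q), k=2, horizon=2000,
                        reward_fn=lambda i: float(random.random() < q[i]))
print([round(m, 2) for m in est])
```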
Social networks influence health-related behaviors such as obesity and smoking. While researchers have studied social networks as drivers for the diffusion of influences and behaviors, it is less well understood how the structure or topology of the network, in itself, impacts an individual's health behaviors and wellness state. In this paper, we investigate whether the structure or topology of a social network offers additional insight into, and predictability of, an individual's health and wellness. We develop a model called the Network-Driven health predictor (NetCARE) that leverages features representative of social network structure. Using a large longitudinal dataset of students enrolled in the NetHealth study at the University of Notre Dame, we show that the NetCARE model improves overall prediction performance over the baseline models -- which use demographics and physical attributes -- by 38%, 65%, 55%, and 54% for the wellness states considered in this paper: stress, happiness, positive attitude, and self-assessed health. | Social Network Structure is Predictive of Health and Wellness | 9,598
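As a sketch of how topological features might feed such a predictor, the snippet below computes a few standard NetworkX node statistics and fits a classifier on placeholder labels. The feature set, labels, and model are illustrative assumptions, not the NetCARE feature set.

```python
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

def structural_features(G, node):
    """A few standard topology features a structure-aware health predictor
    could use alongside demographics (the feature set is illustrative)."""
    return [G.degree(node),
            nx.clustering(G, node),
            nx.average_neighbor_degree(G, nodes=[node])[node]]

G = nx.karate_club_graph()
X = np.array([structural_features(G, v) for v in G])
y = np.random.default_rng(0).integers(0, 2, len(X))   # placeholder wellness labels
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.score(X, y))
```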
On Twitter, there is a rising trend in abusive behavior, which often leads to incivility. This trend affects users mentally, and as a result they tend to leave Twitter and other such social networking sites, depleting the active user base. In this paper, we study factors associated with incivility. We observe that the act of incivility is highly correlated with the opinion difference, toward a named entity, between the account holder (i.e., the user writing the incivil tweet) and the target (i.e., the user at whom the incivil tweet is aimed). We introduce a character-level CNN model that incorporates entity-specific sentiment information for efficient incivility detection; it significantly outperforms multiple baseline methods, achieving an impressive accuracy of 93.3% (a 4.9% improvement over the best baseline). In a post-hoc analysis, we also study the behavioral aspects of the targets and account holders and try to understand the reasons behind incivility incidents. Interestingly, we observe strong signals of repetition in incivil behavior. In particular, we find that a significant fraction of account holders act as repeat offenders - attacking targets even more than 10 times. Similarly, there are targets who get targeted multiple times. In general, the targets are found to have higher reputation scores than the account holders. | Opinion Conflicts: An Effective Route to Detect Incivility in Twitter | 9,599
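A minimal PyTorch sketch of a character-level CNN that concatenates an entity-sentiment feature before classification, in the spirit of the model described above. Layer sizes, the single sentiment scalar, and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    """Character-level CNN that concatenates an entity-sentiment feature
    (e.g. the opinion difference toward a named entity) before the
    final classifier; all layer sizes are illustrative."""
    def __init__(self, vocab=128, emb=16, n_filters=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, n_filters, kernel_size=5)
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.fc = nn.Linear(n_filters + 1, 2)   # +1 for the sentiment scalar

    def forward(self, char_ids, sentiment_diff):
        x = self.embed(char_ids).transpose(1, 2)             # (B, emb, len)
        x = self.pool(torch.relu(self.conv(x))).squeeze(-1)  # (B, n_filters)
        x = torch.cat([x, sentiment_diff.unsqueeze(-1)], dim=-1)
        return self.fc(x)

model = CharCNN()
chars = torch.randint(0, 128, (4, 280))   # batch of 4 ASCII-encoded tweets
sent = torch.randn(4)                     # opinion-difference scores
print(model(chars, sent).shape)           # torch.Size([4, 2])
```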