| text | source | __index_level_0__ |
|---|---|---|
Professional sports constitute an important part of people's modern life. People spend substantial amounts of time and money supporting their favorite players and teams, and sometimes even riot after games. However, how team performance affects fan behavior remains understudied at a large scale. As almost every notable professional team has its own online fan community, these communities provide great opportunities for investigating this research question. In this work, we provide the first large-scale characterization of online fan communities of professional sports teams. Since user behavior in these online fan communities is inherently connected to game events and team performance, we construct a unique dataset that combines 1.5M posts and 43M comments in NBA-related communities on Reddit with statistics that document team performance in the NBA. We analyze the impact of team performance on fan behavior both at the game level and the season level. First, we study how team performance in a game relates to user activity during that game. We find that surprise plays an important role: the fans of the top teams are more active when their teams lose, and so are the fans of the bottom teams after an unexpected win. Second, we study fan behavior over consecutive seasons and show that strong team performance is associated with fans of low loyalty, likely due to "bandwagon fans." Fans of the bottom teams tend to discuss their team's future, such as young talent on the roster, which may help them stay optimistic during adversity. Our results not only contribute to understanding the interplay between online sports communities and offline context but also provide significant insights into sports management. | "This is why we play": Characterizing Online Fan Communities of the NBA Teams | 9,600 |
Social media has become a popular means for people to consume news. Meanwhile, it also enables the wide dissemination of fake news, i.e., news with intentionally false information, which has significant negative effects on society. Thus, fake news detection is attracting increasing attention. However, fake news detection is a non-trivial task that requires multi-source information such as news content, social context, and dynamic information. First, fake news is written to fool people, which makes it difficult to detect fake news based on news content alone. In addition to news content, we need to explore social context such as user engagements and social behaviors. For example, a credible user's comment that "this is fake news" is a strong signal for detecting fake news. Second, dynamic information, such as how fake news and true news propagate and how users' opinions toward news pieces evolve, is very important for extracting useful patterns for (early) fake news detection and intervention. Thus, comprehensive datasets that contain news content, social context, and dynamic information could facilitate research on fake news propagation, detection, and mitigation; to the best of our knowledge, however, existing datasets contain only one or two of these aspects. Therefore, in this paper, to facilitate fake news research, we provide a fake news data repository, FakeNewsNet, which contains two comprehensive datasets that include news content, social context, and dynamic information. We present a comprehensive description of the dataset collection, demonstrate an exploratory analysis of this data repository from different perspectives, and discuss the benefits of FakeNewsNet for potential applications in the study of fake news on social media. | FakeNewsNet: A Data Repository with News Content, Social Context and Spatialtemporal Information for Studying Fake News on Social Media | 9,601 |
Information diffusion in social networks facilitates rapid and large-scale propagation of content. However, spontaneous diffusion behavior could also lead to the cascading of sensitive information, which has been neglected in prior art. In this paper, we present the first look into the adaptive diffusion of sensitive information, which we aim to prevent from spreading widely without incurring much information loss. We undertake the investigation in networks with partially known topology, meaning that some users' ability to forward information is unknown. Formulating the problem as a bandit model, we propose BLAG (Bandit on Large Action set Graph), which adaptively diffuses sensitive information towards users with weak forwarding ability, learnt from tentative transmissions and the corresponding feedback. BLAG enjoys a low complexity of O(n), and is provably more efficient, with half the regret bound of the prior learning method. Experiments on synthetic and three real datasets further demonstrate the superiority of BLAG in terms of adaptive diffusion of sensitive information over several baselines, with at least 40 percent less information loss, at least 10 times the learning efficiency given limited learning rounds, and significantly postponed cascading of sensitive information. | BLAG: Bandit On Large Action Set Graph | 9,602 |
Recommender systems play a significant role in providing each user with appropriate data from among a huge amount of information. One of the important roles of a recommender system is to predict the preference of each user for specific items. Some of these systems concentrate on user-item networks in which each user rates some items. The main step in item recommendation is to predict the ratings of unrated items. Each recommender system utilizes different criteria, such as the similarity between users or social relations, in the rating prediction process. As the social connections of a user affect their behavior, they can be a valuable source to use in rating prediction. In this paper, we propose a new social recommender system which uses the Bhattacharyya coefficient in similarity computation, making it possible to evaluate similarity on sparse data and between users without co-rated items, as well as integrating social ties into the rating prediction process. | A Social Recommender System based on Bhattacharyya Coefficient | 9,603 |
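The abstract above does not spell out how the Bhattacharyya coefficient is applied to ratings; a minimal sketch, assuming each user's ratings are first normalized into a distribution over the rating levels (the function names and the 1-5 scale are illustrative assumptions, not the paper's API):

```python
import math
from collections import Counter

def rating_distribution(ratings, levels=(1, 2, 3, 4, 5)):
    """Normalize a user's ratings into a distribution over rating levels."""
    counts = Counter(ratings)
    total = len(ratings)
    return [counts[level] / total for level in levels]

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two discrete distributions (1 = identical)."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

# Two users with no co-rated items can still be compared,
# since only their rating-level distributions are needed:
u = rating_distribution([5, 4, 5, 4])
v = rating_distribution([4, 5, 4, 4])
similarity = bhattacharyya(u, v)
```

Because the coefficient compares rating-level distributions rather than co-rated items, it stays defined even on very sparse user-item matrices, which matches the motivation stated in the abstract.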
For many important network types (e.g., sensor networks in complex harsh environments and social networks) physical coordinate systems (e.g., Cartesian) and physical distances (e.g., Euclidean) are either difficult to discern or inapplicable. Accordingly, coordinate systems and characterizations based on hop-distance measurements, such as Topology Preserving Maps (TPMs) and Virtual-Coordinate (VC) systems, are attractive alternatives to Cartesian coordinates for many network algorithms. Herein, we present an approach to recover geometric and topological properties of a network with a small set of distance measurements. In particular, our approach is a combination of shortest-path (often called geodesic) recovery concepts and low-rank matrix completion, generalized to the case of hop-distances in graphs. Results for sensor networks embedded in 2-D and 3-D spaces, as well as for social networks, indicate that the method can accurately capture the network connectivity with a small set of measurements. TPM generation can now also be based on various context-appropriate measurements or VC systems, as long as they characterize different nodes by distances to small sets of random nodes (instead of a set of global anchors). The proposed method is a significant generalization that allows the topology to be extracted from a random set of graph shortest paths, making it applicable in contexts such as social networks where VC generation may not be possible. | Network Topology Mapping from Partial Virtual Coordinates and Graph Geodesics | 9,604 |
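The paper's exact completion procedure is not given in the abstract; a rough sketch of low-rank completion by iterative SVD truncation, which fills unobserved hop distances while keeping the measured ones fixed (the rank, iteration count, and function name are illustrative assumptions; real hop-distance matrices are only approximately low-rank):

```python
import numpy as np

def complete_hop_matrix(D_obs, mask, rank=2, iters=200):
    """Fill missing hop-distance entries with a rank-constrained estimate.

    D_obs: n x n matrix of hop distances (values arbitrary where unobserved);
    mask: boolean n x n matrix, True where an entry was actually measured.
    Unobserved entries are initialized to the mean of the observed ones.
    """
    X = np.where(mask, D_obs, D_obs[mask].mean())
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # project onto rank-k matrices
        X[mask] = D_obs[mask]                     # re-impose the measurements
    return X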
In textual conversation threads, as found on many popular social media platforms, each user text comment either originates a new thread of discussion or replies to a previous comment. An individual who makes an original comment, termed the "root source", is a topic initiator or even an information source, and identifying such individuals is of particular interest. The reply structure of comments is not always available (e.g., in the proliferation of a news event), and thus identifying root sources is a nontrivial task. In this paper, we develop a generative model based on marked multivariate Hawkes processes, and introduce a novel concept, "root source probability", to quantify the uncertainty in attributing possible root sources to each comment. A dynamic-programming-based algorithm is then derived to efficiently compute root source probabilities. Experiments on synthetic and real-world data show that our method identifies root sources that match ground truth and human intuition. | Who Started It? Identifying Root Sources in Textual Conversation Threads | 9,605 |
Trust among the users of a social network plays a pivotal role in item recommendation, particularly for cold start users. Due to the sparse nature of these networks, trust information between any two users may not always be available. To infer the missing trust values, one well-known approach is path-based trust estimation, which assumes that a user trusts all of its neighbors in the network. In this context, we propose two threshold-based heuristics to overcome the computational limitations of path-based trust inference. The heuristics use the propagation phenomenon of trust and decide a threshold value to select a subset of users for trust propagation. While the first heuristic creates the inferred network considering only this subset of users, the second one is able to preserve the density of the inferred network obtained by selecting all users. We implement the heuristics and analyze the inferred networks on two real-world datasets. We observe that the proposed threshold-based heuristics can recover up to 70% of the paths in much less time than their deterministic counterpart. We also show that the heuristic-based inferred trust is capable of preserving the recommendation accuracy. | Threshold-Based Heuristics for Trust Inference in a Social Network | 9,606 |
Nowadays, online video platforms mostly recommend related videos by analyzing user-driven data such as viewing patterns, rather than the content of the videos. However, content is more important than any other element when videos aim to deliver knowledge. Therefore, we have developed a web application which recommends related TED lecture videos to users based on the content of the videos as captured by their transcripts. TED Talk Recommender constructs a network of videos that are similar content-wise and provides a user interface for recommendation. | TED Talk Recommender Using Speech Transcripts | 9,607 |
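The abstract does not specify how transcript content is compared; one plausible sketch scores transcript similarity with TF-IDF weighted cosine similarity, from which a video-similarity network could be built by linking the most similar pairs (function names and tokenization are illustrative assumptions):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build sparse TF-IDF vectors (dicts) for a list of tokenized transcripts."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    idf = {term: math.log(n / df[term]) for term in df}
    return [{term: count * idf[term] for term, count in Counter(doc).items()}
            for doc in docs]

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(weight * v.get(term, 0.0) for term, weight in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0
```

Edges of the recommendation network would then connect each talk to the talks with the highest cosine scores against its transcript vector.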
The Facebook News Feed personalization algorithm has a significant daily impact on the lifestyle, mood, and opinion of millions of Internet users. Nonetheless, the behavior of this algorithm lacks transparency, motivating measurements, modeling, and analysis in order to understand and improve its properties. In this paper, we propose a reproducible methodology encompassing measurements, an analytical model, and a fairness-based News Feed design. The model leverages the versatility and analytical tractability of time-to-live (TTL) counters to capture the visibility and occupancy of publishers over a News Feed. Measurements are used to parameterize and to validate the expressive power of the proposed model. Then, we conduct a what-if analysis to assess the visibility and occupancy bias incurred by users against a baseline derived from the model. Our results indicate that a significant bias exists and that it is more prominent at the top position of the News Feed. In addition, we find that the bias is non-negligible even for users that are deliberately set as neutral with respect to their political views, motivating the proposal of a novel and more transparent fairness-based News Feed design. | Fairness in Online Social Network Timelines: Measurements, Models and Mechanism Design | 9,608 |
We consider a network where an infection cascade has taken place and a subset of infected nodes has been partially observed. Our goal is to reconstruct the underlying cascade that is likely to have generated these observations. We reduce this cascade-reconstruction problem to computing the marginal probability that a node is infected given the partial observations, which is a #P-hard problem. To circumvent this issue, we resort to estimating infection probabilities by generating a sample of probable cascades, which span the nodes that have already been observed to be infected and avoid the nodes that have been observed to be uninfected. The sampling problem corresponds to sampling directed Steiner trees with a given set of terminals, which is a problem of independent interest that has received limited attention in the literature. For the latter problem we propose two novel algorithms with provable guarantees on the sampling distribution of the returned Steiner trees. The resulting method improves over state-of-the-art approaches that often make explicit assumptions about the infection-propagation model, or require additional parameters. Our method provides a more robust approach to the cascade-reconstruction problem: it makes weaker assumptions about the infection model, requires fewer additional parameters, and can be used to estimate node infection probabilities. Empirically, we validate the proposed reconstruction algorithm on real-world graphs with both synthetic and real cascades. We show that our method outperforms all other baseline strategies in most cases. | Robust Cascade Reconstruction by Steiner Tree Sampling | 9,609 |
The rising popularity of social media has radically changed the way news content is propagated, including interactive attempts with new dimensions. To date, traditional news media such as newspapers, television, and radio have already adapted their activities to online news media by utilizing social media, blogs, websites, etc. This paper provides some insight into the social media presence of worldwide popular news media outlets. Although these large news media propagate content extensively via social media environments, very little is known about the news item producers, providers, and consumers in the news media community on social media. To better understand these interactions, this work analyzes news items on two large social media platforms, Twitter and Facebook. Towards that end, we collected all published posts on Twitter and Facebook from 48 news media outlets to perform descriptive and predictive analyses using a dataset of 152K tweets and 80K Facebook posts. We explored the set of news media that originate content by themselves on social media, those who distribute their news items to other news media, and those who consume news content from other news media and/or share replicas. We propose a predictive model to increase news media popularity among readers based on the number of posts, number of followers, and number of interactions performed within the news media community. The results show that news media should disperse their own content and publish first on social media in order to become popular and attract more attention to their news items from news readers. | Inspecting Interactions: Online News Media Synergies in Social Media | 9,610 |
Widespread usage of complex interconnected social networks such as Facebook, Twitter, and LinkedIn in the modern internet era has also, unfortunately, opened the door for privacy violation of the users of such networks by malicious entities. In this article we investigate, both theoretically and empirically, privacy violation measures of large networks under active attacks that were recently introduced in (Information Sciences, 328, 403-417, 2016). Our theoretical result indicates that a network manager responsible for preventing privacy violation must be very careful in designing the network if its topology does not contain a cycle. Our empirical results shed light on the privacy violation properties of eight real social networks as well as a large number of synthetic networks generated by both the classical Erdos-Renyi model and the scale-free Barabasi-Albert preferential-attachment model. | On analyzing and evaluating privacy measures for social networks under active attack | 9,611 |
Inspired by the simple, yet effective, method of tweeting gibberish to attract automated social agents (bots), we attempt to create localised honeypots in the South African political context. We produce a series of defined techniques and combine them to generate interactions from users on Twitter. The paper offers two key contributions. Conceptually, an argument is made that honeypots should not be confused with bot detection methods; they are rather methods to capture low-quality users. Secondly, we successfully generate a list of 288 local low-quality users active in the political context. | Deploying South African Social Honeypots on Twitter | 9,612 |
The widespread online misinformation could cause public panic and serious economic damage. The misinformation containment problem aims at limiting the spread of misinformation in online social networks by launching competing campaigns. Motivated by realistic scenarios, we present the first analysis of the misinformation containment problem for the case when an arbitrary number of cascades are allowed. This paper makes four contributions. First, we provide a formal model for multi-cascade diffusion and introduce an important concept called cascade priority. Second, we show that the misinformation containment problem cannot be approximated within a factor of $\Omega(2^{\log^{1-\epsilon}n^4})$ in polynomial time unless $NP \subseteq DTIME(n^{\mathrm{polylog}\,n})$. Third, we introduce several types of cascade priority that are frequently seen in real social networks. Finally, we design novel algorithms for solving the misinformation containment problem. The effectiveness of the proposed algorithms is supported by encouraging experimental results. | On Misinformation Containment in Online Social Networks | 9,613 |
A book's success/popularity depends on various parameters, extrinsic and intrinsic. In this paper, we study how book reading characteristics might influence the popularity of a book. Towards this objective, we perform a cross-platform study of Goodreads entities and attempt to establish the connection between various Goodreads entities and the popular books ("Amazon best sellers"). We analyze the collective reading behavior on the Goodreads platform and quantify various characteristic features of the Goodreads entities to identify differences between these Amazon best sellers (ABS) and other non-best-selling books. We then develop a prediction model using the characteristic features to predict whether a book will become a best seller one month (15 days) after its publication. On a balanced set, we are able to achieve a very high average accuracy of 88.72% (85.66%) for the prediction, where the other competitive class contains books which are randomly selected from the Goodreads dataset. Our method, primarily based on features derived from user posts and genre-related characteristic properties, achieves an improvement of 16.4% over baseline methods based on traditional popularity factors (ratings, reviews). We also evaluate our model with two more competitive sets of books: a) books that are both highly rated and have received a large number of reviews but are not best sellers (HRHR), and b) Goodreads Choice Awards Nominated books which are non-best sellers (GCAN). We achieve quite good results, with a very high average accuracy of 87.1% as well as a high ROC for ABS vs GCAN. For ABS vs HRHR, our model yields a high average accuracy of 86.22%. | Analyzing Social Book Reading Behavior on Goodreads and how it predicts Amazon Best Sellers | 9,614 |
Analysing and explaining relationships between entities in a graph is a fundamental problem with many practical applications. For example, a graph of biological pathways can be used for discovering a previously unknown relationship between two proteins. Domain experts, however, may be reluctant to trust such a discovery without a detailed explanation as to why exactly the two proteins are deemed related in the graph. This paper provides an overview of the types of solutions, and their associated methods and strategies, that have been proposed for finding entity relatedness explanations in graphs. The first type of solution relies on information inherent to the paths connecting the entities. This type of solution provides entity relatedness explanations in the form of a list of ranked paths. The rank of a path is measured in terms of importance, uniqueness, novelty and informativeness. The second type of solution relies on measures of node relevance. In this case, the relevance of nodes is measured w.r.t. the entities of interest, and relatedness explanations are provided in the form of a subgraph that maximises node relevance scores. This paper uses this classification of approaches to discuss and contrast some of the key concepts that guide different solutions to the problem of entity relatedness explanation in graphs. | Finding Explanations of Entity Relatedness in Graphs: A Survey | 9,615 |
This paper explores how to empirically analyze a network of website visitors from several countries around the world. While exploring this huge network of website visitors worldwide, the paper presents an empirical data analysis with a visualization of how the data have been analyzed and interpreted. By evaluating the methods used in analyzing and interpreting these data, the paper provides the knowledge required to empirically analyze a set of data obtained from website visitors with different browsers and IP addresses. Keywords: Website Data Analysis, Website Communities, Visualization | The Empirical Network Analysis of Website Visitors | 9,616 |
Extensive research on social media usage during emergencies has shown its value in providing life-saving information, provided a mechanism is in place to filter and prioritize messages. Existing ranking systems can provide a baseline for selecting which updates or alerts to push to emergency responders. However, prior research has not investigated in depth how many updates should be generated and how often, given a bound on the workload of a user due to the limited budget of attention in this stressful work environment. This paper presents a novel problem and a model to quantify the relationship between the performance metrics of ranking systems (e.g., recall, NDCG) and bounds on the user workload. We then synthesize an alert-based ranking system that enforces these bounds to avoid overwhelming end-users. We propose a Pareto optimal algorithm for ranking selection that adaptively determines the preference between top-k ranking and user workload over time. We demonstrate the applicability of this approach for Emergency Operation Centers (EOCs) by performing an evaluation based on real-world data from six crisis events. We analyze the trade-off between recall and workload recommendation across periodic and real-time settings. Our experiments demonstrate that the proposed ranking selection approach can improve the efficiency of monitoring social media requests while optimizing the need for user attention. | Ranking of Social Media Alerts with Workload Bounds in Emergency Operation Centers | 9,617 |
The state of the art in the network science of teams offers effective recommendation methods to answer questions like who is the best replacement or what is the best team expansion strategy, but it lacks intuitive ways to explain why the optimization algorithm gives a specific recommendation for a given team optimization scenario. To tackle this problem, we develop an interactive prototype system, EXTRA, as a first step towards addressing this sense-making challenge, through the lens of the underlying network in which teams are embedded, to explain team recommendation results. The main advantages are (1) Algorithm efficacy: we propose an effective and fast algorithm to explain the random walk graph kernel, the central technique for networked team recommendation; (2) Intuitive visual explanation: we present an intuitive visual analysis of the recommendation results, which can help users better understand the rationale of the underlying team recommendation algorithm. | EXTRA: Explaining Team Recommendation in Networks | 9,618 |
Community detection in social networks is widely studied because of its importance in uncovering how people connect and interact. However, little attention has been given to community structure in Facebook public pages. In this study, we investigate the community detection problem in Facebook newsgroup pages. In particular, to deal with the diversity of user activities, we apply multi-view clustering to integrate different views, for example, likes on posts and likes on comments. In this study, we explore the community structure in not only a given single page but across multiple pages. The results show that our method can effectively reduce isolates and improve the quality of community structure. | Multi-View Community Detection in Facebook Public Pages | 9,619 |
With a growing number of social apps, people have become increasingly willing to share their everyday photos and events on social media platforms, such as Facebook, Instagram, and WeChat. In social media data mining, post popularity prediction has received much attention from both data scientists and psychologists. Existing research focuses more on exploring the post popularity on a population of users and including comprehensive factors such as temporal information, user connections, number of comments, and so on. However, these frameworks are not suitable for guiding a specific user to make a popular post because the attributes of this user are fixed. Therefore, previous frameworks can only answer the question "whether a post is popular" rather than "how to become famous by popular posts". In this paper, we aim at predicting the popularity of a post for a specific user and mining the patterns behind the popularity. To this end, we first collect data from Instagram. We then design a method to figure out the user environment, representing the content that a specific user is very likely to post. Based on the relevant data, we devise a novel dual-attention model to incorporate image, caption, and user environment. The dual-attention model basically consists of two parts, explicit attention for image-caption pairs and implicit attention for user environment. A hierarchical structure is devised to concatenate the explicit attention part and implicit attention part. We conduct a series of experiments to validate the effectiveness of our model and investigate the factors that can influence the popularity. The classification results show that our model outperforms the baselines, and a statistical analysis identifies what kind of pictures or captions can help the user achieve a relatively high "likes" number. | How to Become Instagram Famous: Post Popularity Prediction with
Dual-Attention | 9,620 |
In modern election campaigns, political parties utilize social media to advertise their policies and candidates and to communicate with electorates. In Japan's latest general election in 2017, the 48th general election for the Lower House, social media, especially Twitter, was actively used. In this paper, we perform a detailed analysis of social graphs and of users who retweeted tweets of political parties during the election. Our aim is to obtain accurate information regarding the diffusion power of each party, rather than just the number of followers. The results indicate that a user following a user who follows a political party account tended to also follow that account. This does not increase diversity, because users who follow each other tend to share similar values. We also find that followers of a specific party frequently retweeted its tweets. However, since the users following those who follow a political party account are not diverse, political parties delivered their information only to a few politically detached users. | Information Diffusion Power of Political Party Twitter Accounts During Japan's 2017 Election | 9,621 |
One of the key aspects of United States democracy is free and fair elections that allow for a peaceful transfer of power from one President to the next. The 2016 US presidential election stands out due to suspected foreign influence before, during, and after the election. A significant portion of that suspected influence was carried out via social media. In this paper, we look specifically at 3,500 Facebook ads allegedly purchased by the Russian government. These ads were released on May 10, 2018 by the US Congress House Intelligence Committee. We analyzed the ads using natural language processing techniques to determine the textual and semantic features associated with the most effective ones. We clustered the ads over time into the various campaigns and the labeled parties associated with them. We also studied the effectiveness of ads on an individual, campaign, and party basis. The most effective ads tend to have less positive sentiment, focus on past events, and are more specific and personalized in nature. The more effective campaigns also show similar characteristics. The campaigns' duration and promotion of the ads suggest a desire to sow division rather than sway the election. | 'Senator, We Sell Ads': Analysis of the 2016 Russian Facebook Ads Campaign | 9,622 |
Fanfiction.net provides an informal learning space for young writers through distributed mentoring, networked giving and receiving of feedback. In this paper, we quantify the cumulative effect of feedback on lexical diversity for 1.5 million authors. | Reviews Matter: How Distributed Mentoring Predicts Lexical Diversity on
Fanfiction.net | 9,623 |
In a world where ideas flow freely between people across multiple platforms, we often find ourselves relying on others' information without an objective standard to judge whether those opinions are accurate. The present study tests an agreement-in-confidence hypothesis of advice perception, which holds that internal metacognitive evaluations of decision confidence play an important functional role in the perception and use of social information, such as peers' advice. We propose that confidence can be used, computationally, to estimate advisors' trustworthiness and advice reliability. Specifically, these processes are hypothesized to be particularly important in situations where objective feedback is absent or difficult to acquire. Here, we use a judge-advisor system paradigm to precisely manipulate the profiles of virtual advisors whose opinions are provided to participants performing a perceptual decision making task. We find that when advisors' and participants' judgments are independent, people are able to discriminate subtle advice features, like confidence calibration, whether or not objective feedback is available. However, when observers' judgments (and judgment errors) are correlated - as is the case in many social contexts - predictable distortions can be observed between feedback and feedback-free scenarios. A simple model of advice reliability estimation, endowed with metacognitive insight, is able to explain key patterns of results observed in the human data. We use agent-based modeling to explore implications of these individual-level decision strategies for network-level patterns of trust and belief formation. | The role of decision confidence in advice-taking and trust formation | 9,624 |
The growing popularity of social networks demands highly efficient Personalized PageRank (PPR) updating due to fast-evolving web graphs of enormous size. While current research focuses on PPR updating under link structure modification, efficiently updating PPR when node insertion/deletion is involved remains a challenge. In previous work on the Virtual Web (VW), several VW architectures were designed, yielding highly effective initializations that significantly accelerate PageRank updating under both link modification and page insertion/deletion. In this paper, we tackle the fast PPR updating problem under the general scenario of link modification and node insertion/deletion. Specifically, we combine VW with the TrackingPPR method to generate initializations, which are then used by the Gauss-Southwell method for fast PPR updating. The algorithm is named VWPPR. In extensive experiments, three real-world datasets are used that contain 1~5.6M nodes and 6.7M~129M links, while a node perturbation of 40k and a link perturbation of 1% are applied. Compared to the more recent LazyForwardUpdate method, which handles the general PPR updating problem, VWPPR is 3~6 times faster in terms of running time, or 4.4~10 times faster in terms of iteration numbers. | Virtual Web Based Personalized PageRank Updating | 9,625 |
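The abstract names the Gauss-Southwell method; the VWPPR initialization itself is not described there, but the underlying push step can be sketched as follows, where a warm start (p, r) carried over from before a graph change is what makes updating fast (the API and parameter names are illustrative assumptions, and the linear scan for the largest residual is kept simple rather than efficient):

```python
def ppr_gauss_southwell(adj, source, alpha=0.15, tol=1e-6, p=None, r=None):
    """Gauss-Southwell push for Personalized PageRank.

    adj: dict node -> list of out-neighbors (no dangling nodes assumed).
    p, r: optional warm-start estimate and residual from a previous run,
    which is how incremental updating after graph changes would reuse work.
    """
    if p is None:
        p = {u: 0.0 for u in adj}
    if r is None:
        r = {u: 0.0 for u in adj}
        r[source] = 1.0  # all probability mass starts as residual at the source
    while True:
        # Gauss-Southwell rule: always push the node with the largest residual.
        u = max(r, key=r.get)
        if r[u] <= tol:
            break
        res = r[u]
        r[u] = 0.0
        p[u] += alpha * res                      # keep the teleport share
        share = (1 - alpha) * res / len(adj[u])  # spread the rest to neighbors
        for v in adj[u]:
            r[v] += share
    return p, r
```

The invariant sum(p) + sum(r) = 1 holds throughout on graphs without dangling nodes, so the remaining residual mass bounds the approximation error.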
Preventing fake or duplicate digital identities (aka sybils) from joining a digital community may be crucial to its survival, especially if it utilizes a consensus protocol among its members or employs democratic governance, where sybils can undermine consensus, tilt decisions, or even take over. Here, we explore the use of a trust-graph of identities, with edges representing trust among identity owners, to allow a community to grow indefinitely without increasing its sybil penetration. Since identities are admitted to the digital community based on their trust by existing digital community members, corrupt identities, which may trust sybils, also pose a threat to the digital community. Sybils and their corrupt perpetrators are together referred to as byzantines, and the overarching aim is to limit their penetration into a digital community. We propose two alternative tools to achieve this goal. One is graph conductance, which works under the assumption that honest people are averse to corrupt ones and tend to distrust them. The second is vertex expansion, which relies on the assumption that there are not too many corrupt identities in the community. Of particular interest is keeping the fraction of byzantines below one third, as it would allow the use of Byzantine Agreement [15] for consensus as well as for sybil-resilient social choice [19]. This paper considers incrementally growing a trust graph and shows that, under its key assumptions and additional requirements, including keeping the conductance or vertex expansion of the community trust graph sufficiently high, a community may grow safely, indefinitely. | Building a Sybil-Resilient Digital Community Utilizing Trust-Graph
Connectivity | 9,626 |
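The two screening tools proposed above, graph conductance and vertex expansion, have short standard definitions. A sketch under those usual definitions (cut size over the smaller side's edge volume; boundary size over set size), assuming a dict-of-lists adjacency representation:

```python
def conductance(adj, community):
    """Conductance of a node set: cut edges / smaller side's edge volume."""
    s = set(community)
    cut = sum(1 for u in s for v in adj[u] if v not in s)
    vol_s = sum(len(adj[u]) for u in s)
    vol_rest = sum(len(adj[u]) for u in adj) - vol_s
    return cut / min(vol_s, vol_rest)

def vertex_expansion(adj, community):
    """Vertex expansion: outside neighbours of the set / set size."""
    s = set(community)
    boundary = {v for u in s for v in adj[u] if v not in s}
    return len(boundary) / len(s)
```

On two triangles joined by a single bridge edge, one triangle has conductance 1/7 and vertex expansion 1/3; keeping such values high for the community trust graph is the admission criterion the abstract describes.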
Recently, online social networks have become major battlegrounds for political campaigns, viral marketing, and the dissemination of news. As a consequence, ''bad actors'' are increasingly exploiting these platforms, becoming a key challenge for their administrators, businesses, and society in general. The spread of fake news is a classical example of the abuse of social networks by these actors. While some have advocated for stricter policies to control the spread of misinformation in social networks, this often happens to the detriment of their democratic and organic structure. In this paper we study how to limit the influence of a target set of users in a network via the removal of a few edges. The idea is to control the diffusion processes while minimizing the amount of disturbance to the network structure. We formulate the influence limitation problem in a data-driven fashion, by taking into account past propagation traces. Moreover, we consider two types of constraints over the set of edge removals: a budget constraint and also a more general set of matroid constraints. These problems lead to interesting challenges in terms of algorithm design. For instance, we are able to show that influence limitation is APX-hard and propose deterministic and probabilistic approximation algorithms for the budgeted and matroid versions of the problem, respectively. Our experiments show that the proposed solutions outperform the baselines by up to 40%. | Influence Minimization Under Budget and Matroid Constraints: Extended
Version | 9,627 |
Link prediction in complex networks has attracted considerable attention from interdisciplinary research communities, due to its ubiquitous applications in biological networks, social networks, transportation networks, telecommunication networks, and, recently, knowledge graphs. Numerous studies have utilized link prediction approaches in order to find missing links or predict the likelihood of future links, as well as for network reconstruction, recommender systems, privacy control, etc. This work presents an extensive review of state-of-the-art methods and algorithms proposed on this subject and categorizes them into four main categories: similarity-based methods, probabilistic methods, relational models, and learning-based methods. Additionally, a collection of network data sets has been presented in this paper, which can be used in order to study link prediction. We conclude this study with a discussion of recent developments and future research directions. | Review on Learning and Extracting Graph Features for Link Prediction | 9,628
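The similarity-based category surveyed above includes neighbourhood heuristics that are a few lines each. A sketch of three classic scores (common neighbours, Jaccard, Adamic-Adar), assuming a dict mapping each node to its set of neighbours:

```python
from math import log

def common_neighbors(adj, u, v):
    """Number of shared neighbours of u and v."""
    return len(adj[u] & adj[v])

def jaccard(adj, u, v):
    """Shared neighbours normalised by the size of the joint neighbourhood."""
    inter = len(adj[u] & adj[v])
    union = len(adj[u] | adj[v])
    return inter / union if union else 0.0

def adamic_adar(adj, u, v):
    """Common neighbours weighted inversely by the log of their degree;
    degree-1 neighbours are skipped to avoid dividing by log(1) = 0."""
    return sum(1.0 / log(len(adj[w])) for w in adj[u] & adj[v] if len(adj[w]) > 1)
```

Each score can be computed for every non-adjacent pair and ranked: the highest-scoring pairs are the predicted missing or future links.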
Many real-world systems can be modeled as networks of interacting agents. Analysis of these interactions can reveal fundamental properties of such networks. Estimating the amount of collaboration in a network corresponding to connections in a learning environment can reveal to what extent learners share their experience and knowledge with other learners. Alternatively, analyzing the network of interactions in an open source software project can manifest indicators showing the efficiency of collaborations. One central problem in such domains is the low cooperativity of networks due to the low cooperativity of their respective communities. Administrators should therefore not only understand and predict the cooperativity of networks but also evaluate their respective community structures. To approach this issue, in this paper, we address two domains: open source software projects and learning forums. We calculate the amount of cooperativity in the corresponding networks and communities of these domains by applying several community detection algorithms. Moreover, we investigate the community properties and identify the significant properties for estimating network and community cooperativity. Correspondingly, we identify to what extent various community detection algorithms affect the identification of significant properties and the prediction of cooperativity. We also build binary and regression prediction models using the community properties. Our results and constructed models can be used to infer the cooperativity of community structures from their respective properties. When highly defective structures are predicted in networks, administrators can look for useful drivers to increase collaboration. | Investigating Cooperativity of Overlapping Community Structures in
Social Networks | 9,629 |
People participate and are active in online social networks, and thus a tremendous amount of network data is generated: data regarding their interactions, interests and activities. Some people search for answers to specific questions through online social platforms such as forums, and they may receive a suitable response from experts. To categorize people as experts and to evaluate their willingness to cooperate, one can use ranking and cooperation problems from complex networks. In this paper, we investigate classical ranking algorithms alongside the prisoner's dilemma game to simulate cooperation and defection of agents. We compute the correlation between node rank and node cooperativity via three strategies. The first strategy operates at the node level, whereas the other strategies are calculated with respect to the neighborhood of nodes. We find correlations between specific ranking algorithms and the cooperativity of nodes. Our observations may be applied to estimate the propensity of people (experts) to cooperate in the future based on their ranking values. | Ranking and Cooperation in Real-World Complex Networks | 9,630
User identity linkage (UIL), the problem of matching user account across multiple online social networks (OSNs), is widely studied and important to many real-world applications. Most existing UIL solutions adopt a supervised or semi-supervised approach which generally suffer from scarcity of labeled data. In this paper, we propose Factoid Embedding, a novel framework that adopts an unsupervised approach. It is designed to cope with different profile attributes, content types and network links of different OSNs. The key idea is that each piece of information about a user identity describes the real identity owner, and thus distinguishes the owner from other users. We represent such a piece of information by a factoid and model it as a triplet consisting of user identity, predicate, and an object or another user identity. By embedding these factoids, we learn the user identity latent representations and link two user identities from different OSNs if they are close to each other in the user embedding space. Our Factoid Embedding algorithm is designed such that as we learn the embedding space, each embedded factoid is "translated" into a motion in the user embedding space to bring similar user identities closer, and different user identities further apart. Extensive experiments are conducted to evaluate Factoid Embedding on two real-world OSNs data sets. The experiment results show that Factoid Embedding outperforms the state-of-the-art methods even without training data. | Unsupervised User Identity Linkage via Factoid Embedding | 9,631 |
Geo-social data has been an attractive source for a variety of problems such as mining mobility patterns, link prediction, location recommendation, and influence maximization. However, new geo-social data is increasingly unavailable and suffers from several limitations. In this paper, we aim to remedy the problem of effective data extraction from geo-social data sources. We first identify and categorize the limitations of extracting geo-social data. In order to overcome the limitations, we propose a novel seed-driven approach that uses the points of one source as the seed to feed as queries for the others. We additionally handle differences between, and dynamics within, the sources by proposing three variants for optimizing the search radius. Furthermore, we provide an optimization based on recursive clustering to minimize the number of requests and an adaptive procedure to learn the specific data distribution of each source. Our comprehensive experiments with six popular sources show that our seed-driven approach yields 14.3 times more data overall, while our request-optimized algorithm retrieves up to 95% of the data with less than 16% of the requests. Thus, our proposed seed-driven approach sets new standards for effective and efficient extraction of geo-social data. | Seed-Driven Geo-Social Data Extraction -- Full Version | 9,632
Network embedding is a highly effective method to learn low-dimensional node vector representations with original network structures being well preserved. However, existing network embedding algorithms are mostly developed for a single network, which fail to learn generalized feature representations across different networks. In this paper, we study a cross-network node classification problem, which aims at leveraging the abundant labeled information from a source network to help classify the unlabeled nodes in a target network. To succeed in such a task, transferable features should be learned for nodes across different networks. To this end, a novel cross-network deep network embedding (CDNE) model is proposed to incorporate domain adaptation into deep network embedding so as to learn label-discriminative and network-invariant node vector representations. On one hand, CDNE leverages network structures to capture the proximities between nodes within a network, by mapping more strongly connected nodes to have more similar latent vector representations. On the other hand, node attributes and labels are leveraged to capture the proximities between nodes across different networks by making the same labeled nodes across networks have aligned latent vector representations. Extensive experiments have been conducted, demonstrating that the proposed CDNE model significantly outperforms the state-of-the-art network embedding algorithms in cross-network node classification. | Network Together: Node Classification via Cross network Deep Network
Embedding | 9,633 |
In recent years, the phenomenon of online misinformation and junk news circulating on social media has come to constitute an important and widespread problem affecting public life online across the globe, particularly around important political events such as elections. At the same time, there have been calls for more transparency around misinformation on social media platforms, as many of the most popular social media platforms function as "walled gardens," where it is impossible for researchers and the public to readily examine the scale and nature of misinformation activity as it unfolds on the platforms. In order to help address this, we present the Junk News Aggregator, a publicly available interactive web tool, which allows anyone to examine, in near real-time, all of the public content posted to Facebook by important junk news sources in the US. It allows the public to gain access to and examine the latest articles posted on Facebook (the most popular social media platform in the US and one where content is not readily accessible at scale from the open Web), as well as organise them by time, news publisher, and keywords of interest, and sort them based on all eight engagement metrics available on Facebook. Therefore, the Aggregator allows the public to gain insights on the volume, content, key themes, and types and volumes of engagement received by content posted by junk news publishers, in near real-time, hence opening up and offering transparency in these activities as they unfold, at scale across the top most popular junk news publishers. In this way, the Aggregator can help increase transparency around the nature, volume, and engagement with junk news on social media, and serve as a media literacy tool for the public. | The Junk News Aggregator: Examining junk news posted on Facebook,
starting with the 2018 US Midterm Elections | 9,634 |
Mobility entropy has been proposed to measure the predictability of human movements; based on it, upper and lower bounds of prediction accuracy have been deduced, but the corresponding mathematical expression of prediction accuracy remains open. In this work, we analyze and model prediction accuracy in terms of entropy, based on empirical results of a 2nd-order Markov chain model on a large-scale CDR data set, which demonstrate the observation that users with the same level of entropy achieve different levels of accuracy\cite{Empirical}. After dividing entropy into intervals, we fit the probability density distributions of accuracy in each entropy interval with a Gaussian distribution and then estimate the corresponding mean and standard deviation of these distributions. After observing that the parameters vary with increasing entropy, we model the relationship between the parameters and entropy using the least squares method. The mean can be modelled as a linear function of entropy, while the standard deviation can be modelled as a Gaussian function of entropy. Based on the above analysis, the probability density function of accuracy given entropy can thus be expressed by a functional Gaussian distribution. The insights from our work are a first step toward modelling the correlation between prediction accuracy and predictability entropy, thus shedding light on further work in this direction. | Functional Gaussian Distribution Modelling of Mobility Prediction
Accuracy for Wireless Users | 9,635 |
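The bin-then-fit procedure described in that abstract is easy to sketch. The function name and bin count below are illustrative, not from the paper: samples are binned by entropy, each bin's accuracy is summarized by a Gaussian (mean, standard deviation), and the per-bin means are then fitted with a least-squares line, the paper's linear model for the mean:

```python
from statistics import mean, stdev

def fit_accuracy_given_entropy(entropy, accuracy, n_bins=5):
    """Bin samples by entropy; per bin, model accuracy as Gaussian (mu, sigma).
    Then fit mu as a linear function of bin-centre entropy via least squares.
    Assumes every bin receives at least two samples."""
    lo, hi = min(entropy), max(entropy)
    width = (hi - lo) / n_bins or 1.0
    bins = [[] for _ in range(n_bins)]
    for e, a in zip(entropy, accuracy):
        i = min(int((e - lo) / width), n_bins - 1)   # clamp top edge into last bin
        bins[i].append(a)
    centres = [lo + (i + 0.5) * width for i in range(n_bins)]
    mus = [mean(b) for b in bins]
    sigmas = [stdev(b) for b in bins]
    # Least-squares line mu(H) = m*H + c over the bin centres.
    n = n_bins
    sx, sy = sum(centres), sum(mus)
    sxx = sum(x * x for x in centres)
    sxy = sum(x * y for x, y in zip(centres, mus))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - m * sx) / n
    return (m, c), list(zip(centres, mus, sigmas))
```

A separate least-squares fit of the per-bin sigmas (against a Gaussian-shaped function of entropy, as the abstract proposes) would complete the functional model.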
In this paper we present EvalNE, a Python toolbox for evaluating network embedding methods on link prediction tasks. Link prediction is one of the most popular choices for evaluating the quality of network embeddings. However, the complexity of this task requires a carefully designed evaluation pipeline in order to provide consistent, reproducible and comparable results. EvalNE simplifies this process by providing automation and abstraction of tasks such as hyper-parameter tuning and model validation, edge sampling and negative edge sampling, computation of edge embeddings from node embeddings, and evaluation metrics. The toolbox allows for the evaluation of any off-the-shelf embedding method without the need to write extra code. Moreover, it can also be used for evaluating any other link prediction method, and integrates several link prediction heuristics as baselines. | EvalNE: A Framework for Evaluating Network Embeddings on Link Prediction | 9,636 |
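The evaluation steps EvalNE automates (holding out edges, sampling negative edges, scoring, computing a metric) can be illustrated with a minimal hand-rolled pipeline. This sketch is not EvalNE's API; for brevity it also scores pairs against the full graph, whereas a real pipeline would retrain or re-score on the graph with the test edges removed:

```python
import random

def auc_link_prediction(adj, score, test_frac=0.2, seed=0):
    """Hold out a fraction of edges, sample an equal number of non-edges,
    and report AUC of score(u, v) on the held-out pairs.

    adj: dict node -> set of neighbours (undirected).
    score: callable returning a higher value for more likely links.
    """
    rng = random.Random(seed)
    edges = sorted({tuple(sorted((u, v))) for u in adj for v in adj[u]})
    rng.shuffle(edges)
    n_test = max(1, int(test_frac * len(edges)))
    pos = edges[:n_test]
    nodes = sorted(adj)
    neg = []
    while len(neg) < n_test:            # rejection-sample negative (non-)edges
        u, v = rng.sample(nodes, 2)
        if v not in adj[u]:
            neg.append((u, v))
    pos_scores = [score(u, v) for u, v in pos]
    neg_scores = [score(u, v) for u, v in neg]
    # AUC = probability that a random positive outscores a random negative.
    hits = sum(1 for p in pos_scores for q in neg_scores if p > q)
    ties = sum(0.5 for p in pos_scores for q in neg_scores if p == q)
    return (hits + ties) / (len(pos_scores) * len(neg_scores))
```

Any embedding-based or heuristic scorer can be plugged in as `score`, which is precisely the abstraction such a toolbox provides.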
User mobility prediction is widely considered to be helpful for various sorts of location-based services on mobile devices. A large number of studies have explored different algorithms to predict where a user will visit in the future based on their current and historical contexts and trajectories. Most of them focus on specific prediction targets, such as the next venue a user checks in to or the destination of her next trip, which usually depend on what the task is and what is available in the data. While successful stories are often reported, little discussion can be found on what happens if the prediction targets vary: whether coarser locations are easier to predict than finer locations, and whether predicting the immediate next location on the trajectory is easier than predicting the destination. On the other hand, among the signals commonly used in these prediction tasks, few studies have utilized fine-grained, on-device user behavioral data, which are supposed to be indicative of user intentions. In this paper, we conduct a systematic study on the problem of mobility prediction using a fine-grained real-world dataset. Based on a Markov model, a recurrent neural network, and a multi-modal learning method, we perform a series of experiments to investigate the predictability of prediction targets at different granularities and the effectiveness of different types of signals. The results provide many insights on what can be predicted and how, which sheds light on real-world mobility prediction in general. | A Systematic Analysis of Fine-Grained Human Mobility Prediction with
On-Device Contextual Data | 9,637 |
One of the hallmarks of a free and fair society is the ability to conduct a peaceful and seamless transfer of power from one leader to another. Democratically, this is measured in a citizen population's trust in the electoral system of choosing a representative government. In view of the well documented issues of the 2016 US Presidential election, we conducted an in-depth analysis of the 2018 US Midterm elections looking specifically for voter fraud or suppression. The Midterm election occurs in the middle of a 4 year presidential term. For the 2018 midterms, 35 senators and all the 435 seats in the House of Representatives were up for re-election, thus, every congressional district and practically every state had a federal election. In order to collect election related tweets, we analyzed Twitter during the month prior to, and the two weeks following, the November 6, 2018 election day. In a targeted analysis to detect statistical anomalies or election interference, we identified several biases that can lead to wrong conclusions. Specifically, we looked for divergence between actual voting outcomes and instances of the #ivoted hashtag on the election day. This analysis highlighted three states of concern: New York, California, and Texas. We repeated our analysis discarding malicious accounts, such as social bots. Upon further inspection and against a backdrop of collected general election-related tweets, we identified some confounding factors, such as population bias, or bot and political ideology inference, that can lead to false conclusions. We conclude by providing an in-depth discussion of the perils and challenges of using social media data to explore questions about election manipulation. | Perils and Challenges of Social Media and Election Manipulation
Analysis: The 2018 US Midterms | 9,638 |
Substance use and abuse is a significant public health problem in the United States. Group-based intervention programs offer a promising means of preventing and reducing substance abuse. While effective, unfortunately, inappropriate intervention groups can result in an increase in deviant behaviors among participants, a process known as deviancy training. This paper investigates the problem of optimizing the social influence related to the deviant behavior via careful construction of the intervention groups. We propose a Mixed Integer Optimization formulation that decides on the intervention groups, captures the impact of the groups on the structure of the social network, and models the impact of these changes on behavior propagation. In addition, we propose a scalable hybrid meta-heuristic algorithm that combines Mixed Integer Programming and Large Neighborhood Search to find near-optimal network partitions. Our algorithm is packaged in the form of GUIDE, an AI-based decision aid that recommends intervention groups. Being the first quantitative decision aid of this kind, GUIDE is able to assist practitioners, in particular social workers, in three key areas: (a) GUIDE proposes near-optimal solutions that are shown, via extensive simulations, to significantly improve over the traditional qualitative practices for forming intervention groups; (b) GUIDE is able to identify circumstances when an intervention will lead to deviancy training, thus saving time, money, and effort; (c) GUIDE can evaluate current strategies of group formation and discard strategies that will lead to deviancy training. In developing GUIDE, we are primarily interested in substance use interventions among homeless youth as a high risk and vulnerable population. GUIDE is developed in collaboration with Urban Peak, a homeless-youth serving organization in Denver, CO, and is under preparation for deployment. | Social Network Based Substance Abuse Prevention via Network Modification
(A Preliminary Study) | 9,639 |
Influence maximization is a prototypical problem enabling applications in various domains, and it has been extensively studied in the past decade. The classic influence maximization problem explores the strategies for deploying seed users before the start of the diffusion process such that the total influence can be maximized. In its adaptive version, seed nodes are allowed to be launched in an adaptive manner after observing certain diffusion results. In this paper, we provide a systematic study on the adaptive influence maximization problem, focusing on the algorithmic analysis of the scenarios when it is not adaptive submodular. We introduce the concept of regret ratio which characterizes the key trade-off in designing adaptive seeding strategies, based on which we present the approximation analysis for the well-known greedy policy. In addition, we provide analysis concerning improving the efficiencies and bounding the regret ratio. Finally, we propose several future research directions. | Adaptive Influence Maximization under General Feedback Models | 9,640 |
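The "well-known greedy policy" analyzed above is, in its classic non-adaptive form, a loop that repeatedly adds the node with the largest estimated marginal gain in expected spread. A sketch using Monte-Carlo estimates under the independent cascade model (the paper's adaptive feedback and regret-ratio analysis are not reproduced; parameter names are illustrative):

```python
import random

def spread(adj, seeds, p, rng):
    """One independent-cascade simulation; returns the number of activated nodes.
    Each newly active node tries once to activate each neighbour with prob. p."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_seeds(adj, k, p=0.1, runs=200, seed=0):
    """Non-adaptive greedy policy: at each step add the node with the largest
    Monte-Carlo estimate of expected spread together with the chosen set."""
    rng = random.Random(seed)
    chosen = []
    for _ in range(k):
        best, best_gain = None, -1.0
        for u in adj:
            if u in chosen:
                continue
            gain = sum(spread(adj, chosen + [u], p, rng) for _ in range(runs)) / runs
            if gain > best_gain:
                best, best_gain = u, gain
        chosen.append(best)
    return chosen
```

In the adaptive setting the same selection step would be re-run after observing each partial diffusion, which is exactly where the regret ratio in the abstract enters.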
Understanding patronage networks in Chinese Bureaucracy helps us quantify the promotion mechanism underlying autocratic political systems. Although there are qualitative studies analyzing political promotions, few use quantitative methods to model promotions and make inferences on the fitted mathematical model. Using publicly available datasets, we implement network analysis techniques to advance scholarly understanding of patronage networks in autocratic regimes, using the Chinese bureaucracy as an example. Using graph-based and non-graph-based features, we design three studies to examine drivers of political promotions. We find that careers of politicians are closely associated with their genders, home origins, and positions in the patronage networks. | Uncovering Political Promotion in China: A Network Analysis of Patronage
Relationship in Autocracy | 9,641 |
The ease of use of the Internet has enabled violent extremists such as the Islamic State of Iraq and Syria (ISIS) to easily reach a large audience, build personal relationships, and increase recruitment. Social media platforms primarily rely on the reports they receive from their own users to mitigate the problem. Despite the efforts of social media in suspending many accounts, this solution is not guaranteed to be effective, because not all extremists are caught this way, and they can simply return with another account or migrate to other social networks. In this paper, we design an automatic detection scheme that, using as few as three groups of information related to usernames, profiles, and textual content of users, determines whether or not a given username belongs to an extremist user. We first demonstrate that extremists are inclined to adopt usernames that are similar to the ones that like-minded users have adopted in the past. We then propose a detection framework that deploys features which are highly indicative of potential online extremism. Results on a real-world ISIS-related dataset from Twitter demonstrate the effectiveness of the methodology in identifying extremist users. | Detection of Violent Extremists in Social Media | 9,642
Community or modular structure is considered to be a significant property of large scale real-world graphs such as social or information networks. Detecting influential clusters or communities in these graphs is a problem of considerable interest as it often accounts for the functionality of the system. We aim to provide a thorough exposition of the topic, including the main elements of the problem, a brief introduction of the existing research for both disjoint and overlapping community search, the idea of influential communities, its implications and the current state of the art and finally provide some insight on possible directions for future research. | Literature Survey on Finding Influential Communities in Large Scale
Networks | 9,643 |
Over the past years, political events and public opinion on the Web have been allegedly manipulated by accounts dedicated to spreading disinformation and performing malicious activities on social media. These accounts, hereafter referred to as "Pathogenic Social Media (PSM)" accounts, are often controlled by terrorist supporters, water armies, or fake news writers and hence can pose threats to social media and the general public. Understanding and analyzing PSMs could help social media firms devise sophisticated and automated techniques that could be deployed to stop them from reaching their audience and consequently reduce their threat. In this paper, we leverage the well-known statistical technique of the "Hawkes Process" to quantify the influence of PSM accounts on the dissemination of malicious information on social media platforms. Our findings on a real-world ISIS-related dataset from Twitter indicate that PSMs are significantly different from regular users in making a message viral. Specifically, we observed that PSMs do not usually post URLs from mainstream news sources. Instead, their tweets usually have a large impact on the audience if they contain URLs from Facebook and alternative news outlets. In contrast, tweets posted by regular users receive nearly equal impressions regardless of the posted URLs and their sources. Our findings can further shed light on understanding and detecting PSM accounts. | Hawkes Process for Understanding the Influence of Pathogenic Social
Media Accounts | 9,644 |
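The Hawkes process used above is a self-exciting point process: each event temporarily raises the intensity of future events. A minimal sketch of the standard exponential-kernel intensity and Ogata's thinning sampler (a generic illustration, not the paper's multivariate fitting procedure; parameter values are arbitrary):

```python
import random
from math import exp

def hawkes_intensity(t, events, mu, alpha, beta):
    """Conditional intensity of a univariate Hawkes process with exponential
    kernel: lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i))."""
    return mu + sum(alpha * exp(-beta * (t - ti)) for ti in events if ti < t)

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Sample event times on [0, horizon) via Ogata's thinning algorithm.
    Requires alpha/beta < 1 for the process to be stationary."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while t < horizon:
        # Intensity just after t upper-bounds the (decaying) intensity
        # until the next event, so it is a valid thinning bound.
        lam_bar = mu + sum(alpha * exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)
        if t < horizon and rng.random() * lam_bar <= hawkes_intensity(t, events, mu, alpha, beta):
            events.append(t)
    return events
```

Fitting `mu`, `alpha`, `beta` to observed tweet timestamps (e.g. by maximizing the Hawkes log-likelihood) is what lets one quantify how much one account's activity excites others, which is the influence measure the abstract describes.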
Recent research brought awareness of the issue of bots on social media and the significant risks of mass manipulation of public opinion in the context of political discussion. In this work, we leverage Twitter to study the discourse during the 2018 US midterm elections and analyze social bot activity and interactions with humans. We collected 2.6 million tweets for 42 days around the election day from nearly 1 million users. We use the collected tweets to answer three research questions: (i) Do social bots lean and behave according to a political ideology? (ii) Can we observe different strategies among liberal and conservative bots? (iii) How effective are bot strategies? We show that social bots can be accurately classified according to their political leaning and behave accordingly. Conservative bots share most of the topics of discussion with their human counterparts, while liberal bots show less overlap and a more inflammatory attitude. We studied bot interactions with humans and observed different strategies. Finally, we measured bots embeddedness in the social network and the effectiveness of their activities. Results show that conservative bots are more deeply embedded in the social network and more effective than liberal bots at exerting influence on humans. | Red Bots Do It Better: Comparative Analysis of Social Bot Partisan
Behavior | 9,645 |
Network alignment aims at inferring a set of anchor links matching the shared entities between different information networks, which has become a prerequisite step for the effective fusion of multiple information networks. In this paper, we study the network alignment problem specifically for fusing online social networks. Social network alignment is extremely challenging to address due to several reasons, i.e., lack of training data, network heterogeneity and the one-to-one constraint. Existing network alignment works usually require a large amount of training data, but such a demand can hardly be met in applications, as manual anchor link labeling is extremely expensive. Significantly different from other homogeneous network alignment works, information in online social networks is usually of heterogeneous categories, the incorporation of which in model building is not an easy task. Furthermore, the one-to-one cardinality constraint on anchor links renders their inference processes tightly interdependent. To resolve these three challenges, a novel network alignment model, namely ActiveIter, is introduced in this paper. ActiveIter defines a set of inter-network meta diagrams for anchor link feature extraction, adopts active learning for effective label query and uses greedy link selection for anchor link cardinality filtering. Extensive experiments are conducted on real-world aligned network datasets, and the experimental results have demonstrated the effectiveness of ActiveIter compared with other state-of-the-art baseline methods. | Meta Diagram based Active Social Networks Alignment | 9,646
During the summer of 2018, Facebook, Google, and Twitter created policies and implemented transparent archives that include U.S. political advertisements which ran on their platforms. Through our analysis of over 1.3 million ads with political content, we show how different types of political advertisers are disseminating U.S. political messages using Facebook, Google, and Twitter's advertising platforms. We find that in total, ads with political content included in these archives have generated between 8.67 billion - 33.8 billion impressions and that sponsors have spent over $300 million USD on advertising with U.S. political content. We are able to improve our understanding of political advertisers on these platforms. We have also discovered a significant amount of advertising by quasi for-profit media companies that appeared to exist for the sole purpose of creating deceptive online communities focused on spreading political messaging and not for directly generating profits. Advertising by such groups is a relatively recent phenomenon, and appears to be thriving on online platforms due to the lower regulatory requirements compared to traditional advertising platforms. We have found through our attempts to collect and analyze this data that there are many limitations and weaknesses that enable intentional or accidental deception and bypassing of the current implementations of these transparency archives. We provide several suggestions for how these archives could be made more robust and useful. Overall, these efforts by Facebook, Google, and Twitter have improved political advertising transparency of honest and, in some cases, possibly dishonest advertisers on their platforms. We thank the people at these companies who have built these archives and continue to improve them. | An Analysis of United States Online Political Advertising Transparency | 9,647 |
Social coding platforms, such as GitHub, can serve as natural laboratories for studying the diffusion of innovation through tracking the pattern of code adoption by programmers. This paper focuses on the problem of predicting the popularity of software repositories over time; our aim is to forecast the time series of popularity-related events (code forks and watches). In particular, we are interested in cross-repository patterns: how do events on one repository affect other repositories? Our proposed LSTM (Long Short-Term Memory) recurrent neural network integrates events across multiple active repositories, outperforming a standard ARIMA (Auto-Regressive Integrated Moving Average) time series prediction based on a single repository. The ability of the LSTM to leverage cross-repository information gives it a significant edge over standard time series forecasting. | A Cross-Repository Model for Predicting Popularity in GitHub | 9,648
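As a minimal stand-in for the autoregressive side of that comparison (not the paper's LSTM or full ARIMA model), a single-repository AR(1) baseline can be fitted in closed form by least squares and rolled forward:

```python
def fit_ar1(series):
    """Least-squares fit of y[t] = a * y[t-1] + b, a minimal forecasting baseline."""
    x, y = series[:-1], series[1:]
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def forecast(series, steps, a, b):
    """Roll the fitted recurrence forward for `steps` steps."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = a * last + b
        out.append(last)
    return out
```

A cross-repository model such as the paper's LSTM goes beyond this by conditioning each repository's forecast on event counts from other repositories as well, which a per-repository recurrence like AR(1) cannot capture.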
In this paper we introduce the concept of network semantic segmentation for social network analysis. We consider the GitHub social coding network which has been a center of attention for both researchers and software developers. Network semantic segmentation describes the process of associating each user with a class label such as a topic of interest. We augment node attributes with network significant connections and then employ machine learning approaches to cluster the users. We compare the results with a network segmentation performed using community detection algorithms and one executed by clustering with node attributes. Results are compared in terms of community diversity within the semantic segments along with topic | Network Semantic Segmentation with Application to GitHub | 9,649 |
Social sciences were built from comparison methods assembling fieldwork and data, either quantitative or qualitative. Big Data offers new opportunities to extend this requirement to build commensurable data sets. The paper tells the story of the two previous quantification eras (census and polls) in order to demonstrate the need for a new agency to be considered as the target of this new generation of social sciences: that of objects, as Actor Network Theory proposed, and of replications that propagate all over the digital networks. The case study of Latour's topofil of Boa Vista is revisited to explore how a qualitative method dedicated to comparison and an Actor Network Theory approach extended in a replication theory may offer new insights from any field study and may use the digital resources to do so. | Replications in quantitative and qualitative methods: a new era for
commensurable digital social sciences | 9,650 |
The proliferation of smart mobile devices has spurred an explosive growth of mobile crowd-learning services, where service providers rely on the user community to voluntarily collect, report, and share real-time information for a collection of scattered points of interest (PoIs). A critical factor affecting the future large-scale adoption of such mobile crowd-learning applications is the freshness of the crowd-learned information, which can be measured by a metric termed "age-of-information" (AoI). However, we show that the AoI of mobile crowd-learning could be arbitrarily bad under selfish users' behaviors if the system is poorly designed. This motivates us to design efficient reward mechanisms to incentivize mobile users to report information in time, with the goal of keeping the AoI and congestion level of each PoI low. Toward this end, we consider a simple linear AoI-based reward mechanism and analyze its AoI and congestion performances in terms of price of anarchy (PoA), which characterizes the degradation of the system efficiency due to selfish behavior of users. Remarkably, we show that the proposed mechanism achieves the optimal AoI performance asymptotically in a deterministic scenario. Further, we prove that the proposed mechanism achieves a bounded PoA in general stochastic cases, and the bound only depends on system parameters. Particularly, when the service rates of PoIs are symmetric in stochastic cases, the achieved PoA is upper-bounded by $1/2$ asymptotically. Collectively, this work advances our understanding of information freshness in mobile crowd-learning systems. | Can We Achieve Fresh Information with Selfish Users in Mobile
Crowd-Learning? | 9,651 |
This paper utilizes a mixture of qualitative, formal, and statistical socio-semantic network analyses to examine how cultural homophily works when field logic meets practice. On the one hand, because individuals in similar field positions are also imposed with similar cultural orientations, cultural homophily reproduces objective field structure in intersubjective social network ties. On the other hand, fields are operative in practice, and to accomplish pragmatic goals individuals who occupy different field positions often join in groups, creatively reinterpret the field-imposed cultural orientations, and produce cultural similarities alternative to the position-specific ones. Drawing on these emergent similarities, the cultural homophily mechanism might stimulate social network ties between members who occupy not the same but different field positions, thus contesting fields. I examine this ambivalent role of cultural homophily in two creative collectives, each embracing members positioned closer to the opposite poles of the field of cultural production. I find different types of cultural similarities to affect different types of social network ties within and between the field positions: Similarity of vocabularies stimulates friendship and collaboration ties within positions, thus reproducing the field, while affiliation with the same cultural structures stimulates collaboration ties between positions, thus contesting the field. The latter effect is visible under statistical analysis of ethnographic data, but easy to overlook in qualitative analysis of texts because informants tend to flag conformity to their positions in their explicit statements. This highlights the importance of mixed socio-semantic network analysis, both sensitive to the local context and capable of unveiling the mechanisms underlying the interplay between the cultural and the social. | The Ambivalence of Cultural Homophily: Field Positions, Semantic
Similarities, and Social Network Ties in Creative Collectives | 9,652 |
With the exponential increase in the availability of telemetry / streaming / real-time data, understanding contextual behavior changes is a vital functionality in order to deliver unrivalled customer experience and build high-performance and high-availability systems. Real-time behavior change detection finds a use case in a number of domains such as social networks, network traffic monitoring, ad exchange metrics, etc. In streaming data, a behavior change is an implausible observation that does not fit in with the distribution of the rest of the data. A timely and precise revelation of such behavior changes can give us substantial information about the system in critical situations, which can be a driving factor for vital decisions. Detecting behavior changes in streaming fashion is a difficult task, as the system needs to process high-speed real-time data and continuously learn from it while detecting anomalies in a single pass of the data. In this paper we introduce a novel algorithm called Accountable Behavior Change Detection (VEDAR), which can detect and elucidate behavior changes in real time and operates in a fashion similar to human perception. We have benchmarked our algorithm on open-source anomaly detection datasets by comparing its performance against industry-standard algorithms like Numenta HTM and Twitter AdVec (SH-ESD). Our algorithm outperforms the above-mentioned algorithms for behavior change detection; efficacy is given in Section V. | VEDAR: Accountable Behavioural Change Detection | 9,653
Modelling relationships between entities in real-world systems with a simple graph is a standard approach. However, reality is better embraced as several interdependent subsystems (or layers). Recently the concept of a multilayer network model has emerged from the field of complex systems. This model can be applied to a wide range of real-world datasets. Examples of multilayer networks can be found in the domains of life sciences, sociology, digital humanities and more. Within the domain of graph visualization there are many systems which visualize datasets having many characteristics of multilayer graphs. This report provides a state of the art and a structured analysis of contemporary multilayer network visualization, not only for researchers in visualization, but also for those who aim to visualize multilayer networks in the domain of complex systems, as well as those developing systems across application domains. We have explored the visualization literature to survey visualization techniques suitable for multilayer graph visualization, as well as tools, tasks, and analytic techniques from within application domains. This report also identifies the outstanding challenges for multilayer graph visualization and suggests future research directions for addressing them. | The State of the Art in Multilayer Network Visualization | 9,654 |
Online social networks have become the medium for efficient viral marketing exploiting social influence in information diffusion. However, the emerging application Social Coupon (SC), incorporating social referral into coupons, cannot be efficiently solved by previous research, which does not take into account the effect of SC allocation. The number of allocated SCs restricts the number of influenced friends for each user. In this paper, we investigate not only the seed selection problem but also the effect of SC allocation for optimizing the redemption rate, which represents the efficiency of SC allocation. Accordingly, we formulate a problem named Seed Selection and SC allocation for Redemption Maximization (S3CRM) and prove the hardness of S3CRM. We design an effective algorithm with a performance guarantee, called the Seed Selection and Social Coupon allocation algorithm. For S3CRM, we introduce the notion of marginal redemption to evaluate the efficiency of investment in seeds and SCs. Moreover, for a balanced investment, we develop a new graph structure called the guaranteed path, to explore the opportunity to optimize the redemption rate. Finally, we perform a comprehensive evaluation on our proposed algorithm with various baselines. The results validate our ideas and show the effectiveness of the proposed algorithm over baselines. | Seed Selection and Social Coupon Allocation for Redemption Maximization
in Online Social Networks | 9,655 |
Online social networks are used to diffuse opinions and ideas among users, enabling a faster communication and a wider audience. The way in which opinions are conditioned by social interactions is usually called social influence. Social influence is extensively used during political campaigns to advertise and support candidates. Herein we consider the problem of exploiting social influence in a network of voters in order to change their opinion about a target candidate with the aim of increasing his chance to win/lose the election in a wide range of voting systems. We introduce the Linear Threshold Ranking, a natural and powerful extension of the well-established Linear Threshold Model, which describes the change of opinions taking into account the amount of exercised influence. We are able to maximize the score of a target candidate up to a factor of $1-1/e$ by showing submodularity. We exploit such property to provide a $\frac{1}{3}(1-1/e)$-approximation algorithm for the constructive election control problem. Similarly, we get a $\frac{1}{2}(1-1/e)$-approximation ratio in the destructive scenario. The algorithm can be used in arbitrary scoring rule voting systems, including the plurality rule and Borda count. Finally, we perform an experimental study on real-world networks, measuring Probability of Victory (PoV) and Margin of Victory (MoV) of the target candidate, to validate the model and to test the capability of the algorithm. | Exploiting Social Influence to Control Elections Based on Scoring Rules | 9,656
It is a recurring problem of online communication that the properties of unknown people are hard to assess. This may lead to various issues such as the spread of `fake news' from untrustworthy sources. In sociology the sum of (social) resources available to a person through their social network is often described as social capital. In this article, we look at social capital from a different angle. Instead of evaluating the advantage that people have because of their membership in a certain group, we investigate various ways to infer the social capital a person adds or may add to the network, their contributive social capital (CSC). As there is no consensus in the literature on what the social capital of a person exactly consists of, we look at various related properties: expertise, reputation, trustworthiness, and influence. The analysis of these features is investigated for five different sources of online data: microblogging (e.g., Twitter), social networking platforms (e.g., Facebook), direct communication (e.g., email), scientometrics, and threaded discussion boards (e.g., Reddit). In each field we discuss recent publications and put a focus on the data sources used, the algorithms implemented, and the performance evaluation. The findings are compared and set in context to contributive social capital extraction. The analysis algorithms are based on individual features (e.g., followers on Twitter), ratios thereof, or a person's centrality measures (e.g., PageRank). The machine learning approaches, such as straightforward classifiers (e.g., support vector machines) use ground truths that are connected to social capital. The discussion of these methods is intended to facilitate research on the topic by identifying relevant data sources and the best suited algorithms, and by providing tested methods for the evaluation of findings. | Contributive Social Capital Extraction From Different Types of Online
Data Sources | 9,657 |
Tweets about everyday events are published on Twitter. Detecting such events is a challenging task due to the diverse and noisy contents of Twitter. In this paper, we propose a novel approach named Weighted Dynamic Heartbeat Graph (WDHG) to detect events from the Twitter stream. Once an event is detected in a Twitter stream, WDHG suppresses it in later stages, in order to detect new emerging events. This unique characteristic makes the proposed approach sensitive to capture emerging events efficiently. Experiments are performed on three real-life benchmark datasets: FA Cup Final 2012, Super Tuesday 2012, and the US Elections 2012. Results show considerable improvement over existing event detection methods in most cases. | Event Detection in Twitter Stream using Weighted Dynamic Heartbeat Graph
Approach | 9,658 |
Whether we recognize it or not, the Internet is rife with exciting and original institutional forms that are transforming social organization on and offline. Issues of governance in these Internet platforms and other digital institutions have posed a challenge for software engineers, many of whom have little exposure to the relevant history or theory of institutional design. Here, we offer one useful framework with an aim to stimulate dialogue between computer scientists and political scientists. The dominant guiding practices for the design of digital institutions to date in human-computer interaction, computer-supported cooperative work, and the tech industry at large have been an incentive-focused behavioral engineering paradigm, a collection of atheoretical approaches such as A/B-testing, and incremental issue-driven software engineering. One institutional analysis framework that has been useful in the design of traditional institutions is the body of resource governance literature known as the "Ostrom Workshop". A key finding of this literature that has yet to be broadly incorporated in the design of many digital institutions is the importance of including participatory change process mechanisms in what is called a "constitutional layer" of institutional design---in other words, defining rules that allow and facilitate diverse stakeholder participation in the ongoing process of institutional design change. We explore to what extent this consideration is met or could be better met in three varied cases of digital institutions: cryptocurrencies, cannabis informatics, and amateur Minecraft server governance. Examining such highly varied cases allows us to demonstrate the broad relevance of constitutional layers in many different types of digital institutions. | Designing for Participation and Change in Digital Institutions | 9,659 |
User identity linkage across online social networks is an emerging research topic that has attracted attention in recent years. Many user identity linkage methods have been proposed so far and most of them utilize user profile, content and network information to determine if two social media accounts belong to the same person. In most cases, user identity linkage methods are evaluated by performing some prediction tasks with the results presented using some overall accuracy measures. However, the methods are rarely compared at the individual user level where a predicted matched (or linked) pair of user identities from different online social networks can be visually compared in terms of user profile (e.g. username), content and network information. Such a comparison is critical to determine the relative strengths and weaknesses of each method. In this work, we present Linky, a visual analytical tool which extracts the results from different user identity linkage methods performed on multiple online social networks and visualizes the user profiles, content and ego networks of the linked user identities. Linky is designed to help researchers to (a) inspect the linked user identities at the individual user level, (b) compare results returned by different user linkage methods, and (c) provide a preliminary empirical understanding on which aspects of the user identities, e.g. profile, content or network, contributed to the user identity linkage results. | Linky: Visualizing User Identity Linkage Results For Multiple Online
Social Networks | 9,660 |
Information diffusion is usually modeled as a process in which immutable pieces of information propagate over a network. In reality, however, messages are not immutable, but may be morphed with every step, potentially entailing large cumulative distortions. This process may lead to misinformation even in the absence of malevolent actors, and understanding it is crucial for modeling and improving online information systems. Here, we perform a controlled, crowdsourced experiment in which we simulate the propagation of information from medical research papers. Starting from the original abstracts, crowd workers iteratively shorten previously produced summaries to increasingly smaller lengths. We also collect control summaries where the original abstract is compressed directly to the final target length. Comparing cascades to controls allows us to separate the effect of the length constraint from that of accumulated distortion. Via careful manual coding, we annotate lexical and semantic units in the medical abstracts and track them along cascades. We find that iterative summarization has a negative impact due to the accumulation of error, but that high-quality intermediate summaries result in less distorted messages than in the control case. Different types of information behave differently; in particular, the conclusion of a medical abstract (i.e., its key message) is distorted most. Finally, we compare abstractive with extractive summaries, finding that the latter are less prone to semantic distortion. Overall, this work is a first step in studying information cascades without the assumption that disseminated content is immutable, with implications on our understanding of the role of word-of-mouth effects on the misreporting of science. | Message Distortion in Information Cascades | 9,661 |
Mobile Social Networks (MSNs) have been evolving and enabling various fields in recent years. Recent advances in mobile edge computing, caching, and device-to-device communications, can have significant impacts on 5G systems. In those settings, identifying central users is crucial. It can provide important insights into designing and deploying diverse services and applications. However, it is challenging to evaluate the centrality of nodes in MSNs with dynamic environments. In this paper, we propose a Social-Relation based Centrality (SoReC) measure, in which social network information is used to quantify the influence of each user in MSNs. We first introduce a new metric to estimate direct social relations among users via direct contacts, and then extend the metric to explore indirect social relations among users bridging by the third parties. Based on direct and indirect social relations, we detect the influence spheres of users and quantify their influence in the networks. Simulations on real-world networks show that the proposed measure can perform well in identifying future influential users in MSNs. | SoReC: A Social-Relation Based Centrality Measure in Mobile Social
Networks | 9,662 |
Disengagement and disenchantment with the Parliamentary process is an important concern in today's Western democracies. Members of Parliament (MPs) in the UK are therefore seeking new ways to engage with citizens, including being on digital platforms such as Twitter. In recent years, nearly all (579 out of 650) MPs have created Twitter accounts, and have amassed huge followings comparable to a sizable fraction of the country's population. This paper seeks to shed light on this phenomenon by examining the volume and nature of the interaction between MPs and citizens. We find that although there is an information overload on MPs, attention on individual MPs is focused during small time windows when something topical may be happening relating to them. MPs manage their interaction strategically, replying selectively to UK-based citizens and thereby serving in their role as elected representatives, and using retweets to spread their party's message. Most promisingly, we find that Twitter opens up new avenues with substantial volumes of cross-party interaction, between MPs of one party and citizens who support (follow) MPs of other parties. | Tweeting MPs: Digital Engagement between Citizens and Members of
Parliament in the UK | 9,663 |
The concept of 'fake news' has been referenced and thrown around in news reports so much in recent years that it has become a news topic in its own right. At its core, it poses a chilling question: what do we do if our worldview is fundamentally wrong? Even if internally consistent, what if it does not match the real world? Are our beliefs justified, or could we become indoctrinated from living in a 'bubble'? If the latter is true, how could we even test the limits of said bubble from within its confines? We propose a new method to augment the process of identifying fake news, by speeding up and automating the more cumbersome and time-consuming tasks involved. Our application, NewsCompare, takes any list of target websites as input (news-related in our use case, but otherwise not restricted), visits them in parallel and retrieves any text content found within. Web pages are subsequently compared to each other, and similarities are tentatively pointed out. These results can be manually verified in order to determine which websites tend to draw inspiration from one another. The data gathered on every intermediate step can be queried and analyzed separately, and most notably we already use the set of hyperlinks to and from the various websites we encounter to paint a sort of 'map' of that particular slice of the web. This map can then be cross-referenced and further strengthen the conclusion that a particular grouping of sites with strong links to each other, and posting similar content, are likely to share the same allegiance. We run our application on Romanian news websites and draw several interesting observations. | NewsCompare - a novel application for detecting news influence in a
country | 9,664 |
We study the structure of heterosexual dating markets in the United States through an analysis of the interactions of several million users of a large online dating web site, applying recently developed network analysis methods to the pattern of messages exchanged among users. Our analysis shows that the strongest driver of romantic interaction at the national level is simple geographic proximity, but at the local level other demographic factors come into play. We find that dating markets in each city are partitioned into submarkets along lines of age and ethnicity. Sex ratio varies widely between submarkets, with younger submarkets having more men and fewer women than older ones. There is also a noticeable tendency for minorities, especially women, to be younger than the average in older submarkets, and our analysis reveals how this kind of racial stratification arises through the messaging decisions of both men and women. Our study illustrates how network techniques applied to online interactions can reveal the aggregate effects of individual behavior on social structure. | Structure of online dating markets in US cities | 9,665 |
In the field of social networking services, finding similar users based on profile data is common practice. Smartphones harbor sensor and personal context data that can be used for user profiling. Yet, one vast source of personal data, that is, text messaging data, has hardly been studied for user profiling. We see three reasons for this: First, private text messaging data is not shared due to its intimate character. Second, the definition of an appropriate privacy-preserving similarity measure is non-trivial. Third, assessing the quality of a similarity measure on text messaging data representing a potentially infinite set of topics is non-trivial. In order to overcome these obstacles we propose affinity, a system that assesses the similarity between text messaging histories of users reliably and efficiently in a privacy-preserving manner. Private texting data stays on user devices, and data is compared in a latent format that allows reconstruction of neither the comparison words nor any original private plain text. We evaluate our approach by calculating similarities between Twitter histories of 60 US senators. The resulting similarity network reaches an average 85.0% accuracy on a political party classification task. | affinity: A System for Latent User Similarity Comparison on Texting Data | 9,666
We present a highly effective unsupervised framework for detecting the stance of prolific Twitter users with respect to controversial topics. In particular, we use dimensionality reduction to project users onto a low-dimensional space, followed by clustering, which allows us to find core users that are representative of the different stances. Our framework has three major advantages over pre-existing methods, which are based on supervised or semi-supervised classification. First, we do not require any prior labeling of users: instead, we create clusters, which are much easier to label manually afterwards, e.g., in a matter of seconds or minutes instead of hours. Second, there is no need for domain- or topic-level knowledge either to specify the relevant stances (labels) or to conduct the actual labeling. Third, our framework is robust in the face of data skewness, e.g., when some users or some stances have greater representation in the data. We experiment with different combinations of user similarity features, dataset sizes, dimensionality reduction methods, and clustering algorithms to ascertain the most effective and most computationally efficient combinations across three different datasets (in English and Turkish). We further verified our results on additional tweet sets covering six different controversial topics. Our best combination in terms of effectiveness and efficiency uses retweeted accounts as features, UMAP for dimensionality reduction, and Mean Shift for clustering, and yields a small number of high-quality user clusters, typically just 2-3, with more than 98% purity. The resulting user clusters can be used to train downstream classifiers. Moreover, our framework is robust to variations in the hyper-parameter values and also with respect to random initialization. | Unsupervised User Stance Detection on Twitter | 9,667
As the problem of drug abuse intensifies in the U.S., many studies that primarily utilize social media data, such as postings on Twitter, to study drug abuse-related activities use machine learning as a powerful tool for text classification and filtering. However, given the wide range of topics of Twitter users, tweets related to drug abuse are rare in most of the datasets. This imbalanced data remains a major issue in building effective tweet classifiers, and is especially obvious for studies that include abuse-related slang terms. In this study, we approach this problem by designing an ensemble deep learning model that leverages both word-level and character-level features to classify abuse-related tweets. Experiments are reported on a Twitter dataset, where we can configure the percentages of the two classes (abuse vs. non-abuse) to simulate the data imbalance with different amplitudes. Results show that our ensemble deep learning models exhibit better performance than ensembles of traditional machine learning models, especially on heavily imbalanced datasets. | An Ensemble Deep Learning Model for Drug Abuse Detection in Sparse
Twitter-Sphere | 9,668 |
As one of the classic models that describe the belief dynamics over social networks, a non-Bayesian social learning model assumes that members in the network possess accurate signal knowledge through the process of Bayesian inference. In order to make the non-Bayesian social learning model more applicable to human and animal societies, this paper extended this model by assuming the existence of private signal structure bias. Each social member in each time step uses an imperfect signal knowledge to form its Bayesian part belief and then incorporates its neighbors' beliefs into this Bayesian part belief to form a new belief report. First, we investigated the intrinsic learning ability of an isolated agent and deduced the conditions that the signal structure needs to satisfy for this isolated agent to make an eventually correct decision. According to these conditions, agents' signal structures were further divided into three different types, "conservative," "radical," and "negative." Then, we switched the context from isolated agents to a connected network; our propositions and simulations show that the conservative agents are the dominant force for the social network to learn the real state, while the other two types might prevent the network from successful learning. Although fragilities do exist in non-Bayesian social learning mechanism, "be more conservative" and "avoid overconfidence" could be effective strategies for each agent in the real social networks to collectively improve social learning processes and results. | Non-Bayesian Social Learning with Imperfect Private Signal Structure | 9,669 |
The purpose of this research is to study the possibility of identifying students statistically by analyzing their behavior in different consecutive activities. In this project, there are three different sorts of activities: animated examples, basic examples, and parameterized exercises. We extracted the behavior of each student from the activity logs of the Mastery Grids platform. Additionally, we investigate, using unsupervised learning techniques, whether there are common patterns that students share while performing these activities. We conclude that we are able to identify students from their behavior, and that some common patterns do exist. | Sequence Analysis of Learning Behavior in Different Consecutive
Activities | 9,670 |
Measuring the impact and success of human performance is common in various disciplines, including art, science, and sports. Quantifying impact also plays a key role on social media, where impact is usually defined as the reach of a user's content as captured by metrics such as the number of views, likes, retweets, or shares. In this paper, we study entire careers of Twitter users to understand properties of impact. We show that user impact tends to have certain characteristics: First, impact is clustered in time, such that the most impactful tweets of a user appear close to each other. Second, users commonly have 'hot streaks' of impact, i.e., extended periods of high-impact tweets. Third, impact tends to gradually build up before, and fall off after, a user's most impactful tweet. We attempt to explain these characteristics using various properties measured on social media, including the user's network, content, activity, and experience, and find that changes in impact are associated with significant changes in these properties. Our findings open interesting avenues for future research on virality and influence on social media. | Hot Streaks on Social Media | 9,671 |
In this dataset paper, we present a three-stage process to collect Reddit comments that were removed by moderators of several subreddits for violating subreddit rules and guidelines. Other than the fact that these comments were flagged by moderators for violating community norms, we do not have any other information regarding the nature of the violations. Through this procedure, we collect over 2M comments removed by moderators of 100 different Reddit communities, and publicly release the data. Working with this dataset of removed comments, we identify 8 macro norms---norms that are widely enforced on most parts of Reddit. We extract these macro norms by employing a hybrid approach---classification, topic modeling, and open-coding---on comments identified to be norm violations within at least 85 out of the 100 study subreddits. Finally, we label over 40K Reddit comments removed by moderators according to the specific type of macro norm being violated, and make this dataset publicly available. By breaking down a collection of removed comments into more granular types of macro norm violation, our dataset can be used to train more nuanced machine learning classifiers for online moderation. | Hybrid Approaches to Detect Comments Violating Macro Norms on Reddit | 9,672
In October 2017, numerous women accused producer Harvey Weinstein of sexual harassment. Their stories encouraged other women to voice allegations of sexual harassment against many high profile men, including politicians, actors, and producers. These events are broadly referred to as the #MeToo movement, named for the use of the hashtag "#metoo" on social media platforms like Twitter and Facebook. The movement has widely been referred to as "empowering" because it has amplified the voices of previously unheard women over those of traditionally powerful men. In this work, we investigate dynamics of sentiment, power and agency in online media coverage of these events. Using a corpus of online media articles about the #MeToo movement, we present a contextual affective analysis---an entity-centric approach that uses contextualized lexicons to examine how people are portrayed in media articles. We show that while these articles are sympathetic towards women who have experienced sexual harassment, they consistently present men as most powerful, even after sexual assault allegations. While we focus on media coverage of the #MeToo movement, our method for contextual affective analysis readily generalizes to other domains. | Contextual Affective Analysis: A Case Study of People Portrayals in
Online #MeToo Stories | 9,673 |
The Social Internet of Things is changing what social patterns can be, and will bring unprecedented online and offline social experiences. The social cloud is an improvement over the social network, cooperatively providing computing facilities through social interactions. Both of these fields need more research effort toward a generic or unified supporting architecture, in order to integrate the various technologies involved. The two paradigms are both related to Social Networks, Cloud Computing, and the Internet of Things. Therefore, we have reason to believe that they have great potential to support each other, and we predict that the two will be merged in one way or another. | Enabling the Social Internet of Things and Social Cloud | 9,674
The field of network science is a highly interdisciplinary area; for the empirical analysis of network data, it draws algorithmic methodologies from several research fields. Hence, research procedures and descriptions of the technical results often differ, sometimes widely. In this paper we focus on methodologies for the experimental part of algorithm engineering for network analysis -- an important ingredient for a research area with empirical focus. More precisely, we unify and adapt existing recommendations from different fields and propose universal guidelines -- including statistical analyses -- for the systematic evaluation of network analysis algorithms. This way, the behavior of newly proposed algorithms can be properly assessed and comparisons to existing solutions become meaningful. Moreover, as the main technical contribution, we provide SimexPal, a highly automated tool to perform and analyze experiments following our guidelines. To illustrate the merits of SimexPal and our guidelines, we apply them in a case study: we design, perform, visualize and evaluate experiments of a recent algorithm for approximating betweenness centrality, an important problem in network analysis. In summary, both our guidelines and SimexPal shall modernize and complement previous efforts in experimental algorithmics; they are not only useful for network analysis, but also in related contexts. | Guidelines for Experimental Algorithmics in Network Analysis | 9,675 |
Property Technology (PropTech) is the next big thing that is going to disrupt the real estate market. Nowadays, we see applications of Machine Learning (ML) and Artificial Intelligence (AI) in almost all domains, but for a long time the real estate industry was quite slow in adopting data science and machine learning for problem solving and improving its processes. However, things are changing quite fast, as we see a lot of adoption of AI and ML in the US and European real estate markets. But the Indian real estate market has a lot of catching up to do. This paper proposes a machine learning approach for solving the house price prediction problem in classified advertisements. This study focuses on the Indian real estate market. We apply advanced machine learning algorithms such as Random forest, Gradient boosting and Artificial neural networks on a real-world dataset and compare the performance of these methods. We find that the Random forest method is the best performer in terms of prediction accuracy. | PropTech for Proactive Pricing of Houses in Classified Advertisements in
the Indian Real Estate Market | 9,676 |
Rapidly evolving technology has become a preferred method of interacting. It has created a new world for young people, who send emails, visit websites, use webcams and chat rooms, and exchange instant messages through social media. In past years people communicated face to face, whereas in recent years they use internet technology to communicate with each other. This change in communication has created a new type of bullying, cyberbullying, in which bullying takes place through internet technology. Cyberbullying is a phenomenon that is increasing day by day all over the world, which was one of the reasons that prompted us to conduct this survey. Furthermore, bullies' aggressive reactions are affected by four factors that are conducive during childhood. | Cyberbullying and Traditional Bullying in Greece: An Empirical Study | 9,677
Wikipedia serves as a good example of how editors collaborate to form and maintain an article. The relationship between editors, derived from their sequence of editing activity, results in a directed network structure called the revision network, that potentially holds valuable insights into editing activity. In this paper we create revision networks to assess differences between controversial and non-controversial articles, as labelled by Wikipedia. Originating from complex networks, we apply motif analysis, which determines the under or over-representation of induced sub-structures, in this case triads of editors. We analyse 21,631 Wikipedia articles in this way, and use principal component analysis to consider the relationship between their motif subgraph ratio profiles. Results show that a small number of induced triads play an important role in characterising relationships between editors, with controversial articles having a tendency to cluster. This provides useful insight into editing behaviour and interaction capturing counter-narratives, without recourse to semantic analysis. It also provides a potentially useful feature for future prediction of controversial Wikipedia articles. | Understanding the Signature of Controversial Wikipedia Articles through
Motifs in Editor Revision Networks | 9,678 |
Community-based Question and Answering (CQA) platforms nowadays enlighten over a billion people with crowdsourced knowledge. A key design issue in CQA platforms is how to find potential answerers and provide askers with timely and suitable answers, i.e., the so-called \textit{question routing} problem. State-of-the-art approaches often rely on extracting topics from the question texts. In this work, we analyze the question routing problem in a CQA system named Farm-Doctor that is exclusively for agricultural knowledge. The major challenge is that its questions contain limited textual information. To this end, we conduct an extensive measurement and obtain the whole knowledge repository of Farm-Doctor, which consists of over 690 thousand questions and over 3 million answers. To remedy the text deficiency, we model Farm-Doctor as a heterogeneous information network that incorporates rich side information, and based on network representation learning models we accurately recommend for each question the users that are highly likely to answer it. With an average income of less than 6 dollars a day, over 300 thousand farmers in China seek agricultural advice online in Farm-Doctor. Our method helps these less eloquent farmers with their cultivation and hopefully provides a way to improve their lives. | Cultivating Online: Question Routing in a Question and Answering
Community for Agriculture | 9,679 |
The network embedding problem aims to map nodes that are similar to each other to vectors in a Euclidean space that are close to each other. Like centrality analysis (ranking) and community detection, network embedding is in general considered as an ill-posed problem, and its solution may depend on a person's view on this problem. In this book chapter, we adopt the framework of sampled graphs that treat a person's view as a sampling method for a network. The modularity for a sampled graph, called the generalized modularity in the book chapter, is a similarity matrix that has a specific probabilistic interpretation. One of the main contributions of this book chapter is to propose using the generalized modularity matrix for network embedding and show that the network embedding problem can be treated as a trace maximization problem like the community detection problem. Our generalized modularity embedding approach is very general and flexible. In particular, we show that the Laplacian eigenmaps is a special case of our generalized modularity embedding approach. Also, we show that dimensionality reduction can be done by using a particular sampled graph. Various experiments are conducted on real datasets to illustrate the effectiveness of our approach. | Generalized Modularity Embedding: a General Framework for Network
Embedding | 9,680 |
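The trace-maximization view in the abstract can be made concrete with the classic Newman modularity matrix $B = A - dd^T/(2m)$: embed each node using the top-$k$ eigenvectors of $B$. This is a minimal numerical sketch of that mechanics only; the chapter's generalized, sampled-graph modularity is not reproduced here, and the example graph is illustrative.

```python
import numpy as np

def modularity_embedding(A, k):
    """Embed nodes via the top-k eigenvectors of the modularity matrix.

    A: symmetric adjacency matrix (numpy array), k: embedding dimension.
    Sketch of the classic (non-generalized) spectral modularity approach.
    """
    d = A.sum(axis=1)               # degree vector
    two_m = d.sum()                 # 2 * number of edges
    B = A - np.outer(d, d) / two_m  # modularity matrix
    vals, vecs = np.linalg.eigh(B)  # eigenvalues in ascending order
    return vecs[:, -k:]             # eigenvectors of the k largest eigenvalues

# Example: two triangles joined by a single edge (2, 3)
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
X = modularity_embedding(A, 2)
print(X.shape)  # (6, 2)
```

On this toy graph, the signs of the leading column of the embedding separate the two triangles, which is the clustering behavior the trace-maximization formulation captures.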
Developing and middle-income countries increasingly emphasize higher education and entrepreneurship in their long-term development strategy. Our work focuses on the influence of higher education institutions (HEIs) on startup ecosystems in Brazil, an emerging economy. First, we describe regional variability in entrepreneurial network characteristics. Then we examine the influence of elite HEIs in economic hubs on entrepreneur networks. Second, we investigate the influence of the academic trajectories of startup founders, including their courses of study and HEIs of origin, on the fundraising capacity of startups. Given the growing capability of social media databases such as Crunchbase and LinkedIn to provide startup and individual-level data, we draw on computational methods to mine data for social network analysis. We find that HEI quality and the maturity of the ecosystem influence startup success. Our network analysis illustrates that elite HEIs have powerful influences on local entrepreneur ecosystems. Surprisingly, while the most nationally prestigious HEIs in the South and Southeast have the longest geographical reach, their network influence still remains local. | StartupBR: Higher Education's Influence on Social Networks and
Entrepreneurship in Brazil | 9,681 |
The problem of influence maximization is to select the most influential individuals in a social network. With the popularity of social network sites, and the development of viral marketing, the importance of the problem has been increased. The influence maximization problem is NP-hard, and therefore, there will not exist a polynomial-time algorithm to solve the problem unless P=NP. Many heuristics are proposed to find a nearly good solution in a shorter time. In this paper, we propose two heuristic algorithms to find good solutions. The heuristics are based on two ideas: (1) vertices of high degree have more influence in the network, and (2) nearby vertices influence on almost analogous sets of vertices. We evaluate our algorithms on several well-known data sets and show that our heuristics achieve better results (up to $15\%$ in influence spread) for this problem in a shorter time (up to $85\%$ improvement in the running time). | High Quality Degree Based Heuristics for the Influence Maximization
Problem | 9,682 |
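The two ideas in the abstract lend themselves to a simple greedy sketch: pick seeds by degree, then discount the scores of each chosen seed's neighbors so that overlapping influence sets are penalized. The function and toy graph below are illustrative, not the authors' exact heuristics.

```python
def degree_discount_seeds(adj, k):
    """Pick k seeds by degree, discounting vertices adjacent to chosen seeds.

    adj: dict mapping each vertex to a set of neighbors.
    Generic sketch of the abstract's two ideas (high-degree vertices matter;
    nearby vertices cover overlapping sets), not the paper's algorithms.
    """
    score = {v: len(ns) for v, ns in adj.items()}
    seeds = []
    for _ in range(k):
        u = max((v for v in score if v not in seeds), key=score.get)
        seeds.append(u)
        for w in adj[u]:          # overlap penalty around the new seed
            score[w] -= 1
    return seeds

# Small example: 0 is a hub of a star, 5-6-7 is a separate chain
adj = {
    0: {1, 2, 3, 4},
    1: {0}, 2: {0}, 3: {0}, 4: {0},
    5: {6}, 6: {5, 7}, 7: {6},
}
print(degree_discount_seeds(adj, 2))  # [0, 6]
```

After the hub 0 is chosen, its leaves are discounted to zero, so the second seed comes from the untouched chain rather than the hub's neighborhood.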
Preferential attachment models are a common class of graph models which have been used to explain why power-law distributions appear in the degree sequences of real network data. One of the things they lack, however, is higher-order network clustering, including non-trivial clustering coefficients. In this paper we present a specific Triangle Generalized Preferential Attachment Model (TGPA) that, by construction, has nontrivial clustering. We further prove that this model has a power-law in both the degree distribution and eigenvalue spectra. We use this model to investigate a recent finding that power-laws are more reliably observed in the eigenvalue spectra of real-world networks than in their degree distributions. One conjectured explanation for this is that the spectra of the graph are more robust than the degree distribution to the various sampling strategies that would have been employed to collect the real-world data. Consequently, we generate random TGPA models that provably have a power-law in both, and sample subgraphs via forest fire, depth-first, and random edge models. We find that the samples show a power-law in the spectra even when only 30\% of the network is seen, whereas there is a large chance that the degrees will not show one. Our TGPA model shows this behavior much more clearly than a standard preferential attachment model. This provides one possible explanation for why power-laws may be seen frequently in the spectra of real world data. | Triangle Preferential Attachment Has Power-law Degrees and Eigenvalues;
Eigenvalues Are More Stable to Network Sampling | 9,683 |
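A minimal triangle-closing growth process conveys the flavor of such models: each arriving node picks a uniformly random existing edge and links to both endpoints, which is degree-biased (high-degree nodes sit on more edges) and creates one triangle per step. This is an illustrative sketch, not the paper's full TGPA construction.

```python
import random

def triangle_pa(n, seed=0):
    """Grow an n-node graph where each new node closes a triangle.

    Starting from a single edge, each arriving node chooses a uniformly
    random existing edge and attaches to both of its endpoints.
    Sketch of a triangle-closing preferential-attachment process only.
    """
    rng = random.Random(seed)
    edges = [(0, 1)]                 # start from a single edge
    for v in range(2, n):
        a, b = rng.choice(edges)     # random existing edge (degree-biased)
        edges += [(v, a), (v, b)]    # close a triangle on it
    return edges

E = triangle_pa(100)
print(len(E))  # 1 + 2 * 98 = 197
```

Every step adds exactly two edges and one triangle, so the clustering coefficient is nontrivial by construction, unlike in a standard preferential attachment model.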
In this paper we propose and develop a relatively simple and efficient approach for estimating unknown elements of a user-rating matrix in the context of a recommender system (RS). The critical theoretical property of the method is its consistency with respect to arbitrary units implicitly adopted by different users to construct their quantitative ratings of products. It is argued that this property is needed for robust performance accuracy across a broad spectrum of RS application domains. | A Scale-Consistent Approach for Recommender Systems | 9,684 |
The number of posts made by a single user account on a social media platform Twitter in any given time interval is usually quite low. However, there is a subset of users whose volume of posts is much higher than the median. In this paper, we investigate the content diversity and the social neighborhood of these extreme users and others. We define a metric called "interest narrowness", and identify that a subset of extreme users, termed anomalous users, write posts with very low topic diversity, including posts with no text content. Using a few interaction patterns we show that anomalous groups have the strongest within-group interactions, compared to their interaction with others. Further, they exhibit different information sharing behaviors with other anomalous users compared to non-anomalous extreme tweeters. | Social Network of Extreme Tweeters: A Case Study | 9,685 |
Influenza is an acute respiratory infection caused by a virus. It is highly contagious and rapidly mutative. However, its epidemiological characteristics are conventionally collected in terms of outpatient records. In fact, the subjective bias of the doctor emphasizes exterior signs, and the necessity of face-to-face inquiry results in inaccurate and time-consuming data collection and aggregation. Accordingly, the inferred spectrum of syndromes can be incomplete and lagged. With a massive number of users acting as sensors, online social media can indeed provide an alternative approach. Voluntary reports on Twitter and its variants can deliver not only exterior signs but also interior feelings such as emotions. These sophisticated signals can further be efficiently collected and aggregated in a real-time manner, and a comprehensive spectrum of syndromes could thus be inferred. Taking Weibo as an example, we confirm that a regional spectrum of symptoms can be credibly sensed. Aside from the differences in symptoms and treatment incentives between northern and southern China, it is also surprising that patients in the south are more optimistic, while those in the north express more intense emotions. The differences sensed from Weibo can even help improve the performance of regressions in monitoring influenza. Our results suggest that self-reports from social media can be profound supplements to the existing clinic-based systems for influenza surveillance. | Same Influenza, Different Responses: Social Media Can Sense a Regional
Spectrum of Symptoms | 9,686 |
Friendship formation is important to online social network sites and to society, but can suffer from informational friction. In this study, we demonstrate that social networks may effectively use an IT-facilitated intervention -- displaying things in common (TIC) between users (mutual hometown, interest, education, work, city) -- to encourage friendship formation. Displaying TIC updates an individual's belief about the shared similarity with another and reduces information friction that may be hard to overcome in offline communication. In collaboration with an online social network, we conduct a randomized field experiment that randomly varies the prominence of different things in common when a user is browsing a non-friend's profile. The dyad-level exogenous variation, orthogonal to any (un)observed structural factors in the viewer-profile's network, allows us to cleanly isolate the role of preferences for TIC in driving network formation and homophily. We find that displaying TICs to viewers can significantly increase their probability of sending a friend request and forming a friendship, and is especially effective for pairs of people who have little in common. We also find that displaying TIC can improve friendship formation for a wide range of viewers with different demographics and is more effective when the TICs are more surprising to the viewer. | Displaying Things in Common to Encourage Friendship Formation: A Large
Randomized Field Experiment | 9,687 |
This study details the progress in transportation data analysis with a novel computing framework in keeping with the continuous evolution of the computing technology. The computing framework combines the Labelled Latent Dirichlet Allocation (L-LDA)-incorporated Support Vector Machine (SVM) classifier with the supporting computing strategy on publicly available Twitter data in determining transportation-related events to provide reliable information to travelers. The analytical approach includes analyzing tweets using text classification and geocoding locations based on string similarity. A case study conducted for the New York City and its surrounding areas demonstrates the feasibility of the analytical approach. Approximately 700,010 tweets are analyzed to extract relevant transportation-related information for one week. The SVM classifier achieves more than 85% accuracy in identifying transportation-related tweets from structured data. To further categorize the transportation-related tweets into sub-classes: incident, congestion, construction, special events, and other events, three supervised classifiers are used: L-LDA, SVM, and L-LDA incorporated SVM. Findings from this study demonstrate that the analytical framework, which uses the L-LDA incorporated SVM, can classify roadway transportation-related data from Twitter with over 98.3% accuracy, which is significantly higher than the accuracies achieved by standalone L-LDA and SVM. | Multi-class Twitter Data Categorization and Geocoding with a Novel
Computing Framework | 9,688 |
Community detection in graphs, data clustering, and local pattern mining are three mature fields of data mining and machine learning. In recent years, attributed subgraph mining is emerging as a new powerful data mining task in the intersection of these areas. Given a graph and a set of attributes for each vertex, attributed subgraph mining aims to find cohesive subgraphs for which (a subset of) the attribute values has exceptional values in some sense. While research on this task can borrow from the three abovementioned fields, the principled integration of graph and attribute data poses two challenges: the definition of a pattern language that is intuitive and lends itself to efficient search strategies, and the formalization of the interestingness of such patterns. We propose an integrated solution to both of these challenges. The proposed pattern language improves upon prior work in being both highly flexible and intuitive. We show how an effective and principled algorithm can enumerate patterns of this language. The proposed approach for quantifying interestingness of patterns of this language is rooted in information theory, and is able to account for prior knowledge on the data. Prior work typically quantifies interestingness based on the cohesion of the subgraph and for the exceptionality of its attributes separately, combining these in a parametrized trade-off. Instead, in our proposal this trade-off is implicitly handled in a principled, parameter-free manner. Extensive empirical results confirm the proposed pattern syntax is intuitive, and the interestingness measure aligns well with actual subjective interestingness. | Mining Subjectively Interesting Attributed Subgraphs | 9,689 |
In recent years, malicious information has seen explosive growth on social media, with serious social and political backlashes. Recent important studies, featuring large-scale analyses, have produced deeper knowledge about this phenomenon, showing that misleading information spreads faster, deeper and more broadly than factual information on social media, where echo chambers, algorithmic and human biases play an important role in diffusion networks. Following these directions, we explore the possibility of classifying news articles circulating on social media based exclusively on a topological analysis of their diffusion networks. To this aim we collected a large dataset of diffusion networks on Twitter pertaining to news articles published on two distinct classes of sources, namely outlets that convey mainstream, reliable and objective information and those that fabricate and disseminate various kinds of misleading articles, including false news intended to harm, satire intended to make people laugh, click-bait news that may be entirely factual or rumors that are unproven. We carried out an extensive comparison of these networks using several alignment-free approaches including basic network properties, centrality measures distributions, and network distances. We accordingly evaluated to what extent these techniques allow to discriminate between the networks associated to the aforementioned news domains. Our results highlight that the communities of users spreading mainstream news, compared to those sharing misleading news, tend to shape diffusion networks with subtle yet systematic differences which might be effectively employed to identify misleading and harmful information. | Topology comparison of Twitter diffusion networks effectively reveals
misleading information | 9,690 |
Cyberbullying, which often has a deeply negative impact on the victim, has grown as a serious issue in Online Social Networks. Recently, researchers have created automated machine learning algorithms to detect Cyberbullying using social and textual features. However, the very algorithms that are intended to fight off one threat (cyberbullying) may inadvertently be falling prey to another important threat (bias of the automatic detection algorithms). This is exacerbated by the fact that while the current literature on algorithmic fairness has multiple empirical results, metrics, and algorithms for countering bias across immediately observable demographic characteristics (e.g. age, race, gender), there have been no efforts at empirically quantifying the variation in algorithmic performance based on the network role or position of individuals. We audit an existing cyberbullying algorithm using Twitter data for disparity in detection performance based on the network centrality of the potential victim and then demonstrate how this disparity can be countered using an Equalized Odds post-processing technique. The results pave way for more accurate and fair cyberbullying detection algorithms. | Fairness across Network Positions in Cyberbullying Detection Algorithms | 9,691 |
The election control problem through social influence asks to find a set of nodes in a social network of voters to be the starters of a political campaign aiming at supporting a given target candidate. Voters reached by the campaign change their opinions on the candidates. The goal is to shape the diffusion of the campaign in such a way that the chances of victory of the target candidate are maximized. Previous work shows that the problem can be approximated within a constant factor in several models of information diffusion and voting systems, assuming that the controller, i.e., the external agent that starts the campaign, has full knowledge of the preferences of voters. However this information is not always available since some voters might not reveal it. Herein we relax this assumption by considering that each voter is associated with a probability distribution over the candidates. We propose two models in which, when an electoral campaign reaches a voter, the latter modifies its probability distribution according to the amount of influence it received from its neighbors in the network. We then study the election control problem through social influence on the new models: In the first model, under the Gap-ETH, election control cannot be approximated within a factor better than $1/n^{o(1)}$, where $n$ is the number of voters; in the second model, which is a slight relaxation of the first one, the problem admits a constant factor approximation algorithm. | Election Control through Social Influence with Unknown Preferences | 9,692
Trust facilitates cooperation and supports positive outcomes in social groups, including member satisfaction, information sharing, and task performance. Extensive prior research has examined individuals' general propensity to trust, as well as the factors that contribute to their trust in specific groups. Here, we build on past work to present a comprehensive framework for predicting trust in groups. By surveying 6,383 Facebook Groups users about their trust attitudes and examining aggregated behavioral and demographic data for these individuals, we show that (1) an individual's propensity to trust is associated with how they trust their groups, (2) smaller, closed, older, more exclusive, or more homogeneous groups are trusted more, and (3) a group's overall friendship-network structure and an individual's position within that structure can also predict trust. Last, we demonstrate how group trust predicts outcomes at both the individual and group levels, such as the formation of new friendship ties. | When Do People Trust Their Social Groups? | 9,693
In recent years, Online Social Networks (OSNs) have become immensely popular social interaction services among Internet users worldwide. OSNs facilitate third-party applications (TPAs), which provide many additional functionalities to users. While providing these extended services, TPAs access user data, which raises serious concerns about user privacy. This is due to the lack of user data protection mechanisms by OSNs, and none of the present OSN platforms offers satisfactory protection for users' private data. In this paper, we propose an access control framework called InfoRest to restrict the user data exposed to TPAs in OSNs, taking into account users' privacy preferences and attribute generalization. Further, we propose a relation-based access control (ReBAC) policy model and use predicate calculus to represent access conditions. The usability and correctness of the proposed policy model are demonstrated with the help of a logical model developed using answer set programming. | InfoRest: Restricting Privacy Leakage to Online Social Network App | 9,694
Given a set $\Omega$ and a proximity function $\phi: \Omega \times \Omega \to \mathbb R^+$, we define a new metric for $\Omega$ by considering a path distance in $\Omega$, which is treated as a complete graph. We analyze the properties of such a distance, and several procedures for defining the initial proximity matrix $( \phi(a,b) )_{(a,b) \in \Omega \times \Omega}.$ Our motivation has its roots in the current interest in finding effective algorithms for detecting and classifying relations among elements of a social network, for example, the analysis of a set of companies working for a given public administration, or other settings in which automatic fraud detection systems are needed. Using this formalism, we state our main idea regarding fraud detection, which is founded on the fact that fraud can be detected because it produces a meaningful local change of density in the metric space defined in this way. | Graph distances for determining entities relationships: a topological
approach to fraud detection | 9,695 |
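The construction in the abstract, a path distance over $\Omega$ viewed as a complete graph weighted by $\phi$, can be sketched with the Floyd-Warshall algorithm: the distance between two elements is the cheapest total proximity along any chain of intermediate elements. The proximity values below are illustrative, not taken from the paper.

```python
from itertools import product

def path_distance(phi, nodes):
    """Path metric induced by a proximity function on a complete graph.

    phi(a, b) gives a nonnegative proximity for each ordered pair of
    distinct elements; the path distance is the cheapest chain of hops
    between two elements, computed via Floyd-Warshall.
    """
    d = {(a, b): 0.0 if a == b else phi(a, b)
         for a, b in product(nodes, nodes)}
    for k, i, j in product(nodes, nodes, nodes):   # k varies slowest
        if d[(i, k)] + d[(k, j)] < d[(i, j)]:
            d[(i, j)] = d[(i, k)] + d[(k, j)]
    return d

# Toy proximity: the direct hop 0<->2 is expensive, going via 1 is cheap
prox = {(0, 1): 1.0, (1, 0): 1.0, (1, 2): 1.0,
        (2, 1): 1.0, (0, 2): 5.0, (2, 0): 5.0}
d = path_distance(lambda a, b: prox[(a, b)], [0, 1, 2])
print(d[(0, 2)])  # 2.0, via node 1
```

The shortcut through node 1 is exactly the kind of local density structure the abstract's fraud-detection idea relies on: elements that look distant under $\phi$ can be close in the induced path metric.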
We use data from the Facebook Advertisement Platform to study patterns of demographic disparities in usage of Facebook across countries. We address three main questions: (1) How does Facebook usage differ by age and by gender around the world? (2) How does the size of friendship networks vary by age and by gender? (3) What are the demographic characteristics of specific subgroups of Facebook users? We find that in countries in North America and northern Europe, patterns of Facebook usage differ little between older people and younger adults. In Asian countries, which have high levels of gender inequality, differences in Facebook adoption by gender disappear at older ages, possibly as a result of selectivity. We also observe that across countries, women tend to have larger networks of close friends than men, and that female users who are living away from their hometown are more likely to engage in Facebook use than their male counterparts, regardless of their region and age group. Our findings contextualize recent research on gender gaps in online usage, and offer new insights into some of the nuances of demographic differentials in the adoption and the use of digital technologies. | Demographic Differentials in Facebook Usage Around the World | 9,696 |
Online user innovation communities are becoming a promising source of user innovation knowledge and creative users. With the purpose of identifying valuable innovation knowledge and users, this study constructs an integrated super-network model, i.e., the User Innovation Knowledge Super-Network (UIKSN), to integrate fragmented knowledge, knowledge fields, users and posts in an online community knowledge system. Based on the UIKSN, the core innovation knowledge, core innovation knowledge fields, core creative users, and the knowledge structure of individual users were identified. The findings help capture the innovation trends of products, popular innovations and creative users, and contribute to mining, integrating and analyzing innovation knowledge in community-based innovation theory. | An Integrated Model for User Innovation Knowledge Based on Super-network | 9,697
We study the adaptive influence maximization problem with myopic feedback under the independent cascade model: one selects k seed nodes sequentially from a social network, each selected seed returns the immediate neighbors it activates as the feedback available for later selections, and the goal is to maximize the expected number of total activated nodes, referred to as the influence spread. We show that the adaptivity gap, the ratio between the optimal adaptive influence spread and the optimal non-adaptive influence spread, is at most 4 and at least e/(e-1), and the approximation ratios with respect to the optimal adaptive influence spread of both the non-adaptive greedy and adaptive greedy algorithms are at least \frac{1}{4}(1 - \frac{1}{e}) and at most \frac{e^2 + 1}{(e + 1)^2} < 1 - \frac{1}{e}. Moreover, the approximation ratio of the non-adaptive greedy algorithm is no worse than that of the adaptive greedy algorithm, when considering all graphs. Our result confirms a long-standing open conjecture of Golovin and Krause (2011) on the constant approximation ratio of adaptive greedy with myopic feedback, and it also suggests that adaptive greedy may not bring much benefit under myopic feedback. | Adaptive Influence Maximization with Myopic Feedback | 9,698
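The non-adaptive greedy baseline discussed in the abstract above can be illustrated with a short sketch. This is not the paper's implementation: the adjacency-dict graph representation, uniform edge probability `p`, and Monte Carlo spread estimator are illustrative assumptions, chosen only to show how greedy seed selection under the independent cascade model works.

```python
import random

def simulate_ic(graph, seeds, p=0.1, trials=200, rng=None):
    """Estimate the expected influence spread of a seed set under the
    independent cascade model via Monte Carlo simulation.

    graph: dict mapping node -> list of out-neighbors
    p: uniform activation probability per edge (an assumption here)
    """
    rng = rng or random.Random(0)
    total = 0
    for _ in range(trials):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    # Each newly active node gets one chance to
                    # activate each inactive neighbor with prob. p.
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

def nonadaptive_greedy(graph, k, p=0.1):
    """Pick k seeds greedily by estimated marginal gain in spread.
    All k seeds are committed up front, i.e., without feedback."""
    seeds = []
    for _ in range(k):
        base = simulate_ic(graph, seeds, p)
        best, best_gain = None, float("-inf")
        for v in graph:
            if v in seeds:
                continue
            gain = simulate_ic(graph, seeds + [v], p) - base
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.append(best)
    return seeds
```

The adaptive variant studied in the paper differs only in that each seed is chosen after observing which neighbors the previous seed activated; the abstract's result is that, under such myopic feedback, this extra information buys at most a constant-factor improvement.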
Social media are nowadays the privileged channel for spreading information and checking news. Unexpectedly for most users, automated accounts, also known as social bots, contribute more and more to this news-spreading process. Using Twitter as a benchmark, we consider the traffic exchanged, over one month of observation, on a specific topic, namely the migration flow from Northern Africa to Italy. We measure the significant traffic of tweets only, by implementing an entropy-based null model that discounts the activity of users and the virality of tweets. Results show that social bots play a central role in the exchange of significant content. Indeed, not only do the strongest hubs have a higher-than-expected number of bots among their followers, but a group of them, attributable to the same political tendency, share a common set of bots as followers. The retweeting activity of such automated accounts amplifies the presence of the hubs' messages on the platform. | The role of bot squads in the political propaganda on Twitter | 9,699