text | source | __index_level_0__ |
|---|---|---|
Corporate networks, as induced by interlocking directorates between corporations, provide structures of personal communication at the level of their boards. This paper studies such networks from the perspective of close communication in sub-networks, where each pair of nodes (boards of corporations) are either neighbours or have a common neighbour. These correspond to subgraphs of diameter at most 2, designated by us earlier as 2-clubs, with three types (coteries, social circles and hamlets) as degrees of close communication in social networks, within the concept of boroughs of a network. Boroughs are maximal areas and containers of close communication between the nodes of a network. This framework is applied in this paper to an analysis of corporate board interlocks between the top 300 European corporations in 2010, as studied by Heemskerk (2013), with data provided by him for that purpose. The paper gives results for several perspectives of close communication in the European corporate network of 2010, a year close to the global crash of 2008, as a further elaboration of those given in Heemskerk (2013). | Close Communication and 2-Clubs in Corporate Networks: Europe 2010 | 9,100 |
This paper studies ranking policies in a stylized trial-offer marketplace model in which a single firm offers products to consumers with heterogeneous preferences. Consumer trials are influenced by past purchases and by the ranking of each product. The platform owner must devise a ranking policy for displaying the products so as to maximize the number of purchases in the long run. The proposed model attempts to capture the impact of market segmentation in a trial-offer market with social influence. In our model, consumer choices follow a very general choice model known as the mixed multinomial logit (MNL). We analyze the long-term dynamics of this highly complex stochastic model and quantify the expected benefits of market segmentation. When past purchases are displayed, consumer heterogeneity leads buyers to try sub-optimal products, reducing the overall sales rate. We show that consumer heterogeneity makes the ranking problem NP-hard. We then analyze the benefits of market segmentation and find tight bounds on the expected benefits of offering a distinct ranking to each consumer segment. Finally, we show that the market segmentation strategy always benefits from social influence when the average quality ranking is used. One managerial implication is that the firm is better off using an aggregate ranking policy when the variety of consumer preferences is limited, but should pursue a market segmentation policy when consumers are highly heterogeneous. We also show that this result is robust to relatively small consumer classification mistakes; when these are large, an aggregate ranking is preferred. | Market Segmentation in Online Platforms | 9,101 |
Online Social Networks (OSNs) provide a venue for virtual interactions and relationships between individuals. In some communities, OSNs also facilitate arranging offline meetings and relationships. FetLife, the world's largest anonymous social network for the BDSM, fetish and kink communities, provides a unique example of an OSN that serves as an interaction space, community organizing tool, and sexual market. In this paper, we present a first look at the characteristics of European members of FetLife, comprising 504,416 individual nodes with 1,912,196 connections. We look at user characteristics in terms of gender, sexual orientation, and preferred role. We further examine the topological and structural properties of groups, as well as the types of interactions and relations between their members. Our results suggest there are important differences between the FetLife community and conventional OSNs: the network is characterised by complex gender-based interactions, from both sexual-market and platonic viewpoints, which point to a truly fascinating social network. | An exploration of fetish social networks and communities | 9,102 |
Based on the theory of hypernetworks and WeChat online social relations, this paper proposes an evolving hypernetwork model that incorporates node competitiveness and node aging. In the model, nodes arrive in the system according to a Poisson process and age gradually. We analyze the model using Poisson process theory and a continuous technique, and derive a characteristic equation for hyperdegrees, from which we obtain the stationary average hyperdegree distribution of the hypernetwork. Numerical simulations of the model agree well with the analytical results. We expect this work to aid the study of WeChat information transmission dynamics and mobile e-commerce. | Evolving hypernetwork model based on WeChat user relations | 9,103 |
Opinion evolution and judgment revision are mediated through social influence. Based on a large crowdsourced in vitro experiment (n = 861), we show how a consensus model can be used to predict opinion evolution in online collective behaviour. This is the first time the predictive power of a quantitative model of opinion dynamics has been tested against a real dataset. Unlike previous research on the topic, the model was validated on data that did not serve to calibrate it, which avoids favoring more complex models over simpler ones and prevents overfitting. The model is parametrized by the influenceability of each individual, a factor representing the extent to which individuals incorporate external judgments. Prediction accuracy depends on prior knowledge of the participants' past behaviour, and we compare several situations reflecting different levels of data availability. When data is scarce, data from previous participants is used to predict how a new participant will behave. Judgment revision includes unpredictable variations which limit the potential for prediction; we propose a first measure of unpredictability, based on a specific control experiment. More than two thirds of the prediction errors are found to be due to the unpredictability of the human judgment revision process rather than to model imperfection. | Modelling influence and opinion evolution in online collective behaviour | 9,104 |
The increasing popularity of academic social networking sites (ASNSs) requires studies on the usage of ASNSs among scholars and evaluations of the effectiveness of these ASNSs. However, it is unclear whether current ASNSs have fulfilled their design goal, as scholars' actual online interactions on these platforms remain unexplored. To fill the gap, this article presents a study based on data collected from ResearchGate. Adopting a mixed-method design by conducting qualitative content analysis and statistical analysis on 1,128 posts collected from ResearchGate Q&A, we examine how scholars exchange information and resources, and how their practices vary across three distinct disciplines: library and information services, history of art, and astrophysics. Our results show that the effect of a questioner's intention (i.e., seeking information or discussion) is greater than disciplinary factors in some circumstances. Across the three disciplines, responses to questions provide various resources, including experts' contact details, citations, links to Wikipedia, images, and so on. We further discuss several implications of the understanding of scholarly information exchange and the design of better academic social networking interfaces, which should stimulate scholarly interactions by minimizing confusion, improving the clarity of questions, and promoting scholarly content management. | Information exchange on an academic social networking site: A multidiscipline comparison on ResearchGate Q&A | 9,105 |
Working adults spend nearly one third of their daily time at their jobs. In this paper, we study job-related social media discourse from a community of users. We use both crowdsourcing and local expertise to train a classifier to detect job-related messages on Twitter. Additionally, we analyze the linguistic differences in a job-related corpus of tweets between individual users vs. commercial accounts. The volumes of job-related tweets from individual users indicate that people use Twitter with distinct monthly, daily, and hourly patterns. We further show that the moods associated with jobs, positive and negative, have unique diurnal rhythms. | Job-related discourse on social media | 9,106 |
Search-engine-based influenza monitoring systems have been widely applied in many European and American countries. However, no comparable research has been reported for developing African countries; in particular, Egypt has not designed an influenza monitoring system based on Internet search data. This study aims at analyzing the correlation between Google search data and the H1N1 morbidity data of Egypt, and at examining the feasibility of the Google Flu model for predicting the H1N1 influenza trend. | Research of the Correlation between the H1N1 Morbidity Data and Google Trends in Egypt | 9,107 |
User behaviour analysis based on traffic logs in wireless networks can be beneficial to many fields in real life: not only for commercial purposes, but also for improving network service quality and social management. We cluster users into groups marked by their most frequently visited websites to find their preferences. In this paper, we propose a user behaviour model based on topic models drawn from document classification. We use logarithmic TF-IDF (term frequency - inverse document frequency) weighting to form a high-dimensional sparse feature matrix, then apply LSA (latent semantic analysis) to deduce the latent topic distribution and generate a low-dimensional dense feature matrix. K-means++, a classic clustering algorithm, is then applied to the dense feature matrix, and several interpretable user clusters are found. Moreover, by combining the clustering results with additional demographic information, including age, gender, and financial information, we are able to uncover more realistic implications from the clustering results. | Topic Model Based Behaviour Modeling and Clustering Analysis for Wireless Network Users | 9,108 |
Nearest neighbor search is a basic computational tool used extensively in almost all research domains of computer science, especially when dealing with large amounts of data. However, its use for algorithmic development is restricted by the need for a notion of nearness among the data points. The recent trend of research is on large, complex networks and their structural analysis, where nodes represent entities and edges represent relations between entities. Community detection in complex networks is an important problem of much interest. In general, a community detection algorithm defines an objective function and captures the communities by optimizing it to extract the communities that are interesting for the user. In this article, we study the nearest neighbor search problem in complex networks by developing a suitable notion of nearness. First, we study and analyze exact nearest neighbor search using a metric tree on the proposed metric space constructed from a complex network. We then study the approximate nearest neighbor search problem using locality sensitive hashing. To evaluate the proposed nearest neighbor search on complex networks, we apply it to the community detection problem. The results obtained using our methods are very competitive with most of the well-known algorithms in the literature, as verified on a collection of real networks. Moreover, the time taken by our algorithm is considerably less than that of popular methods. | Nearest Neighbor search in Complex Network for Community Detection | 9,109 |
As breaking news unfolds people increasingly rely on social media to stay abreast of the latest updates. The use of social media in such situations comes with the caveat that new information being released piecemeal may encourage rumours, many of which remain unverified long after their point of release. Little is known, however, about the dynamics of the life cycle of a social media rumour. In this paper we present a methodology that has enabled us to collect, identify and annotate a dataset of 330 rumour threads (4,842 tweets) associated with 9 newsworthy events. We analyse this dataset to understand how users spread, support, or deny rumours that are later proven true or false, by distinguishing two levels of status in a rumour life cycle, i.e., before and after its veracity status is resolved. The identification of rumours associated with each event, as well as the tweet that resolved each rumour as true or false, was performed by a team of journalists who tracked the events in real time. Our study shows that rumours that are ultimately proven true tend to be resolved faster than those that turn out to be false. Whilst one can readily see users denying rumours once they have been debunked, users appear to be less capable of distinguishing true from false rumours when their veracity remains in question. In fact, we show that the prevalent tendency for users is to support every unverified rumour. We also analyse the role of different types of users, finding that highly reputable users such as news organisations endeavour to post well-grounded statements, which appear to be certain and accompanied by evidence. Nevertheless, these often prove to be unverified pieces of information that give rise to false rumours. Our study reinforces the need for developing robust machine learning techniques that can provide assistance for assessing the veracity of rumours. | Analysing How People Orient to and Spread Rumours in Social Media by Looking at Conversational Threads | 9,110 |
Centrality measures have been defined to quantify the importance of a node in complex networks. The relative importance of a node can be measured using its centrality rank based on the centrality value. In the present work, we predict the degree centrality rank of a node without having the entire network. The proposed method uses the degree of the node and several network parameters to predict its rank: the network size and the minimum, maximum, and average degree of the network. These parameters are estimated using random walk sampling techniques. The proposed method is validated on Barabási-Albert networks. Simulation results show that the method predicts the rank of higher-degree nodes more accurately; the average error in rank prediction is approximately $0.16\%$ of the network size. | Rank me thou shalln't Compare me | 9,111 |
The automatic content analysis of mass media in the social sciences has become both necessary and possible with the rise of social media and computational power. One particularly promising avenue of research concerns the use of opinion mining. We design and implement the POPmine system, which collects texts from web-based conventional media (news items in mainstream media sites) and social media (blogs and Twitter) and processes those texts, recognizing topics and political actors, analyzing relevant linguistic units, and generating indicators of both the frequency and the polarity (positivity/negativity) of mentions of political actors across sources, types of sources, and time. | POPmine: Tracking Political Opinion on the Web | 9,112 |
This paper complements the large body of social sensing literature by developing means for augmenting sensing data with inference results that "fill-in" missing pieces. It specifically explores the synergy between (i) inference techniques used for filling-in missing pieces and (ii) source selection techniques used to determine which pieces to retrieve in order to improve inference results. We focus on prediction in disaster scenarios, where disruptive trend changes occur. We first discuss our previous conference study that compared a set of prediction heuristics and developed a hybrid prediction algorithm. We then enhance the prediction scheme by considering algorithms for sensor selection that improve inference quality. Our proposed source selection and extrapolation algorithms are tested using data collected during the New York City crisis in the aftermath of Hurricane Sandy in November 2012. The evaluation results show that consistently good predictions are achieved. The work is notable for addressing the bi-modal nature of damage propagation in complex systems subjected to stress, where periods of calm are interspersed with periods of severe change. It is novel in offering a new solution to the problem that jointly leverages source selection and extrapolation components thereby improving the results. | Joint Source Selection and Data Extrapolation in Social Sensing for Disaster Response | 9,113 |
Social media platforms are popular venues for fashion brand marketing and advertising. With the introduction of native advertising, users no longer have to endure banner ads that hold little saliency and are unattractive. Using images and subtle text overlays, even in a world of ever-decreasing attention spans, brands can retain their audience while enjoying ample creative potential. While an assortment of marketing strategies has been conjectured, the subtle distinctions between various types of marketing strategies remain under-explored. This paper presents a qualitative analysis of the influence of social media platforms on different behaviors of fashion brand marketing. We employ both linguistic and computer vision techniques while comparing and contrasting strategic idiosyncrasies. We also analyze brand audience retention and social engagement, and thereby provide suggestions for adapting advertising and marketing strategies over Twitter and Instagram. | Trending Chic: Analyzing the Influence of Social Media on Fashion Brands | 9,114 |
In this paper, we propose a method of ranking recently created Twitter accounts according to their prospective popularity. Early detection of new promising accounts is useful for trend prediction, viral marketing, user recommendation, and so on. New accounts are, however, difficult to evaluate because they have not established their reputations, and we cannot apply existing link-based or other popularity-based account evaluation methods. Our method first finds "early adopters", i.e., users who often find good new information sources earlier than others. It then regards new accounts followed by good early adopters as promising, even if they do not yet have many followers. In order to find good early adopters, we estimate the frequency of link propagation from each account, i.e., how many times the follow links from the account have been copied by its followers. If its followers have copied many of its follow links in the past, the account must be an early adopter who finds good information sources earlier than its followers. We develop a method of inferring which links were created by copying which links. One advantage of our method is that it uses only information that can be easily obtained by crawling the neighbors of the target accounts in the current Twitter graph. We evaluated our method in an experiment on Twitter data: we chose then-new accounts from an old snapshot of Twitter, computed their ranking with our method, and compared it with the number of followers the accounts currently have. The results show that our method produces better rankings than various baseline methods, especially for new accounts that have only a few followers. | Predicting Popularity of Twitter Accounts through the Discovery of Link-Propagating Early Adopters | 9,115 |
Social media has emerged as a popular platform for people to express their viewpoints on political protests like the Arab Spring. Millions of people use social media to communicate and mobilize their viewpoints on protests; hence, it is a valuable tool for organizing social movements. However, the mechanisms by which protest affects the population are not known, making it difficult to estimate the number of protestors. In this paper, inspired by sociological theories of protest participation, we propose a framework to predict from a user's past status messages and interactions whether the user's next post will be a declaration of protest. Drawing on concepts from these theories, we model the interplay over time between a user's status messages and the messages interacting with them. We evaluate the framework using data from Twitter on protests during the recent Nigerian elections and demonstrate that it can effectively predict whether a user's next post will be a declaration of protest. | Predicting Online Protest Participation of Social Media Users | 9,116 |
The cumulative effect of collective online participation has an important and adverse impact on individual privacy. As an online system evolves over time, new digital traces of individual behavior may uncover previously hidden statistical links between an individual's past actions and her private traits. To quantify this effect, we analyze the evolution of individual privacy loss by studying the edit history of Wikipedia over 13 years, comprising more than 117,523 different users performing 188,805,088 edits. We trace each Wikipedia contributor using apparently harmless features, such as the number of edits performed on predefined broad categories in a given time period (e.g. Mathematics, Culture or Nature). We show that even at this unspecific level of behavior description, it is possible to use off-the-shelf machine learning algorithms to uncover usually undisclosed personal traits, such as gender, religion or education. We provide empirical evidence that the prediction accuracy for almost all private traits consistently improves over time. Surprisingly, the prediction performance for users who stopped editing after a given time still improves. The activities of new users seem to have contributed more to this effect than additional activities from existing (but still active) users. Insights from this work should help users, system designers, and policy makers understand and make long-term design choices in online content creation systems. | Evolution of Privacy Loss in Wikipedia | 9,117 |
Data mining can serve as a strategic tool to determine customer profiles in order to learn customer expectations and requirements. Airline customers have different characteristics, and if passenger reviews about their trip experiences are correctly analyzed, companies can increase customer satisfaction by improving the services they provide. In this study, we investigate customer review data for in-flight services of airline companies and derive customer models from these data. We apply two approaches: feature-based and clustering-based modelling. In feature-based modelling, customers are grouped into categories based on features such as the cabin type flown and the airline company experienced. In clustering-based modelling, customers are first clustered via k-means clustering and then modeled. We apply multivariate regression analysis to model customer groups in both cases, seeking to understand how customers evaluate the given services and which characteristics of in-flight services dominate from the customer viewpoint. | Understanding Customers' Evaluations Through Mining Airline Reviews | 9,118 |
We consider stochastic influence maximization problems arising in social networks. In contrast to existing studies that involve greedy approximation algorithms with a 63% performance guarantee, our work focuses on solving the problem optimally. To this end, we introduce a new class of problems that we refer to as two-stage stochastic submodular optimization models. We propose a delayed constraint generation algorithm to find the optimal solution to this class of problems with a finite number of samples. The influence maximization problems of interest are special cases of this general problem class. We show that the submodularity of the influence function can be exploited to develop strong optimality cuts that are more effective than the standard optimality cuts available in the literature. Finally, we report our computational experiments with large-scale real-world datasets for two fundamental influence maximization problems, independent cascade and linear threshold, and show that our proposed algorithm outperforms the greedy algorithm. | Maximizing Influence in Social Networks: A Two-Stage Stochastic Programming Approach That Exploits Submodularity | 9,119 |
The Paris attacks prompted a massive response on social media including Twitter. This paper explores the immediate response of English speakers on Twitter towards Middle Eastern refugees in Europe. We show that antagonism towards refugees is mostly coming from the United States and is mostly partisan. | Attitudes towards Refugees in Light of the Paris Attacks | 9,120 |
The Paris terrorist attacks that occurred on November 13, 2015 prompted a massive response on social media, including Twitter, with millions of tweets posted in the first few hours after the attacks. Most of the tweets condemned the attacks and showed support for Parisians. One of the trending debates related to the attacks concerned a possible association between terrorism and Islam and Muslims in general, which created a global discussion between those attacking and those defending Islam and Muslims. In this paper, we provide quantitative and qualitative analysis of a data collection we streamed from Twitter, starting 7 hours after the Paris attacks and continuing for the 50 subsequent hours, related to blaming Islam and Muslims and to defending them. We collected a set of 8.36 million tweets in this period, in many different languages, and identified a subset of 900K tweets relating to Islam and Muslims. Using sampling methods and crowdsourced annotation, we estimated the public response in these tweets. Our findings show that the majority of the tweets were in fact defending Muslims and absolving them of responsibility for the attacks. However, a considerable number of tweets blamed Muslims, with most of these tweets coming from western countries such as the Netherlands, France, and the US. | Quantifying Public Response towards Islam on Twitter after Paris Attacks | 9,121 |
A link stream is a collection of triplets $(t,u,v)$ indicating that an interaction occurred between $u$ and $v$ at time $t$. Link streams model many real-world situations like email exchanges between individuals, connections between devices, and others. Much work is currently devoted to the generalization of classical graph and network concepts to link streams. In this paper, we generalize the existing notions of intra-community density and inter-community density. We focus on email exchanges in the Debian mailing-list, and show that threads of emails, like communities in graphs, are dense subsets loosely connected from a link stream perspective. | Analysis of the temporal and structural features of threads in a mailing-list | 9,122 |
How can web services that depend on user generated content discern fake social engagement activities by spammers from legitimate ones? In this paper, we focus on the social site YouTube and the problem of identifying bad actors who post inorganic content and inflate the count of social engagement metrics. We propose an effective method, Leas (Local Expansion at Scale), and show how fake engagement activities on YouTube can be tracked over time by analyzing the temporal graph based on the engagement behavior pattern between users and YouTube videos. With the domain knowledge of spammer seeds, we formulate and tackle the problem in a semi-supervised manner --- with the objective of searching for individuals that have a similar pattern of behavior to the known seeds --- based on a graph diffusion process via a local spectral subspace. We offer a fast, scalable MapReduce deployment adapted from the localized spectral clustering algorithm. We demonstrate the effectiveness of our deployment at Google by achieving a manual review accuracy of 98% on the YouTube Comments graph in practice. Compared with the state-of-the-art algorithm CopyCatch, Leas achieves a 10 times faster running time. Leas is actively in use at Google, searching daily for deceptive practices on YouTube's engagement graph spanning over a billion users. | In a World That Counts: Clustering and Detecting Fake Social Engagement at Scale | 9,123 |
Among the topics discussed on social media, some spark more heated debate than others. For example, experience suggests that major political events, such as a vote for healthcare law in the US, would spark more debate between opposing sides than other events, such as a concert of a popular music band. Exploring the topics of discussion on Twitter and understanding which ones are controversial is extremely useful for a variety of purposes, such as for journalists to understand what issues divide the public, or for social scientists to understand how controversy is manifested in social interactions. The system we present processes the daily trending topics discussed on the platform, and assigns to each topic a controversy score, which is computed based on the interactions among Twitter users, and a visualization of these interactions, which provides an intuitive visual cue regarding the controversy of the topic. The system also allows users to explore the messages (tweets) associated with each topic, and sort and explore the topics by different criteria (e.g., by controversy score, time, or related keywords). | Exploring Controversy in Twitter | 9,124 |
The debates on minority issues are often dominated by or held among the concerned minority: gender equality debates have often failed to engage men, while those about race fail to effectively engage the dominant group. To test this observation, we study the #BlackLivesMatter movement and hashtag on Twitter--which emerged and gained traction after a series of events typically involving the death of African-Americans as a result of police brutality--and aim to quantify the population biases across user types (individuals vs. organizations) and, for individuals, across various demographic factors (race, gender and age). Our results suggest that more African-Americans engage with the hashtag and that they are also more active than other demographic groups. We also discuss ethical caveats with broader implications for studies on sensitive topics (e.g. discrimination, mental health, or religion) that focus on users. | Characterizing the Demographics Behind the #BlackLivesMatter Movement | 9,125 |
Link recommendation, which suggests links to connect currently unlinked users, is a key functionality offered by major online social networks. Salient examples of link recommendation include "People You May Know" on Facebook and LinkedIn as well as "You May Know" on Google+. The main stakeholders of an online social network include users (e.g., Facebook users) who use the network to socialize with other users and an operator (e.g., Facebook Inc.) that establishes and operates the network for its own benefit (e.g., revenue). Existing link recommendation methods recommend links that are likely to be established by users but overlook the benefit a recommended link could bring to an operator. To address this gap, we define the utility of recommending a link and formulate a new research problem - the utility-based link recommendation problem. We then propose a novel utility-based link recommendation method that recommends links based on the value, cost, and linkage likelihood of a link, in contrast to existing link recommendation methods which focus solely on linkage likelihood. Specifically, our method models the dependency relationship between value, cost, linkage likelihood and utility-based link recommendation decision using a Bayesian network, predicts the probability of recommending a link with the Bayesian network, and recommends links with the highest probabilities. Using data obtained from a major U.S. online social network, we demonstrate significant performance improvement achieved by our method compared to prevalent link recommendation methods from representative prior research. | Utility-based Link Recommendation for Online Social Networks | 9,126 |
Recent studies on human mobility show that human movements are not random and tend to be clustered. In this connection, the movements of Twitter users captured by geo-located tweets were found to follow similar patterns, where a few geographic locations dominate the tweeting activity of individual users. However, little is known about the semantics (landuse types) and temporal tweeting behavior at those frequently-visited locations. Furthermore, it is generally assumed that the top two visited locations for most users are home and work locales (Hypothesis A) and that people tend to tweet at their top locations during a particular time of the day (Hypothesis B). In this paper, we tested these two frequently cited hypotheses by examining the tweeting patterns of more than 164,000 unique Twitter users who were residents of the city of Chicago during 2014. We extracted landuse attributes for each geo-located tweet from the detailed inventory of the Chicago Metropolitan Agency for Planning. Top-visited locations were identified by clustering semantically enriched tweets using a DBSCAN algorithm. Our results showed that although the top two locations are likely to be residential and occupational/educational, a portion of the users deviated from this case, suggesting that the first hypothesis oversimplifies real-world situations. However, our observations indicated that people tweet at specific times and that these temporal signatures are dependent on landuse types. We further discuss the implication of confounding variables, such as clustering algorithm parameters and relative accuracy of tweet coordinates, which are critical factors in any experimental design involving Twitter data. | Where Chicagoans tweet the most: Semantic analysis of preferential return locations of Twitter users | 9,127 |
Yelp is one of the largest online search and review systems for many kinds of businesses, including restaurants, shopping, home services, and others. Analyzing real-world data from Yelp is valuable in acquiring the interests of users, which helps to improve the design of the next-generation system. This paper targets the evaluation of the Yelp dataset, which is provided in the Yelp data challenge. A number of interesting results are found. For instance, to reach anyone in the Yelp social network, one only needs 4.5 hops on average, which verifies the classical six degrees of separation theory; the elite user mechanism is especially effective in maintaining the health of the whole network; and users who write fewer than 100 business reviews dominate. These insights are expected to be considered by Yelp to make intelligent business decisions in the future. | An Evaluation of Yelp Dataset | 9,128 |
This paper considers trial-offer markets where consumer preferences are modeled by a multinomial logit with social influence and position bias. The social signal for a product is given by its current market share raised to the power r (or equivalently the number of purchases raised to the power r). The paper shows that, when r is strictly between 0 and 1, and a static position assignment (e.g., a quality ranking) is used, the market converges to a unique equilibrium where the market shares depend only on product quality, not their initial appeals or the early dynamics. When r is greater than 1, the market becomes unpredictable. In many cases, the market goes to a monopoly for some product: which product becomes a monopoly depends on the initial conditions of the market. These theoretical results are complemented by an agent-based simulation which indicates that convergence is fast when r is between 0 and 1, and that the quality ranking dominates the well-known popularity ranking in terms of market efficiency. These results shed new light on the role of social influence, which is often blamed for unpredictability, inequalities, and inefficiencies in markets. In contrast, this paper shows that, with a proper social signal and position assignment for the products, the market becomes predictable, and inequalities and inefficiencies can be controlled appropriately. | Popularity Signals in Trial-Offer Markets with Social Influence and Position Bias | 9,129 |
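The dynamics described in this abstract (purchase probability proportional to position visibility, product quality, and the purchase count raised to the power r, under a static quality ranking) can be illustrated with a small simulation. This is a hedged sketch, not the paper's exact model: the visibility profile, the seeding of one initial purchase per product, and all parameter values are our own assumptions.

```python
import random

def simulate_market(qualities, visibility, r, steps, seed=0):
    """Toy trial-offer market: at each step one purchase occurs, with
    product i chosen with probability proportional to
    visibility[pos(i)] * qualities[i] * purchases[i] ** r.
    Products are statically ranked by quality (best product on top)."""
    rng = random.Random(seed)
    n = len(qualities)
    order = sorted(range(n), key=lambda i: qualities[i], reverse=True)
    purchases = [1] * n  # assumed seed: one initial purchase per product
    for _ in range(steps):
        weights = [visibility[pos] * qualities[i] * purchases[i] ** r
                   for pos, i in enumerate(order)]
        total = sum(weights)
        u, acc, chosen = rng.random() * total, 0.0, order[-1]
        for pos, i in enumerate(order):
            acc += weights[pos]
            if u <= acc:
                chosen = i
                break
        purchases[chosen] += 1
    total = sum(purchases)
    return [p / total for p in purchases]  # final market shares
```

With r between 0 and 1, repeated runs of this sketch settle into shares ordered by quality, consistent with the convergence result the abstract states.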
Good websites should be easy to navigate via hyperlinks, yet maintaining a high-quality link structure is difficult. Identifying pairs of pages that should be linked may be hard for human editors, especially if the site is large and changes frequently. Further, given a set of useful link candidates, the task of incorporating them into the site can be expensive, since it typically involves humans editing pages. In the light of these challenges, it is desirable to develop data-driven methods for automating the link placement task. Here we develop an approach for automatically finding useful hyperlinks to add to a website. We show that passively collected server logs, beyond telling us which existing links are useful, also contain implicit signals indicating which nonexistent links would be useful if they were to be introduced. We leverage these signals to model the future usefulness of yet nonexistent links. Based on our model, we define the problem of link placement under budget constraints and propose an efficient algorithm for solving it. We demonstrate the effectiveness of our approach by evaluating it on Wikipedia, a large website for which we have access to both server logs (used for finding useful new links) and the complete revision history (containing a ground truth of new links). As our method is based exclusively on standard server logs, it may also be applied to any other website, as we show with the example of the biomedical research site Simtk. | Improving Website Hyperlink Structure Using Server Logs | 9,130 |
Social instant messaging services are emerging as a transformative medium through which people connect and communicate with friends in their daily life - they catalyze the formation of social groups, and they bring people a stronger sense of community and connection. However, the research community still knows little about the formation and evolution of groups in the context of social messaging - their lifecycles, the change in their underlying structures over time, and the diffusion processes by which they develop new members. In this paper, we analyze the daily usage logs from the WeChat group messaging platform - the largest standalone messaging communication service in China - with the goal of understanding the processes by which social messaging groups come together, grow new members, and evolve over time. Specifically, we discover a strong dichotomy among groups in terms of their lifecycle, and develop a separability model by taking into account a broad range of group-level features, showing that long-term and short-term groups are inherently distinct. We also find that the lifecycle of messaging groups is largely dependent on their social roles and functions in users' daily social experiences and specific purposes. Given the strong separability between the long-term and short-term groups, we further address the problem of the early prediction of successful communities. In addition to modeling the growth and evolution from a group-level perspective, we investigate the individual-level attributes of group members and study the diffusion process by which groups gain new members. By considering members' historical engagement behavior as well as the local social network structure that they are embedded in, we develop a membership cascade model and demonstrate its effectiveness by achieving an AUC of 95.31% in predicting inviters, and an AUC of 98.66% in predicting invitees. | The Lifecycle and Cascade of WeChat Social Messaging Groups | 9,131 |
In this paper we predict outgoing mobile phone calls using a machine learning approach. We analyze to what extent the activity of mobile phone users is predictable. The premise is that mobile phone users exhibit temporal regularity in their interactions with the majority of their contacts. In the sociological context, most social interactions have fairly reliable temporal regularity. If this behavior extends to interactions on mobile phones, we expect that caller-callee interaction is not merely a result of randomness; rather, it exhibits a temporal pattern. To this end, we tested our approach on an anonymized mobile phone usage dataset collected specifically for analyzing temporal patterns in mobile phone communication. The data consists of 783 users and more than 12,000 caller-callee pairs. The results show that users' historic calling patterns can predict future calls with reasonable accuracy. | On Temporal Regularity in Social Interactions: Predicting Mobile Phone Calls | 9,132 |
Group centrality is an extension of the classical notion of centrality for individuals, making it applicable to sets of them. We perform a SWOT (strengths, weaknesses, opportunities and threats) analysis of the use of group centrality in semantic networks, for different centrality notions: degree, closeness, betweenness, giving prominence to random walks. Among our main results stand out the relevance and NP-hardness of the problem of finding the most central set in a semantic network for a specific centrality measure. | Group Centrality for Semantic Networks: a SWOT analysis featuring Random Walks | 9,133 |
This paper considers the problem of refreshing a dataset. More precisely, given a collection of nodes gathered at some time (Web pages, users from an online social network) along with some structure (hyperlinks, social relationships), we want to identify a significant fraction of the nodes that still exist at present time. The liveness of an old node can be tested through an online query at present time. We call LiveRank a ranking of the old pages so that active nodes are more likely to appear first. The quality of a LiveRank is measured by the number of queries necessary to identify a given fraction of the active nodes when using the LiveRank order. We study different scenarios, from a static setting where the LiveRank is computed before any query is made, to dynamic settings where the LiveRank can be updated as queries are processed. Our results show that building on the PageRank can lead to efficient LiveRanks, for Web graphs as well as for online social networks. | LiveRank: How to Refresh Old Datasets | 9,134 |
Drinking water is crucial for human health and well-being. Accidental and intentional water contamination can pose great danger to consumers. Optimal design of a system that can quickly detect the presence of contamination in a water distribution network is very challenging for technical and operational reasons. However, on the one hand, improvement in chemical and biological sensor technology has created the possibility of designing efficient contamination detection systems. On the other hand, methods and tools from complex network theory, which was primarily the domain of mathematicians and physicists, provide analytical output for engineers to design, optimize, operate, and maintain complex network systems such as power grids, water distribution networks, telecommunication systems, the internet, roads, supply chains, traffic and transportation systems. In this work, we develop a new modeling approach for the optimal placement of sensors for contamination detection in a water distribution network. The approach combines classical optimization and complex systems theory in an original way. | A complex network theory approach for optimizing contamination warning sensor location in water distribution networks | 9,135 |
The number of people who decide to share their photographs publicly increases every day, consequently making available new, almost real-time insights into human behavior while traveling. Rather than having such statistics once a month or yearly, urban planners and tourism workers can now make decisions almost simultaneously with the emergence of new events. Moreover, these datasets can be used not only to compare how popular different touristic places are, but also to predict how popular they should be, taking their characteristics into account. In this paper we investigate how country attractiveness scales with its population and size, using the number of foreign users taking photographs, observed in a Flickr dataset, as a proxy for attractiveness. The results showed two things: country attractiveness scales with population to a certain extent, but not with size; and, unlike in the case of Spanish cities, country attractiveness scales sublinearly with population, not superlinearly. | Sublinear scaling of country attractiveness observed from Flickr dataset | 9,136 |
Community networks differ from regular networks by their organic growth patterns -- there is no central planning body that would decide how the network is built. Instead, the network grows in a bottom-up fashion as more people express interest in participating in the community and connect with their neighbours. People who participate in community networks are usually volunteers with limited free time. Due to these factors, making the management of community networks simpler and easier for all participants is the key component in boosting their growth. Specifics of individual networks often force communities to develop their own sets of tools and best practices which are hard to share and do not interoperate well with others. We propose a new general community network management platform nodewatcher that is built around the core principle of modularity and extensibility, making it suitable for reuse by different community networks. Devices are configured using platform-independent configuration which nodewatcher can transform into deployable firmware images, eliminating any manual device configuration, reducing errors, and enabling participation of novice maintainers. An embedded monitoring system enables live overview and validation of the whole community network. We show how the system successfully operates in an actual community wireless network, wlan Slovenija. | nodewatcher: A Substrate for Growing Your own Community Network | 9,137 |
In this article we present the Hayastan Shakarian (HS) index, a robustness index for complex networks. HS measures the impact of removing a network connection (edge) by comparing the sizes of the remaining connected components. Strictly speaking, the Hayastan Shakarian index is defined as the edge removal that produces the maximal inverse of the size of the largest connected component divided by the sum of the sizes of the remaining ones. We tested our index in attack strategies where nodes are disconnected in decreasing order of a specified metric. We considered using the Hayastan Shakarian cut (disconnecting the edge with maximal HS) and other well-known strategies, such as disconnection by highest betweenness centrality. All strategies were compared regarding the behavior of the robustness (R-index) during the attacks. In an attempt to simulate the internet backbone, the attacks were performed in complex networks with power-law degree distributions (scale-free networks). Preliminary results show that attacks based on the Hayastan Shakarian cut are more dangerous (decreasing robustness more) than the same attacks based on other centrality measures. We believe that the Hayastan Shakarian cut, as well as other measures based on the size of the largest connected component, provides a good addition to other robustness metrics for complex networks. | The Hayastan Shakarian cut: measuring the impact of network disconnections | 9,138 |
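The HS cut described in this abstract can be prototyped by brute force over edges. This sketch assumes one reading of the definition - score each edge by the total size of the non-largest components divided by the size of the largest component after its removal, and pick the maximizer - and uses plain BFS; the function names are ours, not from the article.

```python
from collections import defaultdict, deque

def component_sizes(nodes, edges):
    """Sizes of connected components, largest first (undirected graph)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, sizes = set(), []
    for s in nodes:
        if s in seen:
            continue
        queue, size = deque([s]), 0
        seen.add(s)
        while queue:
            u = queue.popleft()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        sizes.append(size)
    return sorted(sizes, reverse=True)

def hs_cut(nodes, edges):
    """Edge whose removal maximizes
    (sum of sizes of the remaining components) / (size of the largest one)."""
    best, best_score = None, -1.0
    for e in edges:
        sizes = component_sizes(nodes, [f for f in edges if f != e])
        score = sum(sizes[1:]) / sizes[0]
        if score > best_score:
            best, best_score = e, score
    return best, best_score
```

On a barbell graph (two triangles joined by a bridge), the bridge is the only edge whose removal disconnects the graph, so it is the HS cut.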
In this paper, a method of modeling the dynamics of electronic discussions is proposed, based on the so-called FOPDT (First Order Plus Dead Time) model known from process control. Knowledge of the model points to the possibility of estimating dynamic movements in discussions, as well as understanding and designing their maintenance and guidance. The method is applied to real discussions. | Modeling the Dynamics of Discussions in Social Networks | 9,139 |
Twitter as a micro-blogging platform rose to instant fame mainly due to its minimalist features that allow seamless communication between users. As the conversations grew thicker and faster, a placeholder feature called hashtags became important, as it captured the themes behind the tweets. Prior studies have investigated the conversation dynamics, interplay with other media platforms and communication patterns between users for specific event-based hashtags such as the #Occupy movement. Commonplace hashtags which are used on a daily basis have been largely ignored due to their seemingly innocuous presence in tweets and also due to the lack of connection with real-world events. However, it can be postulated that the utility of these hashtags is the main reason behind their continued usage. This study is aimed at understanding the rationale behind the usage of a particular type of commonplace hashtags - location hashtags such as country and city name hashtags. Tweets with the hashtag #singapore were extracted for a one-week duration. Manual and automatic tweet classification was performed, along with social network analysis, to identify the underlying themes. Seven themes were identified. Findings indicate that the hashtag is prominent in tweets about local events, local news, users' current location and landmark-related information sharing. Users who share content from social media sites such as Instagram make use of the hashtag in a more prominent way when compared to users who post textual content. News agencies, commercial bodies and celebrities make use of the hashtag more than ordinary individuals. Overall, the results show the non-conversational nature of the hashtag. The findings are to be validated with other country names and cross-validated with hashtag data from other social media platforms. | Whats in a Country Name - Twitter Hashtag Analysis of #singapore | 9,140 |
We present a simple spectral approach to the well-studied constrained clustering problem. It captures constrained clustering as a generalized eigenvalue problem with graph Laplacians. The algorithm works in nearly-linear time and provides concrete guarantees for the quality of the clusters, at least for the case of 2-way partitioning. In practice this translates to a very fast implementation that consistently outperforms existing spectral approaches both in speed and quality. | Scalable Constrained Clustering: A Generalized Spectral Method | 9,141 |
Contributing to the writing of history has never been as easy as it is today thanks to Wikipedia, a community-created encyclopedia that aims to document the world's knowledge from a neutral point of view. Though everyone can participate, it is well known that the editor community has a narrow diversity, with a majority of white male editors. While this participatory gender gap has been studied extensively in the literature, this work sets out to assess potential gender inequalities in Wikipedia articles along different dimensions: notability, topical focus, linguistic bias, structural properties, and meta-data presentation. We find that (i) women in Wikipedia are more notable than men, which we interpret as the outcome of a subtle glass ceiling effect; (ii) family-, gender-, and relationship-related topics are more present in biographies about women; (iii) linguistic bias manifests in Wikipedia since abstract terms tend to be used to describe positive aspects in the biographies of men and negative aspects in the biographies of women; and (iv) there are structural differences in terms of meta-data and hyperlinks, which have consequences for information-seeking activities. While some differences are expected, due to historical and social contexts, other differences are attributable to Wikipedia editors. The implications of such differences are discussed having Wikipedia contribution policies in mind. We hope that the present work will contribute to increased awareness about, first, gender issues in the content of Wikipedia, and second, the different levels on which gender biases can manifest on the Web. | Women Through the Glass Ceiling: Gender Asymmetries in Wikipedia | 9,142 |
Collaborative ranking is an emerging field of recommender systems that utilizes users' preference data rather than rating values. Unfortunately, neighbor-based collaborative ranking has gained little attention despite its greater flexibility and justifiability. This paper proposes a novel framework, called SibRank, that seeks to improve on state-of-the-art neighbor-based collaborative ranking methods. SibRank represents users' preferences as a signed bipartite network, and finds similar users through a novel personalized ranking algorithm in signed networks. | SibRank: Signed Bipartite Network Analysis for Neighbor-based Collaborative Ranking | 9,143 |
Social media can be viewed as a social system where the currency is attention. People post content and interact with others to attract attention and gain new followers. In this paper, we examine the distribution of attention across a large sample of users of the popular social media site Twitter. Through empirical analysis of these data we conclude that attention is very unequally distributed: the top 20% of Twitter users own more than 96% of all followers, 93% of the retweets, and 93% of the mentions. We investigate the mechanisms that lead to attention inequality and find that it results from the "rich-get-richer" and "poor-get-poorer" dynamics of attention diffusion. Namely, users who are "rich" in attention, because they are often mentioned and retweeted, are more likely to gain new followers, while those who are "poor" in attention are likely to lose followers. We develop a phenomenological model that quantifies attention diffusion and network dynamics, and solve it to study how attention inequality grows over time in a dynamic environment of social media. | Attention Inequality in Social Media | 9,144 |
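The headline statistic in this abstract - the share of all followers owned by the top 20% of accounts - is straightforward to compute from a list of follower counts. A minimal sketch (the function name and tie-handling are our own, not from the paper):

```python
def top_share(values, top_fraction=0.2):
    """Fraction of the total held by the top `top_fraction` of accounts.
    `values` is a list of per-account counts (followers, retweets, ...)."""
    ranked = sorted(values, reverse=True)
    k = max(1, int(len(ranked) * top_fraction))  # at least one account
    total = sum(ranked)
    return sum(ranked[:k]) / total if total else 0.0
```

Applied separately to follower, retweet, and mention counts, this yields the kind of inequality figures the abstract reports.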
We inhabit a world that is not only small but supports efficient decentralized search - an individual using local information can establish a line of communication with another completely unknown individual. Here we augment a hierarchical social network model with communication between and within communities. We argue that organization into communities would decrease overall decentralized search times. We take inspiration from the biological immune system, which organizes its search for pathogens in a hybrid modular strategy. Our strategy has relevance for search for rare pieces of information in online social networks. Our work also has implications for the design of efficient online networks that could have an impact on networks of human collaboration, scientific collaboration and networks used in targeted manhunts. Real-world systems, like online social networks, have high associated delays for long-distance links, since they are built on top of physical networks. Such systems have been shown to densify. Hence such networks will have a communication cost due to space and the requirement of maintaining connections. We have incorporated such a non-spatial cost of communication. We introduce the notion of a community size that increases with the size of the system, which is shown to reduce the time to search for information in networks. Our final strategy balances search times and participation costs and is shown to decrease the time to find information in decentralized search in online social networks. Our strategy also balances strong ties and weak ties over long distances and may ultimately lead to more productive and innovative networks of human communication and enterprise. We hope that this work will lay the foundation for strategies aimed at producing global-scale human interaction networks that are sustainable and lead to a more networked, diverse and prosperous society. | A Biologically Inspired Model of Distributed Online Communication Supporting Efficient Search and Diffusion of Innovation | 9,145 |
The emergence of new communication media such as blogs, online newspapers and social networks allows us to go further in the understanding of human behavior. Indeed, these public exchange spaces are now firmly planted in our modern society and appear to be powerful sensors of social behavior and opinion movements. In this paper, we focus on information spreading and attempt to understand the conditions under which a person decides to speak on a subject. For this purpose, we propose a set of measures that aim to characterize diffusion behavior. Our measures have been applied to messages related to two events that took place in January 2015: the presentation by Microsoft of a new virtual reality headset and the election of a radical-left political party in Greece. | Comment Diffusons-nous sur les Réseaux Sociaux ? | 9,146 |
Email classification and prioritization expert systems have the potential to automatically group emails and users into communities based on their communication patterns, which is one of the most tedious tasks. The exchange of emails among users, along with the time and content information, determines the pattern of communication. The intelligent systems extract these patterns from an email corpus of a single user or of all users and are limited to statistical analysis. However, the email information revealed in those methods is either constricted or widespread, i.e. single or all users respectively, which limits the usability of the resultant communities. In contrast to these extreme views of the email information, we relax the aforementioned restrictions by considering a subset of all users as multi-user information in an incremental way to extend the personalization concept. Accordingly, we propose a multi-user personalized email community detection method to discover the groupings of email users based on their structural and semantic intimacy. We construct a social graph using multi-user personalized emails. Subsequently, the social graph is uniquely leveraged with expedient attributes, such as semantics, to identify user communities through a collaborative similarity measure. The multi-user personalized communities, which are evaluated through different quality measures, enable email systems to filter spam or malicious emails and suggest contacts while composing emails. The experimental results over two randomly selected users from the email network, as constrained information, unveil partial interaction among 80% of email users with a 14% search space reduction, where we notice a 25% improvement in the clustering coefficient. | A Multi-User Perspective for Personalized Email Communities | 9,147 |
While most online social media accounts are controlled by humans, these platforms also host automated agents called social bots or sybil accounts. Recent literature reported on cases of social bots imitating humans to manipulate discussions, alter the popularity of users, pollute content and spread misinformation, and even perform terrorist propaganda and recruitment actions. Here we present BotOrNot, a publicly-available service that leverages more than one thousand features to evaluate the extent to which a Twitter account exhibits similarity to the known characteristics of social bots. Since its release in May 2014, BotOrNot has served over one million requests via our website and APIs. | BotOrNot: A System to Evaluate Social Bots | 9,148 |
Hosting platforms for software projects can form collaborative social networks and a prime example of this is GitHub which is arguably the most popular platform of this kind. An open source project recommendation system could be a major feature for a platform like GitHub, enabling its users to find relevant projects in a fast and simple manner. We perform network analysis on a constructed graph based on GitHub data and present a recommendation system that uses link prediction. | GitHub open source project recommendation system | 9,149 |
The majority of influence maximization (IM) studies focus on targeting influential seeders to trigger substantial information spread in social networks. In this paper, we consider a new and complementary problem of how to further increase the influence spread of given seeders. Our study is motivated by the observation that direct incentives could "boost" users so that they are more likely to be influenced by friends. We study the $k$-boosting problem, which aims to find $k$ users to boost so that the final "boosted" influence spread is maximized. The $k$-boosting problem is different from the IM problem because boosted users behave differently from seeders: boosted users are initially uninfluenced and we only increase their probability of being influenced. Our work also complements the IM studies because we focus on triggering larger influence spread on the basis of given seeders. Both the NP-hardness of the problem and the non-submodularity of the objective function pose challenges to the $k$-boosting problem. To tackle the problem on general graphs, we devise two efficient algorithms with a data-dependent approximation ratio. For the $k$-boosting problem on bidirected trees, we present an efficient greedy algorithm and a rounded dynamic programming that is a fully polynomial-time approximation scheme. We conduct extensive experiments using real social networks and synthetic bidirected trees. We show that boosting solutions returned by our algorithms achieve boosts of influence that are up to several times higher than those achieved by boosting solutions returned by intuitive baselines, which have no guarantee of solution quality. We also explore the "budget allocation" problem in our experiments. Compared with targeting seeders with the full budget, larger influence spread is achieved when we allocate the budget to both seeders and boosted users. | Boosting Information Spread: An Algorithmic Approach | 9,150 |
We consider all players and clubs in the top twenty world football leagues over the last fifteen seasons. The purpose of this paper is to reveal the top football players and identify springboard clubs. To do that, we construct two separate weighted networks. The player collaboration network consists of players, who are connected to each other if they ever played together at the same club. In the directed club transfer network, clubs are connected if players were ever transferred from one club to another. To get meaningful results, we apply different network analysis methods to our networks. Our approach based on PageRank reveals Cristiano Ronaldo as the top player. Using a variation of betweenness centrality, we identify Standard Liege as the best springboard club. | Identifying top football players and springboard clubs from a football player collaboration and club transfer networks | 9,151 |
This work analyzes a friendship network from a Massively Multiplayer Online Role-Playing Game (MMORPG). The network is based on data from a private server that was active from 2007 until 2011. The work conducts a standard analysis of the network and then divides players into different groups based on their activity. It examines how the friendship network correlates with the clan network (a clan being a self-organized group of players who often form a league and play on the same side in a match). The main part of the work is a recommendation method for players that are not part of any clan, based on communities of the friendship network. | Analysis of friendship network from MMORPG based data | 9,152 |
With the popularity of OSNs, finding a set of the most influential users (or nodes) so as to trigger the largest influence cascade is of significance. For example, companies may take advantage of the "word-of-mouth" effect to trigger a large cascade of purchases by offering free samples/discounts to those most influential users. This task is usually modeled as an influence maximization problem, and it has been widely studied in the past decade. However, considering that users in OSNs may participate in various kinds of online activities, e.g., giving ratings to products, joining discussion groups, etc., influence diffusion through online activities becomes even more significant. In this paper, we study the impact of online activities by formulating the influence maximization problem for social-activity networks (SANs) containing both users and online activities. To address the computational challenge, we define an influence centrality via random walks to measure influence, then use the Monte Carlo framework to efficiently estimate the centrality in SANs. Furthermore, we develop a greedy-based algorithm with two novel optimization techniques to find the most influential users. By conducting extensive experiments with real-world datasets, we show our approach is more efficient than the state-of-the-art algorithm IMM [17] when we need to handle large amounts of online activities. | Measuring and Maximizing Influence via Random Walk in Social Activity Networks | 9,153 |
Given a stream of heterogeneous graphs containing different types of nodes and edges, how can we spot anomalous ones in real-time while consuming bounded memory? This problem is motivated by and generalizes from its application in security to host-level advanced persistent threat (APT) detection. We propose StreamSpot, a clustering based anomaly detection approach that addresses challenges in two key fronts: (1) heterogeneity, and (2) streaming nature. We introduce a new similarity function for heterogeneous graphs that compares two graphs based on their relative frequency of local substructures, represented as short strings. This function lends itself to a vector representation of a graph, which is (a) fast to compute, and (b) amenable to a sketched version with bounded size that preserves similarity. StreamSpot exhibits desirable properties that a streaming application requires---it is (i) fully-streaming; processing the stream one edge at a time as it arrives, (ii) memory-efficient; requiring constant space for the sketches and the clustering, (iii) fast; taking constant time to update the graph sketches and the cluster summaries that can process over 100K edges per second, and (iv) online; scoring and flagging anomalies in real time. Experiments on datasets containing simulated system-call flow graphs from normal browser activity and various attack scenarios (ground truth) show that our proposed StreamSpot is high-performance; achieving above 95% detection accuracy with small delay, as well as competitive time and memory usage. | Fast Memory-efficient Anomaly Detection in Streaming Heterogeneous Graphs | 9,154
Modeling and predicting the popularity of online content is a significant problem for the practice of information dissemination, advertising, and consumption. Recent work analyzing massive datasets advances our understanding of popularity, but one major gap remains: To precisely quantify the relationship between the popularity of an online item and the external promotions it receives. This work supplies the missing link between exogenous inputs from public social media platforms, such as Twitter, and endogenous responses within the content platform, such as YouTube. We develop a novel mathematical model, the Hawkes intensity process, which can explain the complex popularity history of each video according to its type of content, network of diffusion, and sensitivity to promotion. Our model supplies a prototypical description of videos, called an endo-exo map. This map explains popularity as the result of an extrinsic factor - the amount of promotions from the outside world that the video receives, acting upon two intrinsic factors - sensitivity to promotion, and inherent virality. We use this model to forecast future popularity given promotions on a large 5-months feed of the most-tweeted videos, and found it to lower the average error by 28.6% from approaches based on popularity history. Finally, we can identify videos that have a high potential to become viral, as well as those for which promotions will have hardly any effect. | Expecting to be HIP: Hawkes Intensity Processes for Social Media Popularity | 9,155
It is generally accepted as common wisdom that receiving social feedback is helpful to (i) keep an individual engaged with a community and to (ii) facilitate an individual's positive behavior change. However, quantitative data on the effect of social feedback on continued engagement in an online health community is scarce. In this work we apply Mahalanobis Distance Matching (MDM) to demonstrate the importance of receiving feedback in the "loseit" weight loss community on Reddit. Concretely we show that (i) even when correcting for differences in word choice, users receiving more positive feedback on their initial post are more likely to return in the future, and that (ii) there are diminishing returns and social feedback on later posts is less important than for the first post. We also give a description of the type of initial posts that are more likely to attract this valuable social feedback. Though we cannot yet argue about ultimate weight loss success or failure, we believe that understanding the social dynamics underlying online health communities is an important step to devise more effective interventions. | The Effect of Social Feedback in a Reddit Weight Loss Community | 9,156 |
Forecasting events like civil unrest movements, disease outbreaks, financial market movements and government elections from open source indicators such as news feeds and social media streams is an important and challenging problem. From the perspective of human analysts and policy makers, forecasting algorithms need to provide supporting evidence and identify the causes related to the event of interest. We develop a novel multiple instance learning based approach that jointly tackles the problem of identifying evidence-based precursors and forecasts events into the future. Specifically, given a collection of streaming news articles from multiple sources we develop a nested multiple instance learning approach to forecast significant societal events across three countries in Latin America. Our algorithm is able to identify news articles considered as precursors for a protest. Our empirical evaluation shows the strengths of our proposed approaches in filtering candidate precursors, forecasting the occurrence of events with a lead time and predicting the characteristics of different events in comparison to several other formulations. We demonstrate through case studies the effectiveness of our proposed model in filtering the candidate precursors for inspection by a human analyst. | Modeling Precursors for Event Forecasting via Nested Multi-Instance Learning | 9,157
An increasing number of people use wearables and other smart devices to quantify various health conditions, ranging from sleep patterns, to body weight, to heart rates. Of these Quantified Selves, many choose to openly share their data via online social networks such as Twitter and Facebook. In this study, we use data for users who have chosen to connect their smart scales to Twitter, providing both a reliable time series of their body weight, as well as insights into their social surroundings and general online behavior. Concretely, we look at which social media features are predictive of physical status, such as body weight at the individual level, and activity patterns at the population level. We show that it is possible to predict an individual's weight using their online social behaviors, such as their self-description and tweets. Weekly and monthly patterns of quantified-self behaviors are also discovered. These findings could contribute to building models to monitor public health and to have more customized personal training interventions. While there are many studies using either quantified self or social media data in isolation, this is one of the few that combines the two data sources and, to the best of our knowledge, the only one that uses public data. | Quantified Self Meets Social Media: Sharing of Weight Updates on Twitter | 9,158
Understanding the demographics of app users is crucial, for example, for app developers, who wish to target their advertisements more effectively. Our work addresses this need by studying the predictability of user demographics based on the list of a user's apps which is readily available to many app developers. We extend previous work on the problem on three frontiers: (1) We predict new demographics (age, race, and income) and analyze the most informative apps for four demographic attributes included in our analysis. The most predictable attribute is gender (82.3 % accuracy), whereas the hardest to predict is income (60.3 % accuracy). (2) We compare several dimensionality reduction methods for high-dimensional app data, finding out that an unsupervised method yields superior results compared to aggregating the apps at the app category level, but the best results are obtained simply by the raw list of apps. (3) We look into the effect of the training set size and the number of apps on the predictability and show that both of these factors have a large impact on the prediction accuracy. The predictability increases, or in other words, a user's privacy decreases, the more apps the user has used, but somewhat surprisingly, after 100 apps, the prediction accuracy starts to decrease. | You Are What Apps You Use: Demographic Prediction Based on User's Apps | 9,159 |
What food is so good as to be considered pornographic? Worldwide, the popular #foodporn hashtag has been used to share appetizing pictures of peoples' favorite culinary experiences. But social scientists ask whether #foodporn promotes an unhealthy relationship with food, as pornography would contribute to an unrealistic view of sexuality. In this study, we examine nearly 10 million Instagram posts by 1.7 million users worldwide. An overwhelming (and uniform across the nations) obsession with chocolate and cake shows the domination of sugary dessert over local cuisines. Yet, we find encouraging traits in the association of emotion and health-related topics with #foodporn, suggesting food can serve as motivation for a healthy lifestyle. Social approval also favors the healthy posts, with users posting with healthy hashtags having an average of 1,000 more followers than those with unhealthy ones. Finally, we perform a demographic analysis which shows nation-wide trends of behavior, such as a strong relationship (r=0.51) between the GDP per capita and the attention to healthiness of their favorite food. Our results expose a new facet of food "pornography", revealing potential avenues for utilizing this precarious notion for promoting healthy lifestyles. | Fetishizing Food in Digital Age: #foodporn Around the World | 9,160 |
Spectral clustering and co-clustering are well-known techniques in data analysis, and recent work has extended spectral clustering to square, symmetric tensors and hypermatrices derived from a network. We develop a new tensor spectral co-clustering method that applies to any non-negative tensor of data. The result of applying our method is a simultaneous clustering of the rows, columns, and slices of a three-mode tensor, and the idea generalizes to any number of modes. The algorithm we design works by recursively bisecting the tensor into two pieces. We also design a new measure to understand the role of each cluster in the tensor. Our new algorithm and pipeline are demonstrated in both synthetic and real-world problems. On synthetic problems with a planted higher-order cluster structure, our method is the only one that can reliably identify the planted structure in all cases. On tensors based on n-gram text data, we identify stop-words and semantically independent sets; on tensors from an airline-airport multimodal network, we find worldwide and regional co-clusters of airlines and airports; and on tensors from an email network, we identify daily-spam and focused-topic sets. | General Tensor Spectral Co-clustering for Higher-Order Data | 9,161 |
Online consumer reviews reflect the testimonials of real people, unlike advertisements. As such, they have critical impact on potential consumers, and indirectly on businesses. According to a Harvard study (Luca 2011), a +1 rise in star-rating increases revenue by 5-9%. Problematically, such financial incentives have created a market for spammers to fabricate reviews, to unjustly promote or demote businesses, activities known as opinion spam (Jindal and Liu 2008). A vast majority of existing work on this problem has formulations based on static review data, with respective techniques operating in an offline fashion. Spam campaigns, however, are intended to make most impact during their course. Abnormal events triggered by spammers' activities could be masked in the load of future events, which static analysis would fail to identify. In this work, we approach the opinion spam problem with a temporal formulation. Specifically, we monitor a list of carefully selected indicative signals of opinion spam over time and design efficient techniques to both detect and characterize abnormal events in real-time. Experiments on datasets from two different review sites show that our approach is fast, effective, and practical to be deployed in real-world systems. | Temporal Opinion Spam Detection by Multivariate Indicative Signals | 9,162
Demographics, in particular, gender, age, and race, are a key predictor of human behavior. Despite the significant effect that demographics plays, most scientific studies using online social media do not consider this factor, mainly due to the lack of such information. In this work, we use state-of-the-art face analysis software to infer gender, age, and race from profile images of 350K Twitter users from New York. For the period from November 1, 2014 to October 31, 2015, we study which hashtags are used by different demographic groups. Though we find considerable overlap for the most popular hashtags, there are also many group-specific hashtags. | #greysanatomy vs. #yankees: Demographics and Hashtag Use on Twitter | 9,163 |
In this paper, we mine and learn to predict how similar a pair of users' interests towards videos are, based on demographic (age, gender and location) and social (friendship, interaction and group membership) information of these users. We use the video access patterns of active users as ground truth (a form of benchmark). We adopt tag-based user profiling to establish this ground truth, and justify why it is used instead of video-based methods, or many latent topic models such as LDA and Collaborative Filtering approaches. We then show the effectiveness of the different demographic and social features, and their combinations and derivatives, in predicting user interest similarity, based on different machine-learning methods for combining multiple features. We propose a hybrid tree-encoded linear model for combining the features, and show that it outperforms other linear and tree-based models. Our methods can be used to predict user interest similarity when the ground-truth is not available, e.g. for new users, or inactive users whose interests may have changed from old access data, and is useful for video recommendation. Our study is based on a rich dataset from Tencent, a popular service provider of social networks, video services, and various other services in China. | Who are Like-minded: Mining User Interest Similarity in Online Social Networks | 9,164
One major feature of social networks (e.g., massive online social networks) is the dissemination of information, such as news, rumors and opinions. Information can be propagated via natural connections in written, oral or electronic forms. The physics of information diffusion has been changed with the mainstream adoption of the Internet and Web. Until a few years ago, the major barrier for someone who wanted a piece of information to spread through a community was the cost of the technical infrastructure required to reach a large number of people. Today, with widespread access to the Internet, this bottleneck has largely been removed. Information diffusion has been one of the focuses in social network research area, due to its importance in social interactions and everyday life. More recently, during the last twenty to thirty years, there has been interest and attention not just in observing information and innovation flow, but also in influencing and creating them. Modeling information diffusion in networks enables us to reason about its spread. | Modeling Information Diffusion in Social Networks | 9,165 |
Online social networks (OSNs) are increasingly threatened by social bots which are software-controlled OSN accounts that mimic human users with malicious intentions. A social botnet refers to a group of social bots under the control of a single botmaster, which collaborate to conduct malicious behavior while mimicking the interactions among normal OSN users to reduce their individual risk of being detected. We demonstrate the effectiveness and advantages of exploiting a social botnet for spam distribution and digital-influence manipulation through real experiments on Twitter and also trace-driven simulations. We also propose the corresponding countermeasures and evaluate their effectiveness. Our results can help understand the potentially detrimental effects of social botnets and help OSNs improve their bot(net) detection systems. | The Rise of Social Botnets: Attacks and Countermeasures | 9,166 |
Understanding the usage of multiple OSNs (Online Social Networks) has been of significant research interest as it helps in identifying the unique and distinguishing trait in each social media platform that contributes to its continued existence. The comparison between the OSNs is insightful when it is done based on the representative majority of the users holding active accounts on all the platforms. In this research, we collected a set of user profiles holding accounts on both Twitter and Instagram, these platforms being of prominence among a majority of users. An extensive textual and visual analysis on the media content posted by these users revealed that both these platforms are indeed perceived differently at a fundamental level with Instagram engaging more of the users' heart and Twitter capturing more of their mind. These differences got reflected in almost every microscopic analysis done upon the linguistic, topical and visual aspects. | Tweeting the Mind and Instagramming the Heart: Exploring Differentiated Content Sharing on Social Media | 9,167
In this paper, we study follower demographics of Donald Trump and Hillary Clinton, the two leading candidates in the 2016 U.S. presidential race. We build a unique dataset US2016, which includes the number of followers for each candidate from September 17, 2015 to December 22, 2015. US2016 also includes the geographical location of these followers, the number of their own followers and, very importantly, the profile image of each follower. We use individuals' number of followers and profile images to analyze four dimensions of follower demographics: social status, gender, race and age. Our study shows that in terms of social influence, the Trumpists are more polarized than the Clintonists: they tend to have either a lot of influence or little influence. We also find that compared with the Clintonists, the Trumpists are more likely to be either very young or very old. Our study finds no gender affinity effect for Clinton in the Twitter sphere, but we do find that the Clintonists are more racially diverse. | Deciphering the 2016 U.S. Presidential Campaign in the Twitter Sphere: A Comparison of the Trumpists and Clintonists | 9,168
In this paper, we propose a framework to infer the topic preferences of Donald Trump's followers on Twitter. We first use latent Dirichlet allocation (LDA) to derive the weighted mixture of topics for each Trump tweet. Then we use negative binomial regression to model the "likes," with the weights of each topic serving as explanatory variables. Our study shows that attacking Democrats such as President Obama and former Secretary of State Hillary Clinton earns Trump the most "likes." Our framework of inference is generalizable to the study of other politicians. | Catching Fire via "Likes": Inferring Topic Preferences of Trump Followers on Twitter | 9,169
Borgs et al. [2016] investigated essential requirements for communities in preference networks. They defined six axioms on community functions, i.e., community detection rules. Though having elegant properties, the practicality of this axiom system is compromised by the intractability of checking two critical axioms, so no nontrivial consistent community function was reported in Borgs et al. [2016]. By adapting the two axioms in a natural way, we propose two new axioms that are efficiently checkable. We show that most of the desirable properties of the original axiom system are preserved. More importantly, the new axioms provide a general approach to constructing consistent community functions. We further find a natural consistent community function that is also enumerable and samplable, answering an open problem in the literature. | Communities in Preference Networks: Refined Axioms and Beyond | 9,170
In this document we describe the size of the Poblacion Flotante of Bogota (D.C.). The Poblacion Flotante is composed of people who live outside Bogota (D.C.), but who rely on the city for performing their job. We estimate the impact of the Poblacion Flotante relying on a new data source provided by telecommunications operators in Colombia, which enables us to estimate how many people commute daily from every municipality of Colombia to a specific area of Bogota (D.C.). We estimate that the size of the Poblacion Flotante could represent a 5.4% increase of Bogota (D.C.)'s population. During weekdays, the commuters tend to visit the city center more. | Report on the Poblacion Flotante of Bogota (D.C.) | 9,171
Websites have an inherent interest in steering user navigation in order to, for example, increase sales of specific products or categories, or to guide users towards specific information. In general, website administrators can use the following two strategies to influence their visitors' navigation behavior. First, they can introduce click biases to reinforce specific links on their website by changing their visual appearance, for example, by locating them on the top of the page. Second, they can utilize link insertion to generate new paths for users to navigate over. In this paper, we present a novel approach for measuring the potential effects of these two strategies on user navigation. Our results suggest that, depending on the pages for which we want to increase user visits, optimal link modification strategies vary. Moreover, simple topological measures can be used as proxies for assessing the impact of the intended changes on the navigation of users, even before these changes are implemented. | Assessing the Navigational Effects of Click Biases and Link Insertion on the Web | 9,172
Without sufficient preparation and on-site management, a mass-scale, unexpected, huge human crowd is a serious threat to public safety. A recent impressive tragedy is the 2014 Shanghai Stampede, where 36 people were killed and 49 were injured during the celebration of New Year's Eve on December 31st, 2014 on the Shanghai Bund. Due to the innately stochastic and complicated individual movement, it is not easy to predict collective gatherings, which potentially lead to crowd events. In this paper, leveraging the big data generated on Baidu map, we propose a novel approach for early warning of such potential crowd disasters, which has profound public benefits. An insightful observation is that, with the prevalence and convenience of mobile map services, users usually search on Baidu map to plan a route. Therefore, aggregating users' query data on Baidu map can provide prior, indicative information for estimating the future human population in a specific area ahead of time. Our careful analysis and deep investigation of the Baidu map data on various events also demonstrate a strong correlation pattern between the number of map queries and the number of positioning users in an area. Based on this observation, we propose a decision method utilizing query data on Baidu map to invoke warnings for potential crowd events about 1-3 hours in advance. We then construct a machine learning model with heterogeneous data (such as query data and mobile positioning data) to quantitatively measure the risk of potential crowd disasters. We evaluate the effectiveness of our methods on the data of Baidu map. | Early Warning of Human Crowds Based on Query Data from Baidu Map: Analysis Based on Shanghai Stampede | 9,173
Online communities provide a fertile ground for analyzing people's behavior and improving our understanding of social processes. Because both people and communities change over time, we argue that analyses of these communities that take time into account will lead to deeper and more accurate results. Using Reddit as an example, we study the evolution of users based on comment and submission data from 2007 to 2014. Even using one of the simplest temporal differences between users---yearly cohorts---we find wide differences in people's behavior, including comment activity, effort, and survival. Further, not accounting for time can lead us to misinterpret important phenomena. For instance, we observe that average comment length decreases over any fixed period of time, but comment length in each cohort of users steadily increases during the same period after an abrupt initial drop, an example of Simpson's Paradox. Dividing cohorts into sub-cohorts based on the survival time in the community provides further insights; in particular, longer-lived users start at a higher activity level and make more and shorter comments than those who leave earlier. These findings both give more insight into user evolution in Reddit in particular, and raise a number of interesting questions around studying online behavior going forward. | Averaging Gone Wrong: Using Time-Aware Analyses to Better Understand Behavior | 9,174
Social media systems allow Internet users a congenial platform to freely express their thoughts and opinions. Although this property represents incredible and unique communication opportunities, it also brings along important challenges. Online hate speech is an archetypal example of such challenges. Despite its magnitude and scale, there is a significant gap in understanding the nature of hate speech on social media. In this paper, we provide the first of a kind systematic large scale measurement study of the main targets of hate speech in online social media. To do that, we gather traces from two social media systems: Whisper and Twitter. We then develop and validate a methodology to identify hate speech on both these systems. Our results identify online hate speech forms and offer a broader understanding of the phenomenon, providing directions for prevention and detection approaches. | Analyzing the Targets of Hate in Online Social Media | 9,175 |
Social media platforms provide several social interactional features. Due to the large scale reach of social media, these interactional features help enable various types of political discourse. Constructive and diversified discourse is important for sustaining healthy communities and reducing the impact of echo chambers. In this paper, we empirically examine the role of a newly introduced Twitter feature, 'quote retweets' (or 'quote RTs') in political discourse, specifically whether it has led to improved, civil, and balanced exchange. Quote RTs allow users to quote the tweet they retweet, while adding a short comment. Our analysis using content, network and crowd labeled data indicates that the feature has increased political discourse and its diffusion, compared to existing features. We discuss the implications of our findings in understanding and reducing online polarization. | Quote RTs on Twitter: Usage of the New Feature for Political Discourse | 9,176 |
In this paper, we analyze the growth patterns of Donald Trump's followers (Trumpists, henceforth) on Twitter. We first construct a random walk model with a time trend to study the growth trend and the effects of public debates. We then analyze the relationship between Trump's activity on Twitter and the growth of his followers. Thirdly, we analyze the effects of such controversial events as calling for a Muslim ban and his 'schlonged' remark. | To Follow or Not to Follow: Analyzing the Growth Patterns of the Trumpists on Twitter | 9,177
Recently, there is a surge of interest in using point processes to model continuous-time user activities. This framework has resulted in novel models and improved performance in diverse applications. However, most previous works focus on the "open loop" setting where learned models are used for predictive tasks. Typically, we are interested in the "closed loop" setting where a policy needs to be learned to incorporate user feedback and guide user activities to desirable states. Although point processes have good predictive performance, it is not clear how to use them for the challenging closed loop activity guiding task. In this paper, we propose a framework to reformulate point processes into stochastic differential equations, which allows us to extend methods from stochastic optimal control to address the activity guiding problem. We also design an efficient algorithm, and show that our method guides user activities to desired states more effectively than the state of the art. | A Stochastic Differential Equation Framework for Guiding Online User Activities in Closed Loop | 9,178
Tumblr is one of the largest and most popular microblogging websites on the Internet. Studies show that due to high reachability among viewers, low publication barriers and social networking connectivity, microblogging websites are being misused by existing extremist groups as a platform to post hateful speech and recruit new members. Manual identification of such posts and communities is overwhelmingly impractical due to the large amount of posts and blogs being published every day. We propose a topic based web crawler primarily consisting of multiple phases: training a text classifier model on examples of hate promoting users only, extracting posts of an unknown Tumblr micro-blogger, classifying hate promoting bloggers based on their activity feeds, crawling through the external links to other bloggers, and performing a social network analysis on connected extremist bloggers. To investigate the effectiveness of our approach, we conduct experiments on a large real world dataset. Experimental results reveal that the proposed approach is an effective method and has an F-score of 0.80. We apply social network analysis based techniques and identify influential and core bloggers in a community. | Spider and the Flies : Focused Crawling on Tumblr to Detect Hate Promoting Communities | 9,179
Besides finding trends and unveiling typical patterns, modern information retrieval is increasingly more interested in the discovery of surprising information in textual datasets. In this work we focus on finding "unexpected links" in hyperlinked document corpora when documents are assigned to categories. To achieve this goal, we model the hyperlinks graph through node categories: the presence of an arc is fostered or discouraged by the categories of the head and the tail of the arc. Specifically, we determine a latent category matrix that explains common links. The matrix is built using a margin-based online learning algorithm (Passive-Aggressive), which enables us to process graphs with $10^{8}$ links in less than $10$ minutes. We show that our method provides better accuracy than most existing text-based techniques, with higher efficiency and relying on a much smaller amount of information. It also provides higher precision than standard link prediction, especially at low recall levels; the two methods are in fact shown to be orthogonal to each other and can therefore be fruitfully combined. | LlamaFur: Learning Latent Category Matrix to Find Unexpected Relations in Wikipedia | 9,180
Do users from Carnegie Mellon University form social communities on Facebook? Do signal processing researchers tightly collaborate with each other? Do Chinese restaurants in Manhattan cluster together? These seemingly different problems share a common structure: an attribute that may be localized on a graph. In other words, nodes activated by an attribute form a subgraph that can be easily separated from other nodes. In this paper, we thus focus on the task of detecting localized attributes on a graph. We are particularly interested in categorical attributes such as attributes in online social networks, ratings in recommender systems and viruses in cyber-physical systems because they are widely used in numerous data mining applications. To solve the task, we formulate a statistical hypothesis testing problem to decide whether a given attribute is localized or not. We propose two statistics: graph wavelet statistic and graph scan statistic, both of which are provably effective in detecting localized attributes. We validate the robustness of the proposed statistics on both simulated data and two real-world applications: high air-pollution detection and keyword ranking in a co-authorship network collected from IEEE Xplore. Experimental results show that the proposed graph wavelet statistic and graph scan statistic are effective and efficient. | Detecting Localized Categorical Attributes on Graphs | 9,181
Community structure detection in complex networks is important for understanding not only the topological structure of the network, but also its functions. The stochastic block model and nonnegative matrix factorization are two widely used methods for community detection, proposed from different perspectives. In this paper, the relations between them are studied. The log-likelihood function of the stochastic block model can be reformulated under the framework of nonnegative matrix factorization. Beyond the model equivalence, the algorithms employed by the two methods differ. Preliminary numerical experiments are carried out to compare the behaviors of the algorithms. | On Equivalence of Likelihood Maximization of Stochastic Block Model and Constrained Nonnegative Matrix Factorization | 9,182
The social networking era has left us with little privacy. The details of social network users are published on social networking sites, and vulnerability has reached new heights due to the overpowering effects of social networking. Sites like Facebook and Twitter have huge sets of users who publish files, comments, and messages on other users' walls. These messages and comments could be of any nature; even friends could post a comment that would harm a person's integrity. Thus there has to be a system that monitors the messages and comments posted on walls. If a message is found to be neutral (it does not have any harmful content), it can be published. If a message is found to have non-neutral content, it would be blocked by the social network manager. Non-neutral messages would be of a sexual, offensive, hateful, or pun-intended nature. Thus the social network manager can classify content as neutral or non-neutral and notify the user if messages of non-neutral nature appear. | System for Filtering Messages on Social Media Content | 9,183
Illicit drug use is on the rise and is of great interest to public health agencies and law enforcement agencies. As found by the National Survey on Drug Use and Health, 20 million Americans aged 12 years or older consumed illicit drugs in the past 30 days. Given their ubiquity in everyday life, drug abuse related studies have received constant attention. However, most of the existing studies rely on surveys. Surveys present a fair number of problems because of their nature: surveys on sensitive topics such as illicit drug use may not be answered truthfully by the people taking them, and selecting a representative sample to survey is another major challenge. In this paper, we explore the possibility of using big data from social media in order to understand illicit drug use behaviors. Instagram posts are collected using drug related terms by analyzing the hashtags supplied with each post. A large and dynamic dictionary of frequent illicit drug related slang is used to find these posts. These posts are studied to find common drug consumption behaviors with regard to time of day and week. Furthermore, by studying the accounts followed by the users of drug related posts, we hope to discover common interests shared by drug users. | Understanding Illicit Drug Use Behaviors by Mining Social Media | 9,184
From a crowded field with 17 candidates, Hillary Clinton and Donald Trump have emerged as the two front-runners in the 2016 U.S. presidential campaign. The two candidates each boast more than 5 million followers on Twitter, and at the same time both have witnessed hundreds of thousands of people leave their camps. In this paper we attempt to characterize individuals who have left Hillary Clinton and Donald Trump between September 2015 and March 2016. Our study focuses on three dimensions of social demographics: social capital, gender, and age. Within each camp, we compare the characteristics of the current followers with former followers, i.e., individuals who have left since September 2015. We use the number of followers to measure social capital, and profile images to infer gender and age. For classifying gender, we train a convolutional neural network (CNN). For age, we use the Face++ API. Our study shows that for both candidates, followers with more social capital are more likely to leave (or switch camps). For both candidates, females make up a larger presence among unfollowers than among current followers. Somewhat surprisingly, the effect is particularly pronounced for Clinton. Lastly, middle-aged individuals are more likely to leave Trump, and the young are more likely to leave Hillary Clinton. | Voting with Feet: Who are Leaving Hillary Clinton and Donald Trump? | 9,185
The article presents a study of some characteristics of post and comment publishing in the Russian segment of Facebook. A number of non-trivial results have been obtained. For example, a significant anomaly has been detected in the number of user accounts with a post publishing rate of approximately two posts per three days. The analysis has been carried out at the level of basic characteristics that are shared by most social media platforms. This makes possible a direct comparison of the obtained results with data from other platforms. The article presents an approach to the formalization and ordering of structural and informational elements on social media platforms. The approach is based on the representation of these structural elements in the form of a coherent hierarchy of container objects and their relations. This method allows to structure and analyze raw data from different social media platforms in a unified algorithmic design. The described approach is more formal, universal and constructive than other known approaches. | The dynamics of publishing of posts and comments on facebook (the russian segment, the first five months of 2013) | 9,186
Faced with the challenge of attracting user attention and revenue, social media websites have turned to video advertisements (video-ads). While in traditional media the video-ad market is mostly based on an interaction between content providers and marketers, the use of video-ads in social media has enabled a more complex interaction, that also includes content creator and viewer preferences. To better understand this novel setting, we present the first data-driven analysis of video-ad exhibitions on YouTube. | Understanding Video-Ad Consumption on YouTube: A Measurement Study on User Behavior, Popularity, and Content Properties | 9,187
A social tagging system allows users to add arbitrary strings, called "tags", on a shared resource to organize and manage information. The Yule--Simon process, which has shown the ability to capture the population dynamics of social tagging behavior, does not handle the mechanism of new vocabulary creation because it assumes that new vocabulary creation is a Poisson-like random process. In this research, we focus on the mechanism of vocabulary creation from the microscopic perspective and discuss whether it also follows the random process assumed in the Yule--Simon process. To capture the microscopic mechanism of vocabulary creation, we focus on the relationship between the number of tags used in the same entry and the local vocabulary creation rate. We find that the relationship is not the result of a simple random process, and differs between services. Furthermore, these differences depend on whether the user's tagging attitudes are private or open. These results provide the potential for a new index to identify the service's intrinsic nature. | How the nature of web services drives vocabulary creation in social tagging | 9,188
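The Yule--Simon process referenced in the abstract above is easy to simulate: with probability p a brand-new tag is created, otherwise an earlier tag occurrence is copied uniformly at random (rich-get-richer). The sketch below is only the textbook baseline the paper argues against, not the authors' model.

```python
# Yule--Simon baseline for tag population dynamics: new tag with
# probability p, otherwise copy a uniformly random past occurrence.
import random
from collections import Counter

def yule_simon(n, p, rng=random):
    """Simulate n tagging events; returns a Counter of tag frequencies."""
    history, next_id = [], 0
    for _ in range(n):
        if not history or rng.random() < p:
            history.append(next_id)  # invent a new tag
            next_id += 1
        else:
            history.append(rng.choice(history))  # preferential reuse
    return Counter(history)
```

For small p the copy step dominates and the resulting frequency distribution is heavy-tailed, which is the macroscopic behavior the paper contrasts with its microscopic findings.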
Influential users play an important role in online social networks since users tend to have an impact on one another. Therefore, the proposed work analyzes users and their behavior in order to identify influential users and predict user participation. Normally, the success of a social media site is dependent on the activity level of the participating users. For both online social networking sites and individual users, it is of interest to find out if a topic will be interesting or not. In this article, we propose association rule learning to detect relationships between users. In order to verify the findings, several experiments were executed based on social network analysis, in which the most influential users identified from association rule learning were compared to the results from Degree Centrality and Page Rank Centrality. The results clearly indicate that it is possible to identify the most influential users using association rule learning. In addition, the results also indicate a lower execution time compared to state-of-the-art methods. | Finding Influential Users in Social Media Using Association Rule Learning | 9,189
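Association rule learning over user co-participation, as described above, reduces to computing support and confidence for pairwise rules. The toy sketch below mines rules of the form "if user X participates in a thread, user Y does too"; the data layout and thresholds are assumptions for illustration.

```python
# Toy association-rule mining between users from co-participation in
# discussion threads. Hypothetical data layout; not the paper's system.
from itertools import combinations

def association_rules(threads, min_support=0.5, min_conf=0.6):
    """threads: list of sets of user ids.
    Returns (X, Y, support, confidence) for rules 'X participates => Y'."""
    n = len(threads)
    users = set().union(*threads)
    rules = []
    for a, b in combinations(sorted(users), 2):
        for x, y in ((a, b), (b, a)):
            sup_x = sum(1 for t in threads if x in t) / n
            sup_xy = sum(1 for t in threads if x in t and y in t) / n
            if sup_xy >= min_support and sup_x > 0:
                conf = sup_xy / sup_x
                if conf >= min_conf:
                    rules.append((x, y, sup_xy, conf))
    return rules
```

Users appearing on the right-hand side of many high-confidence rules are candidates for "influential", since others' participation predicts theirs.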
Most previous work on influence maximization in social networks is limited to the non-adaptive setting in which the marketer is supposed to select all of the seed users, to give free samples or discounts to, up front. A disadvantage of this setting is that the marketer is forced to select all the seeds based solely on a diffusion model. If some of the selected seeds do not perform well, there is no opportunity to course-correct. A more practical setting is the adaptive setting in which the marketer initially selects a batch of users and observes how well seeding those users leads to a diffusion of product adoptions. Based on this market feedback, she formulates a policy for choosing the remaining seeds. In this paper, we study adaptive offline strategies for two problems: (a) MAXSPREAD -- given a budget on number of seeds and a time horizon, maximize the spread of influence and (b) MINTSS -- given a time horizon and an expected number of target users to be influenced, minimize the number of seeds that will be required. In particular, we present theoretical bounds and empirical results for an adaptive strategy and quantify its practical benefit over the non-adaptive strategy. We evaluate adaptive and non-adaptive policies on three real data sets. We conclude that while benefit of going adaptive for the MAXSPREAD problem is modest, adaptive policies lead to significant savings for the MINTSS problem. | Adaptive Influence Maximization in Social Networks: Why Commit when You can Adapt? | 9,190
Exploring small connected and induced subgraph patterns (CIS patterns, or graphlets) has recently attracted considerable attention. Despite recent efforts on computing the number of instances a specific graphlet appears in a large graph (i.e., the total number of CISes isomorphic to the graphlet), little attention has been paid to characterizing a node's graphlet degree, i.e., the number of CISes isomorphic to the graphlet that include the node, which is an important metric for analyzing complex networks such as social and biological networks. Similar to global graphlet counting, it is challenging to compute node graphlet degrees for a large graph due to the combinatorial nature of the problem. Unfortunately, previous methods of computing global graphlet counts are not suited to solve this problem. In this paper we propose sampling methods to estimate node graphlet degrees for undirected and directed graphs, and analyze the error of our estimates. To the best of our knowledge, we are the first to study this problem and give a fast scalable solution. We conduct experiments on a variety of real-world datasets that demonstrate that our methods accurately and efficiently estimate node graphlet degrees for graphs with millions of edges. | A Fast Sampling Method of Exploring Graphlet Degrees of Large Directed and Undirected Graphs | 9,191
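For the simplest graphlet, the triangle, a node's graphlet degree can be estimated by sampling pairs of its neighbors and checking whether they are connected. The sketch below illustrates this sampling idea in the spirit of the abstract above; it is not the paper's estimator, which covers general graphlets.

```python
# Sketch: estimate a node's triangle degree (number of triangles
# containing it) by sampling neighbor pairs. Illustrative only.
import random
from math import comb

def estimate_triangle_degree(adj, v, num_samples, rng=random):
    """adj: dict node -> set of neighbors. Unbiased estimator:
    (fraction of sampled neighbor pairs that are adjacent) * C(deg(v), 2)."""
    nbrs = list(adj[v])
    if len(nbrs) < 2:
        return 0.0
    hits = 0
    for _ in range(num_samples):
        a, b = rng.sample(nbrs, 2)  # uniform pair of distinct neighbors
        if b in adj[a]:
            hits += 1
    return hits / num_samples * comb(len(nbrs), 2)
```

Each sampled pair is an unbiased Bernoulli trial for "this neighbor pair closes a triangle", so averaging and rescaling by the number of pairs gives an unbiased estimate whose variance shrinks with the sample size.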
Detecting and preventing outbreaks of mosquito-borne diseases such as Dengue and Zika in Brazil and other tropical regions has long been a priority for governments in affected areas. Streaming social media content, such as Twitter, is increasingly being used for health vigilance applications such as flu detection. However, previous work has not addressed the complexity of drastic seasonal changes in Twitter content across multiple epidemic outbreaks. In order to address this gap, this paper contrasts two complementary approaches to detecting Twitter content that is relevant for Dengue outbreak detection, namely supervised classification and unsupervised clustering using topic modelling. Each approach has benefits and shortcomings. Our classifier achieves a prediction accuracy of about 80\% based on a small training set of about 1,000 instances, but the need for manual annotation makes it hard to track seasonal changes in the nature of the epidemics, such as the emergence of new types of virus in certain geographical locations. In contrast, LDA-based topic modelling scales well, generating cohesive and well-separated clusters from larger samples. While clusters can be easily re-generated following changes in epidemics, this approach makes it hard to clearly segregate relevant tweets into well-defined clusters. | Tracking Dengue Epidemics using Twitter Content Classification and Topic Modelling | 9,192
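The supervised-classification route described above can be illustrated with a minimal multinomial Naive Bayes over word counts with Laplace smoothing. This is a generic baseline for relevance filtering, not the paper's classifier, and the example tokens are invented.

```python
# Minimal Naive Bayes relevance classifier over tweet tokens.
# Illustrative baseline; features and labels are assumptions.
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label). Returns (log priors, word log-probs)."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    for tokens, label in docs:
        word_counts[label].update(tokens)
    vocab = {w for c in word_counts.values() for w in c}
    log_prior = {c: math.log(n / len(docs)) for c, n in class_counts.items()}
    log_like = {}
    for c in class_counts:
        total = sum(word_counts[c].values())
        # Laplace smoothing so unseen (class, word) pairs keep finite log-prob
        log_like[c] = {w: math.log((word_counts[c][w] + 1) / (total + len(vocab)))
                       for w in vocab}
    return log_prior, log_like

def predict(model, tokens):
    log_prior, log_like = model
    scores = {c: log_prior[c] + sum(log_like[c].get(w, 0.0) for w in tokens)
              for c in log_prior}
    return max(scores, key=scores.get)
```

The manual-annotation bottleneck the abstract mentions shows up here directly: `train_nb` needs labeled `docs`, and the model must be retrained whenever the vocabulary of an epidemic season drifts.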
We propose an analytical framework able to investigate discussions about polarized topics in online social networks from many different angles. The framework supports the analysis of social networks along several dimensions: time, space and sentiment. We show that the proposed analytical framework and the methodology can be used to mine knowledge about the perception of complex social phenomena. We selected the refugee crisis discussions over Twitter as the case study. This difficult and controversial topic is an increasingly important issue for the EU. The raw stream of tweets is enriched with space information (user and mentioned locations), and sentiment (positive vs. negative) w.r.t. refugees. Our study shows differences in positive and negative sentiment in EU countries, in particular in UK, and by matching events, locations and perception it underlines opinion dynamics and common prejudices regarding the refugees. | Sentiment-enhanced Multidimensional Analysis of Online Social Networks: Perception of the Mediterranean Refugees Crisis | 9,193
The central problems in the social sciences concern the social and psychological mechanisms and conditions required for the emergence and stability of human groups. The present article is dedicated to the problem of the stability of human groups. We model human groups using locally interacting systems of automata with relations and reactions, and using structural balance theory. Structural balance theory ties the emergence of a human group to each actor's thoughts about how other actors treat him and to his perception of those actors. The formalization of balance theory by Cartwright and Harary within a graph-theoretical setting made it possible to obtain a number of mathematical results pertaining to an algebraic formulation of the theory of balance in signed networks/graphs. A deeper generalization of balance theory, as smooth product-potential fields on a domain, gives us the ability to create a theory of 'smooth product-potential social fields'. We then find that every discrete product-potential system is tightly connected with another process - multiplication of randomly chosen matrices - and we find connections between stationary measures and certain algebraic objects. | Modeling of human society as a locally interacting product-potential networks of automaton | 9,194
Illicit drug trade via social media sites, especially photo-oriented Instagram, has become a severe problem in recent years. As a result, tracking drug dealing and abuse on Instagram is of interest to law enforcement agencies and public health agencies. In this paper, we propose a novel approach to detecting drug abuse and dealing automatically by utilizing multimodal data on social media. This approach also enables us to identify drug-related posts and analyze the behavior patterns of drug-related user accounts. To better utilize multimodal data on social media, multimodal analysis methods including multitask learning and decision-level fusion are employed in our framework. Experiment results on expertly labeled data have demonstrated the effectiveness of our approach, as well as its scalability and reproducibility over labor-intensive conventional approaches. | Tracking Illicit Drug Dealing and Abuse on Instagram using Multimodal Analysis | 9,195
In this paper, we investigate the profit-driven team grouping problem in social networks. We consider a setting in which people possess different skills, and the compatibility between these individuals is captured by a social network. Moreover, there is a collection of tasks, where each task requires a specific set of skills and yields a profit upon completion. Individuals may collaborate with each other as \emph{teams} to accomplish a set of tasks. We aim to find a group of teams to maximize the total profit of the tasks that they can complete. Any feasible grouping must satisfy the following conditions: (i) each team possesses all the skills required by the task assigned to it, (ii) individuals belonging to the same team are socially compatible, and (iii) no individual is overloaded. We refer to this as the \textsc{TeamGrouping} problem. We analyze the computational complexity of this problem and then propose a linear program-based approximation algorithm to address it and its variants. Although we focus on team grouping, our results apply to a broad range of optimization problems that can be formulated as cover decomposition problems. | Profit-Driven Team Grouping in Social Networks | 9,196 |
Social networking services like Twitter have been playing an important role in people's daily life, since they support new ways of communicating effectively and sharing information. The advantages of these social network services have enabled them to grow rapidly. However, the rise of social network services is leading to an increase of unwanted, disruptive information from spammers, malware disseminators, and other content polluters. The negative effects of social spammers not only annoy users, but also lead to financial loss and privacy issues. There are two main challenges for spammer detection on Twitter. Firstly, social network data arrives as a huge volume of streaming social data. Secondly, spammers continually change their spamming strategy, such as changing content patterns or trying to gain social influence, disguising themselves as far as possible. Given these challenges, it is hard to directly apply traditional batch learning methods and still quickly adapt to new spamming patterns in high-volume, real-time social media data. We need an anti-spammer system that can adjust the learning model whenever it receives label feedback. Moreover, the data on social media may be unbounded, so the system must allow efficient model updates in both computation and memory requirements. Online learning is an ideal solution for this problem: such methods incrementally adapt the learning model with every single feedback and adjust to the changing patterns of spammers over time. Our experiments demonstrate that an anti-spam system based on an online learning approach handles the fast-changing behavior of spammers more efficiently than batch learning methods. We also attempt to find the optimal online learning method and study the effectiveness of various feature sets on these online learning methods. | Online learning for Social Spammer Detection on Twitter | 9,197
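The online-learning setting described above (one update per label feedback, bounded memory) can be illustrated with the simplest such learner, a perceptron over sparse bag-of-words features. This is a generic baseline sketch, not the paper's system, and the feature choice is an assumption.

```python
# Minimal online learner for spam detection: a mistake-driven perceptron
# over sparse token features. Each labeled example triggers at most one
# O(|tokens|) update, so memory and compute stay bounded per item.
from collections import defaultdict

class OnlinePerceptron:
    def __init__(self):
        self.w = defaultdict(float)  # sparse weights, one per token

    def score(self, tokens):
        return sum(self.w[t] for t in tokens)

    def predict(self, tokens):
        return 1 if self.score(tokens) > 0 else -1

    def update(self, tokens, label):
        """label: +1 spam, -1 ham. Update only on mistakes."""
        if self.predict(tokens) != label:
            for t in tokens:
                self.w[t] += label
```

Because the weights drift with every mistake, the model automatically tracks spammers who change their content patterns, which is exactly the advantage over batch retraining the abstract argues for.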
Most studies on influence maximization focus on one-shot propagation, i.e. the influence is propagated from seed users only once following a probabilistic diffusion model and users' activations are determined via a single cascade. In reality it is often the case that a user needs to be cumulatively impacted by receiving enough pieces of information propagated to her before she makes the final purchase decision. In this paper we model such cumulative activation as the following process: first multiple pieces of information are propagated independently in the social network following the classical independent cascade model, then the user will be activated (and adopt the product) if the cumulative number of pieces of information she received reaches her cumulative activation threshold. Two optimization problems are investigated under this framework: seed minimization with cumulative activation (SM-CA), which asks how to select a seed set with minimum size such that the number of cumulatively active nodes reaches a given requirement $\eta$; influence maximization with cumulative activation (IM-CA), which asks how to choose a seed set with fixed budget to maximize the number of cumulatively active nodes. For the SM-CA problem, we design a greedy algorithm that yields a bicriteria $O(\ln n)$-approximation when $\eta=n$, where $n$ is the number of nodes in the network. For both the SM-CA problem with $\eta<n$ and the IM-CA problem, we prove strong inapproximability results. Despite the hardness results, we propose two efficient heuristic algorithms for SM-CA and IM-CA respectively based on the reverse reachable set approach. Experimental results on different real-world social networks show that our algorithms significantly outperform baseline algorithms. | Cumulative Activation in Social Networks | 9,198
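The cumulative-activation process defined above is straightforward to simulate: run several independent-cascade propagations, count how many of them reach each node, and activate the nodes whose counts meet their thresholds. The sketch below is an illustrative simulation, not the paper's optimization algorithms.

```python
# Simulation of cumulative activation: multiple independent IC cascades,
# a node adopts if enough cascades reached it. Illustrative only.
import random

def ic_reach(adj, seeds, p, rng):
    """One independent-cascade run: returns the set of reached nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        u = frontier.pop()
        for v in adj.get(u, ()):
            if v not in active and rng.random() < p:
                active.add(v)
                frontier.append(v)
    return active

def cumulatively_active(adj, seeds, p, thresholds, num_cascades, rng=random):
    """thresholds: node -> number of cascades needed for adoption."""
    counts = {v: 0 for v in thresholds}
    for _ in range(num_cascades):
        for v in ic_reach(adj, seeds, p, rng):
            counts[v] = counts.get(v, 0) + 1
    return {v for v, c in counts.items() if c >= thresholds.get(v, 1)}
```

A node reached by every cascade can still fail to adopt if its threshold exceeds the number of information pieces propagated, which is the key difference from single-cascade activation.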
Many information systems use tags and keywords to describe and annotate content. These allow for efficient organization and categorization of items, as well as facilitate relevant search queries. As such, the selected set of tags for an item can have a considerable effect on the volume of traffic that eventually reaches an item. In settings where tags are chosen by an item's creator, who in turn is interested in maximizing traffic, a principled approach for choosing tags can prove valuable. In this paper we introduce the problem of optimal tagging, where the task is to choose a subset of tags for a new item such that the probability of a browsing user reaching that item is maximized. We formulate the problem by modeling traffic using a Markov chain, and asking how transitions in this chain should be modified to maximize traffic into a certain state of interest. The resulting optimization problem involves maximizing a certain function over subsets, under a cardinality constraint. We show that the optimization problem is NP-hard, but nonetheless admits a (1-1/e)-approximation via a simple greedy algorithm. Furthermore, the structure of the problem allows for an efficient implementation of the greedy step. To demonstrate the effectiveness of our method, we perform experiments on three tagging datasets, and show that the greedy algorithm outperforms other baselines. | Optimal Tagging with Markov Chain Optimization | 9,199
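A toy version of the greedy approach described above: model browsing as a walk alternating between tags and items, where choosing a tag set for the new item adds transitions from those tags to it, and greedily pick the tag that most increases the probability of hitting the item within a few steps. The walk model, the step budget, and all identifiers are assumptions made for illustration; the paper's Markov chain and greedy step differ.

```python
# Toy greedy tag selection on a tag/item browsing chain. Illustrative only.
def hit_probability(item_tags, target, chosen, steps):
    """P(a browser reaches `target` within `steps` tag->item transitions).
    item_tags: existing item -> set of tags; `chosen` are target's tags.
    Walk: uniform start tag, then tag -> uniform item with that tag ->
    uniform tag of that item -> ...; absorb on reaching `target`."""
    tags = set().union(*item_tags.values(), chosen)
    items_for = {t: [i for i, ts in item_tags.items() if t in ts] for t in tags}
    for t in chosen:
        items_for[t] = items_for[t] + [target]
    dist = {("tag", t): 1 / len(tags) for t in tags}
    hit = 0.0
    for _ in range(steps):
        new = {}
        for (kind, name), p in dist.items():
            if kind == "tag":
                outs = items_for[name]
                for i in outs:
                    if i == target:
                        hit += p / len(outs)  # absorbed at the new item
                    else:
                        new[("item", i)] = new.get(("item", i), 0.0) + p / len(outs)
            else:
                ts = item_tags[name]
                for t in ts:
                    new[("tag", t)] = new.get(("tag", t), 0.0) + p / len(ts)
        dist = new
    return hit

def greedy_tags(item_tags, target, all_tags, k, steps=4):
    """Greedily add the tag with the largest marginal gain, k times."""
    chosen = set()
    for _ in range(k):
        best = max((t for t in all_tags if t not in chosen),
                   key=lambda t: hit_probability(item_tags, target,
                                                 chosen | {t}, steps))
        chosen.add(best)
    return chosen
```

In this toy model the greedy step favors less crowded tags: a tag shared with few existing items routes a larger fraction of its traffic to the new item.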