text | source | __index_level_0__ |
|---|---|---|
Communities are an important feature of social networks. The goal of this paper is to propose a mathematical model to study the community structure in social networks. For this, we consider a particular case of a social network, namely information networks. We assume that there is a population of agents who are interested in obtaining content. Agents differ in the type of content they are interested in. The goal of agents is to form communities in order to maximize their utility for obtaining and producing content. We use this model to characterize the structure of communities that emerge as a Nash equilibrium in this setting. The work presented in this paper generalizes results in the literature that were obtained for a continuous agent model to the case of a discrete agent population model. We note that a discrete agent set reflects real-life information networks more accurately and is needed in order to gain additional insights into the community structure, such as the connectivity (graph structure) within a community and information dissemination within a community. | Community Structures in Information Networks for a Discrete Agent Population | 9,800 |
With the worldwide spread of the 2019 novel coronavirus, although the WHO has officially named the disease COVID-19, one controversial term - "Chinese Virus" - is still being used by a great number of people. In the meantime, global online media coverage of COVID-19-related racial attacks has increased steadily, most of it anti-Chinese or anti-Asian. As the pandemic becomes increasingly severe, more people talk about it on social media platforms such as Twitter. When they refer to COVID-19, they mainly do so in two ways: using controversial terms like "Chinese Virus" or "Wuhan Virus", or using non-controversial terms like "Coronavirus". In this study, we attempt to characterize the Twitter users who use controversial terms and those who use non-controversial terms. We use the Tweepy API to retrieve 17 million related tweets and information about their authors. We find significant differences between these two groups of Twitter users across their demographics, user-level features like the number of followers, political following status, as well as their geo-locations. Moreover, we apply classification models to predict Twitter users who are more likely to use controversial terms. To the best of our knowledge, this is the first large-scale social media-based study to characterize users with respect to their usage of controversial terms during a major crisis. | Sense and Sensibility: Characterizing Social Media Users Regarding the Use of Controversial Terms for COVID-19 | 9,801 |
Given a social network $G$ and an integer $k$, the influence maximization (IM) problem asks for a seed set $S$ of $k$ nodes from $G$ to maximize the expected number of nodes influenced via a propagation model. The majority of the existing algorithms for the IM problem are developed only under the non-adaptive setting, i.e., where all $k$ seed nodes are selected in one batch without observing how they influence other users in the real world. In this paper, we study the adaptive IM problem where the $k$ seed nodes are selected in batches of equal size $b$, such that the $i$-th batch is identified after the actual influence results of the former $i-1$ batches are observed. We propose the first practical algorithm for the adaptive IM problem that provides a worst-case approximation guarantee of $1-\mathrm{e}^{\rho_b(\varepsilon-1)}$, where $\rho_b=1-(1-1/b)^b$ and $\varepsilon \in (0, 1)$ is a user-specified parameter. In particular, we propose a general framework AdaptGreedy that can be instantiated by any existing non-adaptive IM algorithm with an expected approximation guarantee. Our approach is based on a novel randomized policy that is applicable to the general adaptive stochastic maximization problem, which may be of independent interest. In addition, we propose a novel non-adaptive IM algorithm called EPIC which not only provides a strong expected approximation guarantee, but also exhibits superior performance compared with existing IM algorithms. Meanwhile, we clarify some existing misunderstandings in recent work and shed light on further study of the adaptive IM problem. We conduct experiments on real social networks to evaluate our proposed algorithms comprehensively, and the experimental results strongly corroborate the superiority and effectiveness of our approach. | Efficient Approximation Algorithms for Adaptive Influence Maximization | 9,802 |
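An editorial aside on the guarantee quoted above: since $1-\mathrm{e}^{\rho_b(\varepsilon-1)}$ with $\rho_b=1-(1-1/b)^b$ is a closed-form expression, its behavior is easy to inspect numerically. A minimal Python sketch, assuming nothing beyond the formula as stated in the abstract:

```python
import math

def adaptive_im_guarantee(b: int, eps: float) -> float:
    """Worst-case guarantee 1 - e^{rho_b * (eps - 1)}, with rho_b = 1 - (1 - 1/b)^b."""
    rho_b = 1.0 - (1.0 - 1.0 / b) ** b
    return 1.0 - math.exp(rho_b * (eps - 1.0))

# As b grows, rho_b decreases toward 1 - 1/e, so the guarantee degrades
# toward a fixed limit: larger batches mean less adaptivity.
for b in (1, 5, 50):
    print(b, round(adaptive_im_guarantee(b, eps=0.1), 4))
```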
We know that reputation in organisational contexts can be understood as a valuable asset that requires diligent management. It directly affects how a firm is publicly perceived and, indirectly, how a firm will perform economically. The establishment of social media as ubiquitous tools of communication has changed how corporations manage their reputation. CEOs in particular face novel responsibilities, as they deal with their personal image, which in turn affects the reputation of their firm. Whereas CEO and corporate reputation have been researched in isolation from each other, little is known about how a CEO's social media reputation management affects corporate reputation. This research-in-progress paper aims to highlight this research gap with a literature review on the current status of reputation management and measurement by means of social media. We further propose a research design that combines sentiment analysis, frequency detection, and content analysis, and discuss further research prospects. | You are now an Influencer! Measuring CEO Reputation in Social Media | 9,803 |
A growing number of people use social media to seek information or coordinate relief activities in times of crisis. Thus, social media is increasingly deployed by emergency agencies as well to reach more people in crisis situations. However, the large amount of available data on social media could also be used by emergency agencies to understand how they are perceived by the public and to improve their communication. In this study, we examined the Twitter communication about the German emergency agency "Johanniter-Unfall-Hilfe" by conducting a frequency, sentiment, social network and content analysis. The results reveal that a right-wing political cluster politically instrumentalised an incident related to this agency. Furthermore, some individuals used social media to express criticism. It can be concluded that the use of social media analytics in the daily routine of emergency management professionals can be beneficial for improving their social media communication strategy. | The Potential of Social Media Analytics for Improving Social Media Communication of Emergency Agencies | 9,804 |
While convergence behaviour archetypes can explain the behaviour of individuals who actively converge on and participate in crises, less is known about individuals who converge on an event and choose to remain passive, i.e., "bystanders". Bystanders are important because of their proximity to an event and their function as an "eye-witness". To investigate the role of bystanders in crisis communications, we analysed Twitter communication generated during the 2016 Munich Shooting event. Our findings revealed that the impassive convergence behaviour archetype could influence an event as a passive and rational "eye-witness" by gathering and sharing information close to where the event is occurring. | Convergence Behaviour of Bystanders: An Analysis of 2016 Munich Shooting Twitter Crisis Communication | 9,805 |
In this paper, we present a type of media disorder which we call "junk news bubbles" and which derives from the effort invested by online platforms and their users to identify and share contents with rising popularity. Such emphasis on trending matters, we claim, can have two detrimental effects on public debates: first, it shortens the amount of time available to discuss each matter; second, it increases the ephemeral concentration of media attention. We provide a formal description of the dynamics of junk news bubbles through a mathematical exploration of the famous "public arenas model" developed by Hilgartner and Bosk in 1988. Our objective is to describe the dynamics of junk news bubbles as precisely as possible to facilitate their further investigation with empirical data. | Junk News Bubbles: Modelling the Rise and Fall of Attention in Online Arenas | 9,806 |
With the rapid development of socio-economics, the task of discovering functional zones becomes critical to better understand the interactions between social activities and spatial locations. In this paper, we propose a framework to discover real functional zones from biased and extremely sparse Points of Interest (POIs). To cope with the bias and sparsity of POIs, the unbiased inner influences between spatial locations and human activities are introduced to learn a balanced and dense latent region representation. In addition, a spatial-location-based clustering method is also included to enrich the spatial information for the latent region representation and enhance region functionality consistency for fine-grained region segmentation. Moreover, to properly annotate the various fine-grained region functionalities, we estimate the functionality of the regions and rank them by the differences between the normalized POI distributions to reduce the inconsistency caused by the fine-grained segmentation. Thus, our whole framework is able to properly address the biased categories in sparse POI data and explore true functional zones at a fine-grained level. To validate the proposed framework, a case study is evaluated using very large real-world user GPS and POI data from the city of Raleigh. The results demonstrate that the proposed framework can identify functional zones better than the benchmarks and, therefore, enhance understanding of urban structures with a finer granularity under practical conditions. | Discovering Urban Functional Zones from Biased and Sparse Points of Interests and Sparse Human Activities | 9,807 |
The reports of Russian interference in the 2016 United States elections brought into the center of public attention concerns related to the ability of foreign actors to increase social discord and take advantage of personal user data for political purposes. They have raised questions regarding the ways and the extent to which data can be used to create psychographic profiles to determine what kind of advertisement would be most effective to persuade a particular person in a particular location for some political event. In this work, we study the political ads dataset collected by ProPublica, an American nonprofit newsroom, using a network of volunteers in the period before the 2018 US midterm elections. We first describe the main characteristics of the data and explore the user attributes, including age, region, activity, and more, with a series of interactive illustrations. Furthermore, an important first step towards understanding political manipulation via user targeting is to identify politically related ads, yet manually checking ads is not feasible due to the scale of social media advertising. Consequently, we address the challenge of automatically classifying between political and non-political ads, demonstrating a significant improvement over the current text-based classifier used by ProPublica, and study whether the user targeting attributes are beneficial for this task. Our evaluation sheds light on questions such as how user attributes are being used for political ads targeting and which users are more prone to be targeted with political ads. Overall, our contribution of data exploration, political ad classification and initial analysis of the targeting attributes is designed to support future work with the ProPublica dataset, and specifically with regard to the understanding of political manipulation via user targeting. | Automatically Identifying Political Ads on Facebook: Towards Understanding of Manipulation via User Targeting | 9,808 |
Designed for commercial decentralized applications (DApps), EOSIO is a Delegated Proof-of-Stake (DPoS) based blockchain system. It has overcome some shortcomings of traditional blockchain systems like Bitcoin and Ethereum with its outstanding features (e.g., free usage, high throughput and eco-friendliness), and has thus become one of the mainstream blockchain systems. Though there exist billions of transactions in EOSIO, the EOSIO ecosystem is still relatively unexplored. To fill this gap, we conduct a systematic graph analysis on the early EOSIO by investigating its four major activities, namely account creation, account voting, money transfer and contract authorization. We obtain some novel observations via graph metric analysis, and our results reveal abnormal phenomena such as voting gangs and sham transactions. | Exploring EOSIO via Graph Characterization | 9,809 |
Social media makes available vast amounts of data for various types of analyses. Cities have the opportunity to explore this new data source to study urban dynamics and complement traditional data used for urban planning. We investigate Untappd social media data in the context of urban planning in Curitiba, Brazil. We analyze the project to create a Craft Beer Street, recently announced by the municipality to promote local beers in Curitiba, in order to study the potential of exploring social media data to support the planning of this project. Our results indicate that social media data could have helped to guide the decision of the Beer Street creation and can potentially become a strategic urban planning tool. | On the Potential of Social Media Data in Urban Planning: Findings from the Beer Street in Curitiba, Brazil | 9,810 |
Hashtags play a cardinal role in the classification of topics on social media. A sudden burst in the usage of certain hashtags, representing specific topics, gives rise to trending topics. Trending topics can be immensely useful, as they can spark a discussion on a particular subject. However, they can also be used to suppress an ongoing pivotal matter. This paper discusses how coverage of a significant economic crisis was covered up by triggering a new trending topic. A case study on politics in India was conducted over a two-month period. The analysis shows how the issue of inflation was overshadowed by the exercise of a new constitutional law over the media. Hashtags used to discuss the topics were scrutinized, and we notice a steep ascent of the more recent topic and an eventual drop in discussions of the previous issue of inflation. Balancing the influence of hashtags on social media can be attempted, but it can be equally challenging, since some hashtags that represent need-of-the-hour topics should be given more importance, and evaluating such issues can be hard. | War of the Hashtags: Trending New Hashtags to Override Critical Topics in Social Media | 9,811 |
We analyze the time series data of the number of districts or cities in India affected by COVID-19 from March 01, 2020 to April 17, 2020. We study the data in the framework of time series network data. The networks are defined by using the geodesic distances between the districts or cities, specified by their latitude and longitude coordinates. We restrict our analysis to all districts except those in the north-eastern part of India. Unlike recent studies on the projection of the number of people infected with SARS-CoV-2 in the near future, in this note the emphasis is on understanding the dynamics of the spread of the virus across the districts of India. We perform spectral and structural analysis of the model networks by considering several measures, notably the spectral radius, the algebraic connectivity, the average clustering coefficient, the average path length and the structure of the communities. Furthermore, we study the overall expansion properties given by the number of districts or cities before and after lockdown. These studies show that lockdown has a significant impact on the spread of SARS-CoV-2 in districts or cities over long distances. However, this impact is only observed after approximately two weeks of lockdown. We speculate that this happened due to the insufficient number of tests for COVID-19 before the lockdown, which could not stop the movement of people infected with the virus, but not detected, over long distances. | Why lockdown: On the spread of SARS-CoV-2 in India, a network approach | 9,812 |
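The row above defines networks from geodesic distances between district coordinates but does not spell out the edge rule. A hedged sketch of one plausible construction (a threshold graph on great-circle distances); `districts` and `threshold_km` are hypothetical inputs, not taken from the paper:

```python
import math
import networkx as nx

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres (a standard geodesic approximation)."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def distance_graph(districts, threshold_km):
    """Connect two districts when their geodesic distance falls below a threshold.
    The threshold rule is an assumption; the abstract only says networks are
    defined using geodesic distances between coordinates."""
    g = nx.Graph()
    g.add_nodes_from(districts)
    names = list(districts)
    for i, u in enumerate(names):
        for v in names[i + 1:]:
            if haversine_km(*districts[u], *districts[v]) <= threshold_km:
                g.add_edge(u, v)
    return g
```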
The novel coronavirus (COVID-19) pandemic outbreak is drastically shaping and reshaping many aspects of our life, with a huge impact on our social life. In this era of lockdown policies in most of the major cities around the world, we see a huge increase in personal and professional engagement on social media. Social media is playing an important role in news propagation as well as in keeping people in contact. At the same time, this source is both a blessing and a curse, as the coronavirus infodemic has become a major concern and is already a topic that needs special attention and further research. In this paper, we provide a multilingual coronavirus (COVID-19) Instagram dataset that we have been continuously collecting since March 30, 2020. We are making our dataset available to the research community on GitHub. We believe that this contribution will help the community better understand the dynamics behind this phenomenon on Instagram, one of the major social media platforms. This dataset could also help study the propagation of misinformation related to this outbreak. | A First Instagram Dataset on COVID-19 | 9,813 |
Using network analysis, psychologists have already found nontrivial correlations between personality and social network structure. Despite the large number of empirical studies, theoretical analyses and formal models behind such relationships are still lacking. To bridge this gap, we propose a generative model for friendship networks based on personality traits. To the best of our knowledge, this is the first work to explicitly introduce the concepts of personality and friendship development into a social network model, with supporting insights from social and personality psychology. We use the model to investigate the effect of two personality traits, extraversion and agreeableness, on network structure. Analytical and simulation results both concur with recent empirical evidence that extraversion and agreeableness are positively correlated with degree. Using this model, we show that the effect of personality on friendship development can amount to an effect of personality on friendship network structure. | Modeling Friendship Networks among Agents with Personality Traits | 9,814 |
For years, many studies have employed sentiment analysis to understand the reasoning behind people's choices and feelings, their communication styles, and the communities to which they belong. We argue that gaining more in-depth insight into moral dimensions coupled with sentiment analysis can potentially provide superior results. Understanding moral foundations can yield powerful results in terms of perceiving the intended meaning of text data, as the concept of morality provides additional information on the unobservable characteristics of information processing and non-conscious cognitive processes. Therefore, we studied the latent moral loadings of Syrian White Helmets-related tweets of Twitter users from April 1st, 2018 to April 30th, 2019. For the operationalization and quantification of moral rhetoric in tweets, we use the Extended Moral Foundations Dictionary, in which five psychological dimensions (Harm/Care, Fairness/Reciprocity, In-group/Loyalty, Authority/Respect and Purity/Sanctity) are considered. We show that people tend to share more tweets involving virtue moral rhetoric than tweets involving vice rhetoric. We observe that the pattern of the moral rhetoric of tweets across these five dimensions is very similar during different time periods, while the strengths of the five dimensions are time-variant. Even though there is no significant difference between the use of Fairness/Reciprocity, In-group/Loyalty or Purity/Sanctity rhetoric, the lesser use of Harm/Care rhetoric is significant and remarkable. Moreover, the strength of the moral rhetoric and the polarization in morality across people are mostly observed in tweets involving Harm/Care rhetoric, even though the number of tweets involving the Harm/Care dimension is low. | Quantifying Latent Moral Foundations in Twitter Narratives: The Case of the Syrian White Helmets Misinformation | 9,815 |
Due to the nature of the data and the public interaction, Twitter is becoming more and more useful for understanding and modeling various events. The goal of CoronaVis is to use tweets as the information shared by people to visualize topic modeling, study subjectivity, and model human emotions during the COVID-19 pandemic. The main objective is to explore the psychology and behavior of society at large, which can assist in managing the economic and social crisis during the ongoing pandemic as well as its after-effects. The novel coronavirus (COVID-19) pandemic forced people to stay at home to reduce the spread of the virus by maintaining social distancing. However, social media is keeping people connected both locally and globally. People are sharing information (e.g., personal opinions, facts, news, status updates) on social media platforms, which can be helpful for understanding various aspects of public behavior such as emotions, sentiments, and mobility during the ongoing pandemic. In this work, we develop a live application to observe tweets on COVID-19 generated from the USA, and we have generated various data analytics over a period of time to study the changes in topics, subjectivity, and human emotions. We also share a cleaned and processed dataset named the CoronaVis Twitter dataset (focused on the United States), available to the research community at https://github.com/mykabir/COVID19. This will enable the community to find more useful insights and create different applications and models to fight the COVID-19 pandemic as well as future pandemics. | CoronaVis: A Real-time COVID-19 Tweets Data Analyzer and Data Repository | 9,816 |
The habilitation thesis presents two main directions: 1. Exploiting data from social networks (Twitter, Facebook, Flickr, etc.) - creating resources for text and image processing (classification, retrieval, credibility, diversification, etc.); 2. Creating applications with new technologies: augmented reality (eLearning, games, smart museums, gastronomy, etc.), virtual reality (eLearning and games), speech processing with Amazon Alexa (eLearning, entertainment, IoT, etc.). The work was validated with good results in evaluation campaigns like CLEF (Question Answering, Image CLEF, LifeCLEF, etc.) and SemEval (Sentiment and Emotion in text, Anorexia, etc.). | Exploiting Social Networks. Technological Trends (Habilitation Thesis) | 9,817 |
The emergence of online enterprises spread across continents has given rise to the need for expert identification in this domain. This article addresses scenarios in which an employer intends to find tacit expertise and knowledge of an employee that is not documented or self-disclosed. The existing reputation-based approaches towards expertise ranking in enterprises utilize PageRank, normal distributions, and hidden Markov models. These models suffer from issues of negative referral, collusion, reputation inflation, and dynamism. The authors therefore propose a Bayesian approach utilizing a beta-probability-distribution-based reputation model for employee ranking in enterprises. The experimental results reveal improved performance compared to previous techniques in terms of Precision and Mean Average Error (MAE), with an almost 7% improvement in precision on average over the three data sets. The proposed technique is able to differentiate categories of interactions in a dynamic context. The results reveal that the technique is independent of the rating pattern and density of the data. | EER: Enterprise Expert Ranking using Employee Reputation | 9,818 |
With the development of the Internet, social media has become an important channel for posting disaster-related information. Analyzing the attitudes hidden in these texts, known as sentiment analysis, is crucial for the government and relief agencies to improve disaster response efficiency, but it has not received sufficient attention. This paper aims to fill this gap by investigating attitudes towards disaster response and analyzing targeted relief supplies during disaster response. The contributions of this paper are fourfold. First, we propose several machine learning models for classifying public sentiment in disaster-related social media data. Second, we create a natural disaster dataset with sentiment labels, which contains nearly 50,000 tweets about different natural disasters in the United States (e.g., a tornado in 2011, Hurricane Sandy in 2012, a series of floods in 2013, Hurricane Matthew in 2016, a blizzard in 2016, Hurricane Harvey in 2017, Hurricane Michael in 2018, a series of wildfires in 2018, and Hurricane Dorian in 2019). We are making our dataset available to the research community: https://github.com/Dong-UTIL/Natural-Hazards-Twitter-Dataset. It is our hope that our contribution will enable the study of sentiment analysis in disaster response. Third, we focus on extracting public attitudes and analyzing the essential needs (e.g., food, housing, transportation, and medical supplies) of the public during disaster response, instead of merely studying positive or negative attitudes of the public towards natural disasters. Fourth, we conduct this research from two different dimensions for a comprehensive understanding of public opinion on disaster response, since different types of natural disasters cause disparate hazards. | Natural Hazards Twitter Dataset | 9,819 |
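The abstract above does not name its machine learning models, so the following is only a generic baseline sketch for tweet sentiment classification (TF-IDF features plus logistic regression); the example tweets and labels are invented stand-ins for the dataset linked above:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in data; the real labeled tweets live at the GitHub link above.
tweets = ["Power is out and the streets are flooded, please send help",
          "Grateful for the volunteers handing out food and water today"]
labels = ["negative", "positive"]

# Unigram + bigram TF-IDF features feeding a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(tweets, labels)
print(clf.predict(["shelters are full and we still need medical supplies"]))
```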
In recent months, COVID-19 has become a global pandemic and has had a huge impact on the world. People under different conditions have very different attitudes toward the epidemic. Due to the real-time and large-scale nature of social media, we can continuously obtain a massive amount of public opinion information related to the epidemic from social media. In particular, researchers may ask questions such as "how is the public reacting to COVID-19 in China during different stages of the pandemic?", "what factors affect the public opinion orientation in China?", and so on. To answer such questions, we analyze pandemic-related public opinion information on Weibo, China's largest social media platform. Specifically, we first collected a large number of COVID-19-related public opinion microblogs. We then use a sentiment classifier to recognize and analyze the opinions of different groups of users. In the collected sentiment-oriented microblogs, we track public opinion through the different stages of the COVID-19 pandemic. Furthermore, we analyze key factors that might have an impact on public opinion of COVID-19 (e.g., users in different provinces or users with different education levels). Empirical results show that public opinion varies along with the key factors of COVID-19. Furthermore, we analyze public attitudes on different topics of public concern, such as staying at home and quarantine. | Tracking Public Opinion in China through Various Stages of the COVID-19 Pandemic | 9,820 |
Finding dense components in graphs is of great importance in analyzing the structure of networks. Popular and computationally feasible frameworks for discovering dense subgraphs are core and truss decompositions. Recently, Sariyuce et al. introduced nucleus decomposition, a generalization which uses higher-order structures and can reveal interesting subgraphs that are missed by core and truss decompositions. In this paper, we present nucleus decomposition in probabilistic graphs. We study the most interesting case of nucleus decomposition, the k-(3,4)-nucleus, which asks for maximal subgraphs where each triangle is contained in k 4-cliques. The major questions we address are: How can nucleus decomposition be meaningfully defined in probabilistic graphs? How hard is computing nucleus decomposition in probabilistic graphs? Can we devise efficient algorithms for exact or approximate nucleus decomposition in large graphs? We present three natural definitions of nucleus decomposition in probabilistic graphs: local, global, and weakly-global. We show that the local version is in PTIME, whereas the global and weakly-global versions are #P-hard and NP-hard, respectively. We present an efficient and exact dynamic programming approach for the local case and, furthermore, present statistical approximations that can scale to large datasets without much loss of accuracy. For global and weakly-global decompositions, we complement our intractability results by proposing efficient algorithms that give approximate solutions based on search space pruning and Monte-Carlo sampling. Our extensive experimental results show the scalability and efficiency of our algorithms. Nucleus decomposition significantly outperforms probabilistic core and truss decompositions in terms of density and clustering metrics. | Nucleus Decomposition in Probabilistic Graphs: Hardness and Algorithms | 9,821 |
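As background for the decomposition family mentioned above: networkx ships only the simplest member, the k-core (the maximal subgraph in which every node has degree at least k). The sketch below is a point of reference only; the paper's k-(3,4)-nucleus and its probabilistic variants generalize this idea to triangles contained in 4-cliques and are not reproduced here:

```python
import networkx as nx

g = nx.karate_club_graph()
core = nx.core_number(g)  # node -> largest k such that the node is in a k-core
k_max = max(core.values())
print(k_max, sorted(nx.k_core(g, k=k_max).nodes()))
```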
Collaborative consensus-finding is an integral element of many Web services and greatly determines the quality of information, content, and products that are available through the Web. That also means that the dynamics of democratic consensus-finding strengthen collective resilience against potential threats that attempt to degrade information, content, and products and thereby affect Web data, users, and behaviors, as well as offline life. Even on Web platforms that are open to all, the influence of some first-mover authors may shape future discussion and collaboration, which is comparable to academic citation networks, for instance. In a social coding network such as GitHub, the activities of a set of users can influence other users, who may become interested in further actions, possibly contributing to a new project together with influential users. In this paper, we analyze the effect of contribution activities on gaining influence in this and comparable networks that provide users with the functionality for reaching collaborative goals on the Web. For this purpose, we present an empirical approach to identify the top influential users by using network features and contribution characteristics, which we find in existing and newly collected data sets. We find that early adopter dynamics exist in the GitHub community, where early adopters have more followers in the end, as expected. However, we also see counterexamples that arise due to the social networking behavior of late adopters, and due to the aging effect of older repositories and users. We publicly share the source code and the data sets for reproducing our paper. | Does the First Mover Advantage Exist on GitHub? | 9,822 |
Online social networks have been one of the most important platforms for viral marketing. Most existing research on the diffusion of adoptions of new products on networks concerns a single diffusion; that is, only one piece of information about the product is spread on the network. In fact, however, one product may have multiple features, and the information about different features may spread independently in a social network. When users would like to purchase the product, they consider all of the features of the product comprehensively, not just one. Based on this, we propose a novel problem, the multi-feature budgeted profit maximization (MBPM) problem, which is the first to consider budgeted profit maximization under multiple-feature propagation of one product. Given a social network with each node having an activation cost and a profit, the MBPM problem seeks a seed set with expected cost no more than the budget that makes the total expected profit as large as possible. We consider the MBPM problem under the adaptive setting, where seeds are chosen iteratively and the next seed is selected according to current diffusion results. We study the adaptive MBPM problem under two models, an oracle model and a noise model. The oracle model assumes that the conditional expected marginal profit of any node can be obtained in O(1) time, and a $(1-1/e)$ expected approximation policy is proposed. Under the noise model, we estimate the conditional expected marginal profit of a node by modifying the EPIC algorithm and propose an efficient policy, which returns a $(1-\exp(\varepsilon-1))$ expected approximation ratio. Several experiments are conducted on six realistic datasets to compare our proposed policies with their corresponding non-adaptive algorithms and some heuristic adaptive policies. Experimental results show the efficiency and superiority of our policies. | Adaptive Multi-Feature Budgeted Profit Maximization in Social Networks | 9,823 |
As the novel coronavirus spread globally, a growing public panic was expressed over the internet. We examine the public discussion concerning COVID-19 on Twitter. We use a dataset of 67 million tweets from 12 million users collected between January 29, 2020 and March 4, 2020. We categorize users based on their home countries, social identities, and political orientation. We find that news media, government officials, and individual news reporters posted a majority of influential tweets, while the most influential ones are still written by regular users. Tweets mentioning "fake news" URLs and disinformation story-lines are also more likely to be spread by regular users. Unlike real news and normal tweets, tweets containing URLs pointing to "fake news" sites are most likely to be retweeted within the source country and so are less likely to spread internationally. | Disinformation and Misinformation on Twitter during the Novel Coronavirus Outbreak | 9,824 |
Anecdotally, social connections made in university have life-long impact. Yet knowledge of social networks formed in college remains episodic, due in large part to the difficulty and expense involved in collecting a suitable dataset for comprehensive analysis. To advance and systematize insight into college social networks, we describe a dataset of the largest online social network platform used by college students in the United States. We combine de-identified and aggregated Facebook data with College Scorecard data, campus-level information provided by the U.S. Department of Education, to produce a dataset covering the 2008-2015 entry-year cohorts for 1,159 U.S. colleges and universities, spanning 7.6 million students. To perform the difficult task of comparing these networks of different sizes, we develop a new methodology. We compute features over sampled ego-graphs, train binary classifiers for every pair of graphs, and operationalize distance between graphs as predictive accuracy. Social networks of different year cohorts at the same school are structurally more similar to one another than to cohorts at other schools. Networks from similar schools have similar structures, with the public/private and graduation-rate dimensions being the most distinguishable. We also relate school types to specific outcomes. For example, students at private schools have larger networks that are more clustered and with higher homophily by year. Our findings may help illuminate the role that colleges play in shaping social networks, which partly persist throughout people's lives. | The Structure of U.S. College Networks on Facebook | 9,825 |
In a scenario where there is no vaccine for COVID-19, non-pharmaceutical interventions are necessary to contain the spread of the virus and prevent the collapse of the health system in the affected regions. One of these measures is social distancing, which aims to reduce interactions in the community by closing public and private establishments that involve crowds of people. A lockdown presupposes a drastic reduction in community interactions, representing a more extreme measure of social distancing. Based on geolocation data provided by Google for six categories of physical spaces, this article identifies the variations in the circulation of people in South America for different types of social distancing measures adopted during the COVID-19 pandemic. In this study, population mobility trends for a group of countries between February 15, 2020 and May 16, 2020 were analyzed. To summarize these trends in a single metric, a general circulation index was created, and to identify regional mobility patterns, descriptive analyses of spatial autocorrelation (global and local Moran indices) were used. The first hypothesis of this study is that countries with a lockdown decree can achieve greater success in reducing the mobility of the population, and the second hypothesis is that Argentina, Brazil and Colombia have regional mobility patterns. The first hypothesis was partially confirmed (considering 10 countries in South America), and the results obtained in the spatial analyses confirmed the second hypothesis. In general, the observed data show that less rigid lockdown or social distancing measures are necessary; however, they are not sufficient to achieve a significant reduction in the circulation of people during the pandemic. | Social Distancing Measures and Mobility in South America during the COVID-19 Pandemic: Necessary and Sufficient Conditions? | 9,826 |
Modeling user engagement dynamics on social media has compelling applications in user-persona detection and political discourse mining. Most existing approaches depend heavily on knowledge of the underlying user network. However, a large number of discussions happen on platforms that either lack any reliable social network or reveal only partially the inter-user ties (Reddit, Stackoverflow). Many approaches require observing a discussion for some considerable period before they can make useful predictions. In real-time streaming scenarios, observations incur costs. Lastly, most models do not capture complex interactions between exogenous events (such as news articles published externally) and in-network effects (such as follow-up discussions on Reddit) to determine engagement levels. To address the three limitations noted above, we propose a novel framework, ChatterNet, which, to our knowledge, is the first that can model and predict user engagement without considering the underlying user network. Given streams of timestamped news articles and discussions, the task is to observe the streams for a short period leading up to a time horizon, then predict chatter: the volume of discussions through a specified period after the horizon. ChatterNet processes text from news and discussions using a novel time-evolving recurrent network architecture that captures both temporal properties within news and discussions, as well as the influence of news on discussions. We report on extensive experiments using a two-month-long discussion corpus of Reddit, and a contemporaneous corpus of online news articles from the Common Crawl. ChatterNet shows considerable improvements beyond recent state-of-the-art models of engagement prediction. Detailed studies controlling observation and prediction windows, over 43 different subreddits, yield further useful insights. | Deep Exogenous and Endogenous Influence Combination for Social Chatter Intensity Prediction | 9,827 |
During the COVID-19 pandemic, multiple aspects of human life were subjected to unprecedented changes, globally. In Sri Lanka, a developing country located in South Asia, it was possible to observe a range of events that arose due to the influence of the COVID-19 virus outbreak. The people of Sri Lanka used social media to voice their opinions regarding such events and those involved in them, providing an ideal avenue to explore social perception. However, the outcome of such actions was at certain times detrimental. This study was conducted as an attempt to identify the reasons for such instances, as well as to identify the behaviours of the Sri Lankan populace during such a crisis event. To support this study, observations and data on related posts from a sample of 50 sources were manually collected from the most popular social media platform in Sri Lanka, Facebook. The posts considered spanned until approximately a month after the initial major virus outbreak in the country and contained content that even vaguely related to the virus. Using these data, various forms of analyses, such as topic significance and topic co-occurrence, were conducted. The findings highlight that, while socially detrimental ideas can be shared, the majority of the posts convey constructive and positive thoughts, suggesting the successful influence of the cultural and social values Sri Lankan society promotes. | Exploratory Analysis of a Social Media Network in Sri Lanka during the COVID-19 Virus Outbreak | 9,828 |
Attention to informal communication networks within public organizations has grown in recent decades. While research has documented the role of individual cognition and social structure in understanding information search in organizations, this article emphasizes the importance of formal hierarchy. We argue that the structural attributes of bureaucracies are too important to be neglected when modeling knowledge flows in public organizations. Empirically, we examine interpersonal information-seeking patterns among 143 employees in a small city government, using exponential random graph modeling (ERGM). The results suggest that formal structure strongly shapes information search patterns while accounting for social network variables and individual-level perceptions. We find that formal status, permission pathways, and departmental membership all affect the information search of employees. Understanding the effects of organizational structure on information search networks will offer opportunities to improve information flows in public organizations via design choices. | Formal Hierarchies and Informal Networks: How Organizational Structure Shapes Information Search in Local Government | 9,829 |
As the COVID-19 pandemic swept over the world, people discussed facts, expressed opinions, and shared sentiments on social media. Since the reaction to COVID-19 in different locations may be tied to local cases, government regulations, healthcare resources and socioeconomic factors, we curated a large geo-tagged Twitter dataset and performed exploratory analysis by location. Specifically, we collected 650,563 unique geo-tagged tweets across the United States (50 states and Washington, D.C.) covering the date range from January 25 to May 10, 2020. Tweet locations enabled us to conduct region-specific studies such as tweeting volumes and sentiment, sometimes in response to local regulations and reported COVID-19 cases. During this period, many people started working from home. The gap between workdays and weekends in hourly tweet volumes inspired us to propose algorithms to estimate work engagement during the COVID-19 crisis. This paper also summarizes themes and topics of tweets in our dataset using both social media exclusive tools (i.e., #hashtags, @mentions) and the latent Dirichlet allocation model. We welcome requests for data sharing and conversations for more insights. Dataset link: http://covid19research.site/geo-tagged_twitter_datasets/ | Is Working From Home The New Norm? An Observational Study Based on a Large Geo-tagged COVID-19 Twitter Dataset | 9,830 |
Unbiased shuffling algorithms, such as the Fisher-Yates shuffle, are often used for shuffle play in media players. These algorithms treat all items being shuffled equally regardless of how similar the items are to each other. While this may be desirable for many applications, this is problematic for shuffle play due to the clustering illusion, which is the tendency for humans to erroneously consider 'streaks' or 'clusters' that may arise from samplings of random distributions to be non-random. This thesis attempts to address this issue with a family of biased shuffling algorithms called cluster diffusing (CD) shuffles which are based on disordered hyperuniform systems such as the distribution of cone cells in chicken eyes, the energy levels of heavy atomic nuclei, the eigenvalue distributions of various types of random matrices, and many others which appear in a variety of biological, chemical, physical, and mathematical settings. These systems suppress density fluctuations at large length scales without appearing ordered like lattices, making them ideal for shuffle play. The CD shuffles range from a random matrix based shuffle which takes $O(n^3)$ time and $O(n^2)$ space to more efficient approximations which take $O(n)$ time and $O(n)$ space. | Cluster Diffusing Shuffles | 9,831 |
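For reference, the unbiased baseline this thesis contrasts against is the Fisher-Yates shuffle; a standard Python implementation follows (the biased CD shuffles themselves are not reproduced here):

```python
import random

def fisher_yates(items):
    """Unbiased shuffle: every permutation of the input is equally likely."""
    a = list(items)
    for i in range(len(a) - 1, 0, -1):
        j = random.randint(0, i)   # uniform over 0..i inclusive
        a[i], a[j] = a[j], a[i]
    return a

print(fisher_yates(range(10)))
```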
It is a widely accepted fact that state-sponsored Twitter accounts operated during the 2016 US presidential election, spreading millions of tweets with misinformation and inflammatory political content. Whether these social media campaigns of the so-called "troll" accounts were able to manipulate public opinion is still in question. Here, we quantify the influence of troll accounts on Twitter by analyzing 152.5 million tweets (by 9.9 million users) from that period. The data contain original tweets from 822 troll accounts identified as such by Twitter itself. We construct and analyse a very large interaction graph of 9.3 million nodes and 169.9 million edges using graph analysis techniques, along with a game-theoretic centrality measure. Then, we quantify the influence of all Twitter accounts on the overall information exchange as defined by the retweet cascades. We provide a global influence ranking of all Twitter accounts, and we find that one troll account appears in the top-100 and four in the top-1000. This, combined with other findings presented in this paper, constitutes evidence that the driving force of virality and influence in the network came from regular users, i.e., users who have not been classified as trolls by Twitter. On the other hand, we find that, on average, troll accounts were tens of times more influential than regular users. Moreover, 23% and 22% of regular accounts in the top-100 and top-1000, respectively, have now been suspended by Twitter. This raises questions about their authenticity and practices during the 2016 US presidential election. | Did State-sponsored Trolls Shape the 2016 US Presidential Election Discourse? Quantifying Influence on Twitter | 9,832 |
Many prediction problems on social networks, from recommendations to anomaly detection, can be approached by modeling network data as a sequence of relational events and then leveraging the resulting model for prediction. Conditional logit models of discrete choice are a natural approach to modeling relational events as "choices" in a framework that envelops and extends many long-studied models of network formation. The conditional logit model is simplistic, but it is particularly attractive because it allows for efficient consistent likelihood maximization via negative sampling, something that isn't true for mixed logit and many other richer models. The value of negative sampling is particularly pronounced because choice sets in relational data are often enormous. Given the importance of negative sampling, in this work we introduce a model simplification technique for mixed logit models that we call "de-mixing", whereby standard mixture models of network formation---particularly models that mix local and global link formation---are reformulated to operate their modes over disjoint choice sets. This reformulation reduces mixed logit models to conditional logit models, opening the door to negative sampling while also circumventing other standard challenges with maximizing mixture model likelihoods. To further improve scalability, we also study importance sampling for more efficiently selecting negative samples, finding that it can greatly speed up inference in both standard and de-mixed models. Together, these steps make it possible to much more realistically model network formation in very large graphs. We illustrate the relative gains of our improvements on synthetic datasets with known ground truth as well as a large-scale dataset of public transactions on the Venmo platform. | Scaling Choice Models of Relational Social Data | 9,833 |
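A schematic of the negative-sampling idea the abstract leans on: the negative log-likelihood of one conditional-logit choice evaluated over the chosen item plus a handful of uniformly sampled alternatives instead of the full (enormous) choice set. All names here are illustrative, and this is not the paper's de-mixing construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def sampled_logit_nll(theta, x_all, chosen_idx, k=5):
    """Negative log-likelihood of one conditional-logit choice, with the full
    choice set replaced by the chosen item plus k uniformly sampled negatives.
    Uniform negative sampling keeps likelihood maximization consistent for the
    conditional logit (the property the abstract highlights)."""
    negatives = rng.choice(np.delete(np.arange(len(x_all)), chosen_idx),
                           size=k, replace=False)
    x_set = np.vstack([x_all[chosen_idx], x_all[negatives]])  # chosen item first
    scores = x_set @ theta
    return -(scores[0] - np.log(np.exp(scores).sum()))

# Tiny example: 100 alternatives with 3 features each, item 7 was chosen.
x_all = rng.normal(size=(100, 3))
print(sampled_logit_nll(np.array([0.5, -1.0, 2.0]), x_all, chosen_idx=7))
```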
Hundreds of millions of Chinese people have become social network users in recent years, and aligning the accounts of common Chinese users across multiple social networks is valuable to many inter-network applications, e.g., cross-network recommendation and cross-network link prediction. Many methods have explored the proper ways of utilizing account name information in aligning the accounts of common English users. However, how to properly utilize account name information when aligning Chinese user accounts remains to be studied in detail. In this paper, we first discuss the available naming behavioral models as well as the related features for different types of Chinese account name matchings. Second, we propose the framework of the Multi-View Cross-Network User Alignment (MCUA) method, which uses a multi-view framework to creatively integrate different models to deal with different types of Chinese account name matchings, and can consider all of the studied features when aligning Chinese user accounts. Finally, we conduct experiments to show that MCUA can outperform many existing methods in aligning Chinese user accounts between Sina Weibo and Twitter. Besides, we also study the best learning models and the top-k most valuable features of different types of name matchings for MCUA over our experimental data sets. | A Multi-View Approach Based on Naming Behavioral Modeling for Aligning Chinese User Accounts across Multiple Networks | 9,834 |
Many real-world systems involve higher-order interactions and thus demand complex models such as hypergraphs. For instance, a research article could have multiple collaborating authors, and therefore the co-authorship network is best represented as a hypergraph. In this work, we focus on the problem of hyperedge prediction. This problem has immense applications in multiple domains, such as predicting new collaborations in social networks, discovering new chemical reactions in metabolic networks, etc. Despite its significant importance, the problem of hyperedge prediction has not received adequate attention, mainly because of its inherent complexity. In a graph with $n$ nodes the number of potential edges is $\mathcal{O}(n^{2})$, whereas in a hypergraph, the number of potential hyperedges is $\mathcal{O}(2^{n})$. To avoid searching through such a huge space, current methods restrict the original problem in the following two ways. One class of algorithms assumes the hypergraphs to be $k$-uniform. However, many real-world systems are not confined to interactions involving exactly $k$ components. Thus, these algorithms are not suitable for many real-world applications. The second class of algorithms requires a candidate set of hyperedges from which the potential hyperedges are chosen. In the absence of domain knowledge, the candidate set can have $\mathcal{O}(2^{n})$ possible hyperedges, which makes this problem intractable. We propose HPRA - Hyperedge Prediction using Resource Allocation - the first algorithm of its kind, which overcomes these issues and predicts hyperedges of any cardinality without using any candidate hyperedge set. HPRA is a similarity-based method working on the principles of the resource allocation process. In addition to recovering missing hyperedges, we demonstrate that HPRA can predict future hyperedges in a wide range of hypergraphs. | HPRA: Hyperedge Prediction using Resource Allocation | 9,835 |
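For background, the classic resource allocation index that HPRA builds on, for a pair of nodes: each common neighbour $z$ contributes $1/\deg(z)$ to the pair's score. The hyperedge-scoring generalization that is the paper's actual contribution is not reproduced here:

```python
import networkx as nx

def resource_allocation(g, x, y):
    """Classic RA similarity: each common neighbour z of x and y 'sends'
    1/deg(z) units of resource to the pair."""
    return sum(1.0 / g.degree(z) for z in nx.common_neighbors(g, x, y))

g = nx.karate_club_graph()
print(resource_allocation(g, 0, 33))
```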
Digital traces are often used as a substitute for survey data. However, it is unclear whether and how digital traces actually correspond to the survey-based traits they purport to measure. This paper examines correlations between self-reports and digital trace proxies of depression, anxiety, mood, social integration and sleep among high school students. The study is based on a small but rich multilayer data set (N = 144). The data set contains mood and sleep measures, assessed daily over a 4-month period, along with survey measures at two points in time and information about online activity from VK, the most popular social networking site in Russia. Our analysis indicates that 1) the sentiments expressed in social media posts are correlated with depression; namely, adolescents with more severe symptoms of depression write more negative posts, 2) late-night posting indicates less sleep and poorer sleep quality, and 3) students who were nominated less often as somebody's friend in the survey have fewer friends on VK and their posts receive fewer "likes." However, these correlations are generally weak. These results demonstrate that digital traces can serve as useful supplements to, rather than substitutes for, survey data in studies on adolescents' well-being. These estimates of correlations between survey and digital trace data could provide useful guidelines for future research on the topic. | Measuring Adolescents' Well-being: Correspondence of Naive Digital Traces to Survey Data | 9,836 |
The increased use of online social networks for the dissemination of information comes with the misuse of the internet for cyberbullying, cybercrime, spam, and vandalism, amongst other things. To proactively identify abuse in the networks, we propose a model to identify abusive posts by crowdsourcing. The crowdsourcing part of the detection mechanism is implemented implicitly, by simply observing the natural interaction between users encountering the messages. We explore the node-to-node spread of information on Twitter and propose a model that predicts the abuse level (abusive, hate, spam, normal) associated with a tweet by observing the attributes of the message, along with those of the users interacting with it. We demonstrate that the difference in users' interactions with abusive posts can be leveraged in identifying posts of varying abuse levels. | Implicit Crowdsourcing for Identifying Abusive Behavior in Online Social Networks | 9,837 |
Accurately analyzing graph properties of social networks is a challenging task because of access limitations to the graph data. To address this challenge, several algorithms that obtain unbiased estimates of properties from few samples via a random walk have been studied. However, existing algorithms do not consider private nodes, who hide their neighbors in real social networks, leading to some practical problems. Here we design random walk-based algorithms to accurately estimate properties without any problems caused by private nodes. First, we design a random walk-based sampling algorithm that comprises neighbor selection, to obtain samples having the Markov property, and the calculation of weights for each sample, to correct the sampling bias. Further, for two graph property estimators, we propose weighting methods to reduce not only the sampling bias but also the estimation errors due to private nodes. The proposed algorithms improve the estimation accuracy of the existing algorithms by up to 92.6% on real-world datasets. | Estimating Properties of Social Networks via Random Walk considering Private Nodes | 9,838 |
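The standard degree-based re-weighting underlying such random-walk estimators, sketched without the paper's extra handling of private nodes: a simple random walk visits nodes proportionally to their degree, so weighting each sample by $1/\deg(v)$ recovers uniform averages:

```python
import random
import networkx as nx

def rw_estimate(g, f, steps=10_000, start=None):
    """Estimate the uniform mean of f over nodes from a simple random walk.
    The walk's stationary distribution is proportional to degree, so each
    sample is importance-weighted by 1/deg(v) to correct the bias."""
    v = random.choice(list(g)) if start is None else start
    num = den = 0.0
    for _ in range(steps):
        v = random.choice(list(g[v]))   # move to a uniform random neighbour
        w = 1.0 / g.degree(v)
        num += w * f(v)
        den += w
    return num / den

g = nx.karate_club_graph()
print(rw_estimate(g, lambda v: g.degree(v)))   # approximates the average degree
```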
COVID-19 has become one of the most widely talked about topics on social media. This research characterizes risk communication patterns by analyzing the public discourse on the novel coronavirus in four Asian countries: South Korea, Iran, Vietnam, and India, which suffered the outbreak to different degrees. The temporal analysis shows that the official epidemic phases issued by governments do not match well with the online attention on COVID-19. This finding calls for analyzing the public discourse with new measures, such as topical dynamics. Here, we propose an automatic method to detect topical phase transitions and compare similarities in major topics across these countries over time. We examine the time-lag difference between social media attention and confirmed patient counts. Regarding dynamics, we find an inverse relationship between the tweet count and topical diversity. | Risk Communication in Asian Countries: COVID-19 Discourse on Twitter | 9,839 |
The use of social media (SM) data has emerged as a promising tool for the assessment of cultural ecosystem services (CES). Most studies have focused on the use of single SM platforms and on the analysis of photo content to assess the demand for CES. Here, we introduce a novel methodology for the assessment of CES using SM data through the application of graph theory network analyses (GTNA) on hashtags associated with SM posts and compare it to photo content analysis. We applied the proposed methodology on two SM platforms, Instagram and Twitter, on three worldwide known case study areas, namely the Great Barrier Reef, the Galapagos Islands and Easter Island. Our results indicate that the analysis of hashtags through graph theory offers similar capabilities to photo content analysis in the assessment of CES provision and the identification of CES providers. More importantly, GTNA provides greater capabilities at identifying relational values and eudaimonic aspects associated with nature, elusive aspects for photo content analysis. In addition, GTNA contributes to the reduction of the interpreter's bias associated with photo content analyses, since GTNA is based on the tags provided by the users themselves. The study also highlights the importance of considering data from different social media platforms, as the type of users and the information offered by these platforms can show different CES attributes. The ease of application and short computing processing times involved in the application of GTNA makes it a cost-effective method with the potential of being applied to large geographical scales. | Using graph theory and social media data to assess cultural ecosystem
services in coastal areas: Method development and application | 9,840 |
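A minimal sketch (not the authors' pipeline) of this kind of hashtag network analysis: build a co-occurrence graph from post hashtags and rank tags by weighted degree. The `posts` data is a hypothetical placeholder.

```python
from itertools import combinations
import networkx as nx

posts = [
    ["#reef", "#diving", "#nature"],
    ["#reef", "#coral", "#nature"],
    ["#sunset", "#nature"],
]

G = nx.Graph()
for tags in posts:
    for a, b in combinations(sorted(set(tags)), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1   # count co-occurrences in the same post
        else:
            G.add_edge(a, b, weight=1)

# Weighted degree highlights hashtags that co-occur most often, one possible
# proxy for prominent cultural ecosystem service attributes.
print(sorted(G.degree(weight="weight"), key=lambda x: -x[1]))
```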
We propose a novel probabilistic framework to model continuous-time interaction events data. Our goal is to infer the \emph{implicit} community structure underlying the temporal interactions among entities, and also to exploit how the community structure influences the interaction dynamics among these nodes. To this end, we model the reciprocating interactions between individuals using mutually-exciting Hawkes processes. The base rate of the Hawkes process for each pair of individuals is built upon the latent representations inferred using the hierarchical gamma process edge partition model (HGaP-EPM). In particular, our model allows the interaction dynamics between each pair of individuals to be modulated by their respective affiliated communities. Moreover, our model can flexibly incorporate the auxiliary individuals' attributes, or covariates associated with interaction events. Efficient Gibbs sampling and Expectation-Maximization algorithms are developed to perform inference via P\'olya-Gamma data augmentation strategy. Experimental results on real-world datasets demonstrate that our model not only achieves competitive performance for temporal link prediction compared with state-of-the-art methods, but also discovers interpretable latent structure behind the observed temporal interactions. | The Hawkes Edge Partition Model for Continuous-time Event-based Temporal
Networks | 9,841 |
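For readers unfamiliar with mutually-exciting Hawkes processes, here is a minimal simulation sketch using Ogata's thinning algorithm with exponential kernels; the base rates `mu` and excitation matrix `alpha` are illustrative placeholders, not quantities derived from the paper's HGaP-EPM construction.

```python
import math
import random

def intensity(i, t, events, mu, alpha, beta):
    # Conditional intensity of process i at time t (events at exactly t are
    # included, so this also serves as the right-limit used for the bound).
    return mu[i] + sum(
        alpha[i][j] * math.exp(-beta * (t - s))
        for j in (0, 1) for s in events[j] if s <= t
    )

def simulate_hawkes(mu, alpha, beta, T):
    events = [[], []]
    t = 0.0
    while True:
        # With exponential kernels the total intensity is non-increasing
        # between events, so its current value is a valid upper bound.
        lam_bar = sum(intensity(i, t, events, mu, alpha, beta) for i in (0, 1))
        t += random.expovariate(lam_bar)
        if t >= T:
            return events
        lams = [intensity(i, t, events, mu, alpha, beta) for i in (0, 1)]
        u = random.random() * lam_bar
        if u < lams[0]:
            events[0].append(t)        # accepted as an event of process 0
        elif u < lams[0] + lams[1]:
            events[1].append(t)        # accepted as an event of process 1
        # otherwise the candidate point is thinned (rejected)

random.seed(0)
ev = simulate_hawkes(mu=[0.2, 0.1], alpha=[[0.0, 0.5], [0.5, 0.0]], beta=1.0, T=100.0)
print(len(ev[0]), len(ev[1]))
```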
Wikipedia is a major source of information providing a large variety of content online, trusted by readers from around the world. Readers go to Wikipedia to get reliable information about different subjects, one of the most popular being living people, and especially politicians. While a lot is known about the general usage and information consumption on Wikipedia, less is known about the life-cycle and quality of Wikipedia articles in the context of politics. The aim of this study is to quantify and qualify content production and consumption for articles about politicians, with a specific focus on UK Members of Parliament (MPs). First, we analyze spatio-temporal patterns of readers' and editors' engagement with MPs' Wikipedia pages, finding huge peaks of attention during election times, related to signs of engagement on other social media (e.g. Twitter). Second, we quantify editors' polarisation and find that most editors specialize in a specific party and choose specific news outlets as references. Finally we observe that the average citation quality is pretty high, with statements on 'Early life and career' missing citations most often (18%). | Wikipedia and Westminster: Quality and Dynamics of Wikipedia Pages about
UK Politicians | 9,842 |
Online users discuss and converse about all sorts of topics on social networks. Facebook, Twitter, and Reddit are among the many networks where users have this freedom of information sharing. The abundance of information shared over these networks makes them an attractive area for investigating all aspects of human behavior on information dissemination. Among the many interesting behaviors, controversiality within social cascades is of high interest to us. It is known that controversiality is bound to happen within online discussions. The online social network platform Reddit has a feature that tags comments as controversial if users have mixed opinions about that comment. The difference between this study and previous attempts at understanding controversiality on social networks is that we do not investigate topics that are known to be controversial. On the contrary, we examine typical cascades with comments that readers deemed to be controversial concerning the matter discussed. This work asks whether controversially initiated information cascades have characteristics distinct from non-controversial ones on Reddit. We used data collected from Reddit, consisting of around 17 million posts and their corresponding comments related to cybersecurity issues, to answer these emerging questions. From the comparative analyses conducted, controversial content travels faster and further from its origin. Understanding this phenomenon would shed light on how users or organizations might leverage it to control the spread of a specific message. | Controversial information spreads faster and further in Reddit | 9,843 |
Online social networks have become incredibly popular in recent years, which prompts an increasing number of companies to promote their brands and products through social media. This paper presents an approach for identifying influential nodes in online social networks for brand communication. We first construct a weighted network model for the users and their relationships extracted from the brand-related contents. We quantitatively measure the individual value of the nodes in the community from both the network structure and brand engagement aspects. Then an algorithm for identifying the influential nodes in the virtual brand community is proposed. The algorithm evaluates the importance of the nodes by their individual values as well as the individual values of their surrounding nodes. We extract and construct a virtual brand community for a specific brand from a real-life online social network as the dataset and empirically evaluate the proposed approach. The experimental results show that the proposed approach is able to identify influential nodes in online social networks. Using the approach, we obtain identification results with a higher ratio of verified users and greater user coverage. | Identify Influential Nodes in Online Social Network for Brand
Communication | 9,844 |
Link prediction is an important task in social network analysis. There are different characteristics (features) in a social network that can be used for link prediction. In this paper, we evaluate the effectiveness of aggregated features and topological features in link prediction using supervised learning. The aggregated features, in a social network, are aggregation functions of the attributes of the nodes. Topological features describe the topology or structure of a social network and its underlying graph. We evaluated the effectiveness of these features by measuring the performance of different supervised machine learning methods. Specifically, we selected five well-known supervised methods: J48 decision tree, multi-layer perceptron (MLP), support vector machine (SVM), logistic regression and Naive Bayes (NB). We measured the performance of these five methods with different sets of features of the DBLP dataset. Our results indicate that the combination of aggregated and topological features generates the best performance. For evaluation purposes, we used accuracy, area under the ROC curve (AUC) and F-measure. Our selected features can be used for the analysis of almost any social network, because they capture the important characteristics of the underlying graph. The significance of our work is that the selected features can be very effective in the analysis of big social networks, where we usually deal with data sets of millions or billions of instances. Using fewer, but more effective, features can help in the analysis of big social networks. | Link Prediction Using Supervised Machine Learning based on Aggregated
and Topological Features | 9,845 |
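A minimal sketch of this supervised setup, assuming a toy graph and a small illustrative feature set (common neighbors, Jaccard coefficient, preferential attachment) rather than the paper's DBLP data and full feature list.

```python
import networkx as nx
from sklearn.linear_model import LogisticRegression

def features(G, u, v):
    cn = len(list(nx.common_neighbors(G, u, v)))        # common neighbors
    jac = next(nx.jaccard_coefficient(G, [(u, v)]))[2]  # Jaccard coefficient
    pa = G.degree(u) * G.degree(v)                      # preferential attachment
    return [cn, jac, pa]

G = nx.karate_club_graph()
pos = list(G.edges())[:30]          # existing links  -> label 1
neg = list(nx.non_edges(G))[:30]    # absent links    -> label 0
X = [features(G, u, v) for u, v in pos + neg]
y = [1] * len(pos) + [0] * len(neg)

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([features(G, 0, 33)]))  # predicted link probability
```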
There is an extensive body of research on Social Network Analysis (SNA) based on the email archive. The network used in the analysis is generally extracted either by capturing the email communication in the From, To, Cc and Bcc email header fields or by the entities contained in the email message. In the latter case, the entities could be, for instance, bags of words, URLs, names, phone numbers, etc. They could also include the textual content of attachments, for instance Microsoft Word documents, Excel spreadsheets, or Adobe PDFs. The nodes in this network represent users and entities. The edges represent communication between users and relations to the entities. We suggest taking a different approach to the network extraction and using attachments shared between users as the edges. The motivation for this is two-fold. First, attachments represent the "intimacy" manifestation of a relation's strength. Second, statistical analysis of private email archives that we collected and of the Enron email corpus shows that attachments contribute on average around 80-90% of the archive's disk-space usage, which means that most of the data is presently ignored in the SNA of email archives. Consequently, we hypothesize that this approach might provide more insight into the social structure of the email archive. We extract the communication and shared-attachment networks from the Enron email corpus. We further analyze degree, betweenness, closeness, and eigenvector centrality measures in both networks and review the differences and what can be learned from them. We use the nearest neighbor algorithm to generate similarity groups for five Enron employees. The groups are consistent with Enron's organizational chart, which validates our approach. | An Email Attachment is Worth a Thousand Words, or Is It? | 9,846 |
Property graphs can be used to represent heterogeneous networks with labeled (attributed) vertices and edges. Given a property graph, simulating another graph of the same or greater size with the same statistical properties with respect to the labels and connectivity is critical for privacy preservation and benchmarking purposes. In this work we tackle the problem of capturing the statistical dependence of the edge connectivity on the vertex labels and using the same distribution to regenerate property graphs of the same or expanded size in a scalable manner. However, accurate simulation becomes a challenge when the attributes do not completely explain the network structure. We propose the Property Graph Model (PGM) approach that uses a label augmentation strategy to mitigate the problem and preserve the vertex label and the edge connectivity distributions as well as their correlation, while also replicating the degree distribution. Our proposed algorithm is scalable, with a linear complexity in the number of edges in the target graph. We illustrate the efficacy of the PGM approach in regenerating and expanding the datasets by leveraging two distinct illustrations. Our open-source implementation is available on GitHub. | When Labels Fall Short: Property Graph Simulation via Blending of
Network Structure and Vertex Attributes | 9,847 |
Behavioral economics shows us that emotions play an important role in individual behavior and decision-making. Does this also affect collective decision making in a community? Here we investigate whether the community sentiment energy of a topic is related to the spreading popularity of the topic. To compute the community sentiment energy of a topic, we first analyze the sentiment of a user on the key phrases of the topic based on the recent tweets of the user. Then we compute the total sentiment energy of all users in the community on the topic based on the Markov Random Field (MRF) model and a graph entropy model. Experiments on two communities reveal a linear correlation between the community sentiment energy and the real spreading popularity of topics. Based on this finding, we propose two models to predict the popularity of topics. Experimental results show the effectiveness of the two models and the helpfulness of sentiment in predicting the popularity of topics. Experiments also show that community sentiment affects the community's collective decision of whether or not to spread a topic. | Predicting the Popularity of Topics based on User Sentiment in
Microblogging Websites | 9,848 |
This paper addresses the issue of suppressing a rumor using the truth in a cost-effective way. First, an individual-level dynamical model capturing the rumor-truth mixed spreading processes is proposed. On this basis, the cost-effective rumor-containing problem is modeled as an optimization problem. Extensive experiments show that finding a cost-effective rumor-containing strategy boils down to enhancing the first truth-spreading rate until the cost effectiveness of the rumor-containing strategy reaches the first turning point. This finding greatly reduces the time spent for solving the optimization problem. The influences of different factors on the optimal cost effectiveness of a rumor-containing strategy are examined through computer simulations. We believe our findings help suppress rumors in a cost-effective way. To our knowledge, this is the first time the rumor-containing problem is treated this way. | A cost-effective rumor-containing strategy | 9,849 |
With the growing shift towards news consumption primarily through social media sites like Twitter, most of the traditional as well as new-age media houses are promoting their news stories by tweeting about them. The competition for user attention in such mediums has led many media houses to use catchy sensational form of tweets to attract more users - a process known as clickbaiting. In this work, using an extensive dataset collected from Twitter, we analyze the social sharing patterns of clickbait and non-clickbait tweets to determine the organic reach of such tweets. We also attempt to study the sections of Twitter users who actively engage themselves in following clickbait and non-clickbait tweets. Comparing the advent of clickbaits with the rise of tabloidization of news, we bring out several important insights regarding the news consumers as well as the media organizations promoting news stories on Twitter. | Tabloids in the Era of Social Media? Understanding the Production and
Consumption of Clickbaits in Twitter | 9,850 |
Millions of users on the Internet discuss a variety of topics on Question-and-Answer (Q&A) instances. However, not all instances and topics receive the same amount of attention, as some thrive and achieve self-sustaining levels of activity, while others fail to attract users and either never grow beyond being a small niche community or become inactive. Hence, it is imperative to not only better understand but also to distill deciding factors and rules that define and govern sustainable Q&A instances. We aim to empower community managers with quantitative methods for them to better understand, control and foster their communities, and thus contribute to making the Web a more efficient place to exchange information. To that end, we extract, model and cluster user activity-based time series from $50$ randomly selected Q&A instances from the Stack Exchange network to characterize user behavior. We find four distinct types of user activity temporal patterns, which vary primarily according to the users' activity frequency. Finally, by breaking down total activity in our 50 Q&A instances by the previously identified user activity profiles, we classify those 50 Q&A instances into three different activity profiles. Our parsimonious categorization of Q&A instances aligns with the stage of development and maturity of the underlying communities, and can potentially help operators of such instances: We not only quantitatively assess progress of Q&A instances, but we also derive practical implications for optimizing Q&A community building efforts, as we e.g. recommend which user types to focus on at different developmental stages of a Q&A community. | Activity Archetypes in Question-and-Answer (Q&A) Websites - A Study of
50 Stack Exchange Instances | 9,851 |
The problem of unicity and reidentifiability of records in large-scale databases has been studied in different contexts and approaches, with focus on preserving privacy or matching records from different data sources. With an increasing number of service providers nowadays routinely collecting location traces of their users on unprecedented scales, there is a pronounced interest in the possibility of matching records and datasets based on spatial trajectories. Extending previous work on reidentifiability of spatial data and trajectory matching, we present the first large-scale analysis of user matchability in real mobility datasets on realistic scales, i.e. among two datasets that consist of several million people's mobility traces, coming from a mobile network operator and transportation smart card usage. We extract the relevant statistical properties which influence the matching process and analyze their impact on the matchability of users. We show that for individuals with typical activity in the transportation system (those making 3-4 trips per day on average), a matching algorithm based on the co-occurrence of their activities is expected to achieve only a 16.8% success rate after a one-week-long observation of their mobility traces, and over 55% after four weeks. We show that the main determinant of matchability is the expected number of co-occurring records in the two datasets. Finally, we discuss different scenarios in terms of data collection frequency and give estimates of matchability over time. We show that with higher-frequency data collection becoming more common, we can expect much higher success rates in even shorter intervals. | Towards matching user mobility traces in large-scale datasets | 9,852 |
In this paper we present a method to identify tweets that a user may find interesting enough to retweet. The method is based on a global, but personalized classifier, which is trained on data from several users, represented in terms of user-specific features. Thus, the method is trained on a sufficient volume of data, while also being able to make personalized decisions, i.e., the same post received by two different users may lead to different classification decisions. Experimenting with a collection of approx. 130K tweets received by 122 journalists, we train a logistic regression classifier, using a wide variety of features: the content of each tweet, its novelty, its text similarity to tweets previously posted or retweeted by the recipient or sender of the tweet, the network influence of the author and sender, and their past interactions. Our system obtains F1 approx. 0.9 using only 10 features and 5K training instances. | Identifying Retweetable Tweets with a Personalized Global Classifier | 9,853 |
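One of the listed features, the text similarity between an incoming tweet and the recipient's history, can be sketched as follows; the example tweets and the choice of TF-IDF cosine similarity are illustrative assumptions, not necessarily the paper's exact representation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_retweets = [
    "new press freedom report released today",
    "data journalism course announced for reporters",
]
incoming = ["breaking: journalists release press freedom index"]

vec = TfidfVectorizer()
M = vec.fit_transform(past_retweets + incoming)
# Similarity of the incoming tweet to each past retweet; the maximum can
# serve as one scalar feature for the classifier.
sims = cosine_similarity(M[-1], M[:-1])
print(sims.max())
```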
Predicting investors' reactions to financial and political news is important for the early detection of stock market jitters. Evidence from several recent studies suggests that online social media could improve prediction of stock market movements. However, utilizing such information to predict strong stock market fluctuations has not been explored so far. In this work, we propose a novel event detection method on Twitter, tailored to detect financial and political events that influence a specific stock market. The proposed approach applies a bursty topic detection method on a stream of tweets related to finance or politics, followed by a classification process which filters out events that do not influence the examined stock market. We train our classifier to recognise real events by using solely information about stock market volatility, without the need for manual labeling. We model Twitter events as feature vectors that encompass a rich variety of information, such as the geographical distribution of tweets, their polarity, information about their authors as well as information about bursty words associated with the event. We show that utilizing only information about tweets' polarity, as most previous studies do, wastes important information. We apply the proposed method on high-frequency intra-day data from the Greek and Spanish stock markets and we show that our financial event detector successfully predicts most of the stock market jitters. | Linking Twitter Events With Stock Market Jitters | 9,854 |
Many Twitter users are bots. They can be used for spamming, opinion manipulation and online fraud. Recently we discovered the Star Wars botnet, consisting of more than 350,000 bots tweeting random quotations exclusively from Star Wars novels. The bots were exposed because they tweeted uniformly from any location within two rectangle-shaped geographic zones covering Europe and the USA, including sea and desert areas in the zones. In this paper, we report another unusual behaviour of the Star Wars bots, that the bots were created in bursts or batches, and they only tweeted in their first few minutes since creation. Inspired by this observation, we discovered an even larger Twitter botnet, the Bursty botnet with more than 500,000 bots. Our preliminary study showed that the Bursty botnet was directly responsible for a large-scale online spamming attack in 2012. Most bot detection algorithms have been based on assumptions of `common' features that were supposedly shared by all bots. Our discovered botnets, however, do not show many of those features; instead, they were detected by their distinct, unusual tweeting behaviours that were unknown until now. | Discovery of the Twitter Bursty Botnet | 9,855 |
Popular User-Review Social Networks (URSNs)---such as Dianping, Yelp, and Amazon---are often the targets of reputation attacks in which fake reviews are posted in order to boost or diminish the ratings of listed products and services. These attacks often emanate from a collection of accounts, called Sybils, which are collectively managed by a group of real users. A new advanced scheme, which we term elite Sybil attacks, recruits organically highly-rated accounts to generate seemingly-trustworthy and realistic-looking reviews. These elite Sybil accounts taken together form a large-scale sparsely-knit Sybil network for which existing Sybil fake-review defense systems are unlikely to succeed. In this paper, we conduct the first study to define, characterize, and detect elite Sybil attacks. We show that contemporary elite Sybil attacks have a hybrid architecture, with the first tier recruiting elite Sybil workers and distributing tasks by Sybil organizers, and with the second tier posting fake reviews for profit by elite Sybil workers. We design ElsieDet, a three-stage Sybil detection scheme, which first separates out suspicious groups of users, then identifies the campaign windows, and finally identifies elite Sybil users participating in the campaigns. We perform a large-scale empirical study on ten million reviews from Dianping, by far the most popular URSN service in China. Our results show that reviews from elite Sybil users are more spread out temporally, craft more convincing reviews, and have higher filter bypass rates. We also measure the impact of Sybil campaigns on various industries (such as cinemas, hotels, restaurants) as well as chain stores, and demonstrate that monitoring elite Sybil users over time can provide valuable early alerts against Sybil campaigns. | Smoke Screener or Straight Shooter: Detecting Elite Sybil Attacks in
User-Review Social Networks | 9,856 |
This paper presents an efficient method for generating and rendering photorealistic hair in two-dimensional pictures. The method consists of three major steps. A simulated artist's drawing is used to design the rough hair shape. A convolution-based filter is then used to generate photorealistic hair patches. A refinement procedure is finally used to blend the boundaries of the patches with the surrounding areas. This method can be used to create all types of photorealistic human hair (head hair, facial hair and body hair). It is also suitable for fur and grass generation. Applications of this method include: hairstyle designing/editing, damaged hair image restoration, human hair animation, virtual makeover of a human, and landscape creation. | Computer-Generated Photorealistic Hair | 9,857 |
Environment maps are used to simulate reflections off curved objects. We present a technique to reflect a user, or a group of users, in a real environment, onto a virtual object, in a virtual reality application, using the live video feeds from a set of cameras, in real-time. Our setup can be used in a variety of environments ranging from outdoor or indoor scenes. | Embedded Reflection Mapping | 9,858 |
The Persint program is designed for the three-dimensional representation of objects and for interfacing with and access to a variety of independent applications, in a fully interactive way. Facilities are provided for spatial navigation and the definition of the visualization properties, in order to interactively set the viewing and viewed points, and to obtain the desired perspective. In parallel, applications may be launched through the use of dedicated interfaces, such as the interactive reconstruction and display of physics events. Recent developments have focused on interfacing to the XML ATLAS General Detector Description AGDD, making it a widely used tool for XML developers. The graphics capabilities of this program were exploited in the context of the ATLAS 2002 Muon Testbeam, where it was used as an online event display, integrated in the online software framework and participating in the commissioning and debugging of the detector system. | The Persint visualization program for the ATLAS experiment | 9,859 |
Many entities managed by HEP Software Frameworks represent spatial (3-dimensional) real objects. Effective definition, manipulation and visualization of such objects is an indispensable functionality. GraXML is a modular Geometric Modeling toolkit capable of processing geometric data of various kinds (detector geometry, event geometry) from different sources and delivering them in ways suitable for further use. Geometric data are first modeled in one of the Generic Models. Those Models are then used to populate a powerful Geometric Model based on the Java3D technology. While Java3D was originally created just to provide visualization of 3D objects, its light weight and high functionality allow an effective reuse as a general geometric component. This is possible also thanks to a large overlap between graphical and general geometric functionality and the modular design of Java3D itself. Its graphical functionalities also allow a natural visualization of all manipulated elements. All these techniques have been developed primarily (or only) for the Java environment. It is, however, possible to interface them transparently to Frameworks built in other languages, like for example C++. The GraXML toolkit has been tested with data from several sources, as for example the ATLAS and ALICE detector descriptions and ATLAS event data. Prototypes for other sources, like the Geometry Description Markup Language (GDML), exist too, and an interface to any other source is easy to add. | GraXML - Modular Geometric Modeler | 9,860 |
A new graphics client prototype for the HepRep protocol is presented. Based on modern toolkits and high level languages (C++ and Ruby), Fred is an experiment to test applicability of scripting facilities to the high energy physics event display domain. Its flexible structure, extensibility and the use of the HepRep protocol are key features for its use in the astroparticle experiment GLAST. | The FRED Event Display: an Extensible HepRep Client for GLAST | 9,861 |
HepRep is a generic, hierarchical format for description of graphics representables that can be augmented by physics information and relational properties. It was developed for high energy physics event display applications and is especially suited to client/server or component frameworks. The GLAST experiment, an international effort led by NASA for a gamma-ray telescope to launch in 2006, chose HepRep to provide a flexible, extensible and maintainable framework for their event display without tying their users to any one graphics application. To support HepRep in their GAUDI infrastructure, GLAST developed a HepRep filler and builder architecture. The architecture hides the details of XML and CORBA in a set of base and helper classes, allowing physics experts to focus on what data they want to represent. GLAST has two GAUDI services: HepRepSvc, which registers HepRep fillers in a global registry and allows the HepRep to be exported to XML, and CorbaSvc, which allows the HepRep to be published through a CORBA interface and which allows the client application to feed commands back to GAUDI (such as start next event, or run some GAUDI algorithm). GLAST's HepRep solution gives users a choice of client applications, WIRED (written in Java) or FRED (written in C++ and Ruby), and leaves them free to move to any future HepRep-compliant event display. | The Use of HepRep in GLAST | 9,862 |
We present an efficient and inexpensive-to-develop application for interactive high-performance parallel visualization. We extend popular APIs such as Open Inventor and VTK to support commodity-based cluster visualization. Our implementation follows a standard master/slave concept: the general idea is to have a ``Master'' node, which will intercept a sequential graphical user interface (GUI) and broadcast it to the ``Slave'' nodes. The interactions between the nodes are implemented using MPI. The parallel remote rendering uses Chromium. This paper mainly reports our implementation experiences. We present in detail the proposed model and key aspects of its implementation. Also, we present performance measurements, we benchmark and quantitatively demonstrate the dependence of the visualization speed on the data size and the network bandwidth, and we identify the singularities and draw conclusions on Chromium's sort-first rendering architecture. The most original part of this work is the combined use of Open Inventor and Chromium. | Application of interactive parallel visualization for commodity-based
clusters using visualization APIs | 9,863 |
Conventional visualization media such as MRI prints and computer screens are inherently two-dimensional, making them incapable of displaying true 3D volume data sets. Applying only transparency or intensity projection, and ignoring light-matter interaction, will likely fail to give optimal results. Little research has been done on using reflectance functions to visually separate the various segments of an MRI volume. We will explore whether applying specific reflectance functions to individual anatomical structures can help in building an intuitive 2D image from a 3D dataset. We will test our hypothesis by visualizing a statistical analysis of the genetic influences on variations in human brain morphology, because it inherently contains complex and many different types of data, making it a good candidate for our approach. | Visualization of variations in human brain morphology using
differentiating reflection functions | 9,864 |
This paper presents an algorithm that transforms color visual images, like photographs or paintings, into tactile graphics. In the algorithm, the edges of objects are detected and the colors of the objects are estimated. Then, the edges and the colors are encoded into lines and textures in the output tactile image. The design of the method is grounded in various characteristics of haptic image recognition. Also, means of presenting the tactile images in printouts are discussed. Example translated images are shown. | An Algorithm for Transforming Color Images into Tactile Graphics | 9,865 |
We develop multiple view visualization of higher dimensional data. Our work was chiefly motivated by the need to extract insight from four-dimensional Quantum Chromodynamic (QCD) data. We develop visualization where multiple views, generally views of 3D projections or slices of a higher dimensional data set, are tightly coupled not only by their specific order but also by a view-synchronizing interaction style, and an internally defined interaction language. The tight coupling of the different views allows a fast and well-coordinated exploration of the data. In particular, the visualization allowed us to easily make consistency checks of the 4D QCD data and to infer the correctness of particle property calculations. The software developed was also successfully applied in material studies, in particular studies of meteorite properties. Our implementation uses the VTK API. To handle a large number of views (slices/projections) and to still maintain good resolution, we use an IBM T221 display (3840 x 2400 pixels). | Interactive visualization of higher dimensional data in a multiview
environment | 9,866 |
A procedure for interpolating between specified points of a curve or surface is described. The method guarantees slope continuity at all junctions. A surface panel divided into p x q contiguous patches is completely specified by the coordinates of (p+1) x (q+1) points. Each individual patch, however, depends parametrically on the coordinates of 16 points, allowing shape flexibility and global conformity. | Analytic Definition of Curves and Surfaces by Parabolic Blending | 9,867 |
The past two decades have seen rapid growth in physically-based modeling of fluids for computer graphics applications. In this area, a common top-down approach is to model the fluid dynamics by the Navier-Stokes equations and apply a numerical technique such as Finite Differences or Finite Elements for the simulation. In this paper we focus on fluid modeling through Lattice Gas Cellular Automata (LGCA) for computer graphics applications. LGCA are discrete models based on point particles that move on a lattice, according to suitable and simple rules, in order to mimic a fully molecular dynamics. By the Chapman-Enskog expansion, a known multiscale technique in this area, it can be demonstrated that the Navier-Stokes model can be reproduced by the LGCA technique. Thus, with LGCA we get a fluid model that does not require the solution of complicated equations. Therefore, we combine the advantage of the low computational cost of LGCA and its ability to mimic realistic fluid dynamics to develop a new animation framework for computer graphics applications. In this work, we discuss the theoretical elements of our proposal and show experimental results. | Lattice Gas Cellular Automata for Computational Fluid Animation | 9,868 |
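A minimal sketch of the lattice gas idea, using the simple HPP model (the FHP variants on a hexagonal lattice are usually preferred for fluid animation): boolean particles on four velocity channels, updated by a collision step followed by a streaming step, with periodic boundaries.

```python
import numpy as np

N = 64
rng = np.random.default_rng(0)
# cells[d]: particles moving in direction d (0=east, 1=west, 2=north, 3=south)
cells = rng.random((4, N, N)) < 0.2

def step(c):
    e, w, n, s = c
    # Collision: exactly two head-on particles scatter perpendicularly.
    ew = e & w & ~n & ~s
    ns = n & s & ~e & ~w
    e, w = e ^ ew ^ ns, w ^ ew ^ ns
    n, s = n ^ ns ^ ew, s ^ ns ^ ew
    # Streaming: each channel moves one lattice site in its direction.
    return np.stack([
        np.roll(e, 1, axis=1), np.roll(w, -1, axis=1),
        np.roll(n, -1, axis=0), np.roll(s, 1, axis=0),
    ])

for _ in range(100):
    cells = step(cells)
print(cells.sum())  # particle number is conserved by both update rules
```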
Von Neumann's work on universal machines and hardware development have allowed the simulation of dynamical systems through a large set of interacting agents. This is a bottom-up approach which tries to derive global properties of a complex system through local interaction rules and agent behaviour. Traditionally, such systems are modeled and simulated through top-down methods based on differential equations. Agent-Based Modeling has the advantage of simplicity and low computational cost. However, unlike differential equations, there is no standard way to express agent behaviour. Besides, it is not clear how to analytically predict the results obtained by the simulation. In this paper we survey some of these methods. For expressing agent behaviour, formal methods like Stochastic Process Algebras have been used. Such an approach is useful if the global properties of interest can be expressed as a function of stochastic time series. However, if space variables must be considered, we shall change the focus. In this case, multiscale techniques, based on the Chapman-Enskog expansion, are used to establish the connection between the microscopic dynamics and the macroscopic observables. Also, we use data mining techniques, like Principal Component Analysis (PCA), to study agent systems like Cellular Automata. With the help of these tools we discuss a simple society model, a Lattice Gas Automaton for fluid modeling, and knowledge discovery in CA databases. Besides, we show the capabilities of NetLogo, a software tool for agent simulation of complex systems, and report on our experience with it. | Methods for Analytical Understanding of Agent-Based Modeling of Complex
Systems | 9,869 |
This article introduces a Mathematica package providing a graphics export function that automatically replaces Mathematica expressions in a graphic by the corresponding LaTeX constructs and positions them correctly. It thus facilitates the creation of publication-quality Encapsulated PostScript (EPS) graphics. | MathPSfrag: Creating Publication-Quality Labels in Mathematica Plots | 9,870 |
We define a Graphics Turing Test to measure graphics performance in a similar manner to the definition of the traditional Turing Test. To pass the test one needs to reach a computational scale, the Graphics Turing Scale, at which Computer Generated Imagery becomes comparatively indistinguishable from real images while also being interactive. We derive an estimate for this computational scale which, although large, is within reach of today's supercomputers. We consider advantages and disadvantages of various computer systems designed to pass the Graphics Turing Test. Finally we discuss commercial applications arising from the creation of such a system, in particular Interactive Cinema. | Graphics Turing Test | 9,871 |
We describe a system that lets a designer interactively draw patterns of strokes in the picture plane, then guide the synthesis of similar patterns over new picture regions. Synthesis is based on an initial user-assisted analysis phase in which the system recognizes distinct types of strokes (hatching and stippling) and organizes them according to perceptual grouping criteria. The synthesized strokes are produced by combining properties (e.g., length, orientation, parallelism, proximity) of the stroke groups extracted from the input examples. We illustrate our technique with a drawing application that allows the control of attributes and scale-dependent reproduction of the synthesized patterns. | Interactive Hatching and Stippling by Example | 9,872 |
The paper describes a new image processing technique for non-photorealistic rendering. The algorithm is based on random generation of gray tones and competing statistical requirements. The gray tone value of each pixel in the starting image is replaced by selecting among randomly generated tone values, according to the statistics of nearest-neighbor and next-nearest-neighbor pixels. Two competing conditions for replacing the tone values - one conditioned on the local mean value, the other on the local variance - produce a peculiar pattern on the image. This pattern has a labyrinthine tiling aspect. For certain subjects, the pattern enhances the look of the image. | Non-photorealistic image rendering with a labyrinthine tiling | 9,873 |
We have recently developed an algorithm for vector field visualization with oriented streamlines, able to depict the flow directions everywhere in a dense vector field and the sense of the local orientations. The algorithm has useful applications in the visualization of the director field in nematic liquid crystals. Here we propose an improvement to the algorithm that enhances the visualization of the local magnitude of the field. This new version of the algorithm is compared with the same procedure applied to the Line Integral Convolution (LIC) visualization. | Vector field visualization with streamlines | 9,874 |
Shape preservation behavior of a spline consists of criterial conditions for preserving convexity, inflection, collinearity, torsion and coplanarity shapes of the data polygonal arc. We present our results, which improve the definitions of, and provide geometrical insight into, each of the above shape preservation criteria. We also investigate the effect of various results from the literature on the shape preservation criteria. These results have not previously been considered in the context of the shape preservation behavior of splines. We point out that each curve segment needs to satisfy more than one shape preservation criterion. We investigate the conflict between different shape preservation criteria 1) on each curve segment and 2) of adjacent curve segments. We derive simplified formulas for the shape preservation criteria for cubic curve segments. We study the shape preservation behavior of cubic Catmull-Rom splines and see that, despite being very simple spline curves, they indeed satisfy all the shape preservation criteria. | Shape preservation behavior of spline curves | 9,875 |
The next generation of virtual environments for training is oriented towards collaborative aspects. Therefore, we have decided to enhance our platform for virtual training environments by adding collaboration opportunities and integrating humanoids. In this paper we put forward a humanoid model that suits both virtual humans and representations of real users in collaborative training activities. We suggest adaptations to the scenario model of our platform, making it possible to write collaborative procedures. We introduce an action-selection mechanism made up of a global distribution of actions and an individual choice. These models are currently being integrated and validated in GVT, a virtual training tool for maintenance of military equipment, developed in collaboration with the French company NEXTER-Group. | Virtual Environments for Training: From Individual Learning to
Collaboration with Humanoids | 9,876 |
This paper describes the implementation and evaluation of an open source library for mathematical morphology based on packed binary and run-length compressed images for document imaging applications. Abstractions and patterns useful in the implementation of the interval operations are described. A number of benchmarks and comparisons to bit-blit based implementations on standard document images are provided. | Efficient Binary and Run Length Morphology and its Application to
Document Image Processing | 9,877 |
A new computer haptics algorithm to be used in general interactive manipulations of deformable virtual objects is presented. In multimodal interactive simulations, haptic feedback computation often comes from contact forces. Consequently, the fidelity of haptic rendering depends significantly on contact space modeling. Contact and friction laws between deformable models are often simplified in current methods, and thus do not allow a "realistic" rendering of the subtleties of contact-space physical phenomena (such as slip and stick effects due to friction, or mechanical coupling between contacts). In this paper, we use Signorini's contact law and Coulomb's friction law as a computer haptics basis. Real-time performance is made possible by a linearization of the behavior in the contact space, formulated as the so-called Delassus operator, and iteratively solved by a Gauss-Seidel type algorithm. Dynamic deformation uses a corotational global formulation to obtain the Delassus operator, in which the mass and stiffness ratio are dissociated from the simulation time step. This last point is crucial to keeping haptic feedback stable. This global approach has been packaged, implemented, and tested. Stable and realistic 6D haptic feedback is demonstrated through a clipping task experiment. | Realistic Haptic Rendering of Interacting Deformable Objects in Virtual
Environments | 9,878 |
We consider perturbations of the complex quadratic map $z \to z^2 + c$ and the corresponding changes in their quasi-Mandelbrot sets. Depending on the particular perturbation, the visual form of the quasi-Mandelbrot set changes either sharply (when the perturbation reaches some critical value) or continuously. In the latter case we have a smooth transition from the classical form of the set to forms constructed from mostly linear structures, as is typical for two-dimensional real-number dynamics. Two examples of continuous evolution of the quasi-Mandelbrot set are described. | Quasi-Mandelbrot sets for perturbed complex analytic maps: visual
patterns | 9,879 |
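A minimal escape-time rendering sketch for such a perturbed quadratic map; the specific perturbation term `eps * conj(z)` is an illustrative choice, not necessarily one of the perturbations studied in the paper.

```python
import numpy as np

def quasi_mandelbrot(eps, n=400, max_iter=60, bound=2.0):
    xs = np.linspace(-2.0, 1.0, n)
    ys = np.linspace(-1.5, 1.5, n)
    c = xs[None, :] + 1j * ys[:, None]
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=int)
    for _ in range(max_iter):
        mask = np.abs(z) <= bound            # points that have not escaped yet
        z[mask] = z[mask] ** 2 + c[mask] + eps * np.conj(z[mask])
        counts += mask                       # escape-time accumulator
    return counts

img = quasi_mandelbrot(eps=0.1)
print(img.shape, img.max())
```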
In this work, we present a new method for generating a threshold structure. This kind of structure can be advantageously used in various halftoning algorithms such as clustered-dot or dispersed-dot dithering, error diffusion with threshold modulation, etc. The proposed method is based on rectifiable polyominoes -- a non-periodic hierarchical structure which tiles the Euclidean plane with no gaps. Each polyomino contains a fixed number of discrete threshold values. Thanks to its inherent non-periodic nature, combined with off-line optimization of threshold values, our polyomino-based threshold structure shows blue-noise spectral properties. The halftone images produced with this threshold structure have high visual quality. Although the proposed method is general, and can be applied to any polyomino tiling, we consider one particular case: tiling with G-hexominoes. We compare our polyomino-based threshold structure with the best-known state-of-the-art methods for generating threshold matrices, and observe a considerable improvement achieved with our method. | Polyomino-Based Digital Halftoning | 9,880 |
This thesis presents a two-layer uniform facet elastic object for real-time simulation based on a physics modeling method. It describes the elastic object procedural modeling algorithm with a particle system, from the simplest one-dimensional object to more complex two-dimensional and three-dimensional objects. The double-layered elastic object consists of inner and outer elastic mass-spring surfaces and compressible internal pressure. The density of the inner layer can be set different from the density of the outer layer; the motion of the inner layer can be opposite to the motion of the outer layer. These special features, which cannot be achieved by a single-layered object, result in an improved imitation of a soft body, such as the fluid-like, non-uniform deformation of tissue. The construction of the double-layered elastic object is closer to the physical structure of real tissue. The inertial behavior of the elastic object is well illustrated in environments with gravity and collisions with walls, ceiling, and floor. Collision detection uses an elastic-collision penalty method, and the motion of the object is guided by ordinary differential equation (ODE) computation. Users can interact with the modeled objects, deform them, and observe the response to their actions in real time. | Dynamic Deformation of Uniform Elastic Two-Layer Objects | 9,881 |
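A minimal sketch of the mass-spring dynamics underlying such elastic objects: a pinned 1-D chain of particles under gravity, integrated with semi-implicit (symplectic) Euler. All constants are illustrative, and the thesis's two-layer structure and internal pressure are not modeled here.

```python
import numpy as np

n, k, rest, mass, damping, dt, g = 5, 50.0, 0.2, 0.1, 0.05, 0.005, -9.8
pos = np.stack([np.zeros(n), -rest * np.arange(n)], axis=1)  # vertical chain
vel = np.zeros_like(pos)

def step(pos, vel):
    force = np.zeros_like(pos)
    force[:, 1] += mass * g                  # gravity on every particle
    force -= damping * vel                   # simple velocity damping
    for i in range(n - 1):                   # Hooke force for each spring
        d = pos[i + 1] - pos[i]
        length = np.linalg.norm(d)
        f = k * (length - rest) * d / length
        force[i] += f
        force[i + 1] -= f
    vel = vel + dt * force / mass            # update velocity first...
    vel[0] = 0.0                             # ...particle 0 stays pinned
    return pos + dt * vel, vel               # ...then position (symplectic)

for _ in range(2000):
    pos, vel = step(pos, vel)
print(pos[-1])  # lowest particle after settling under gravity
```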
This game is meant to be an extension of the overly-beaten pacman-style game (code-named "Yet Another Pacman 3D Adventures", or YAP3DAD), built from the proposed ideas and other projects with advanced visual and computer graphics features, including a game-in-a-game approach. The project is an open-source project published on SourceForge.net for possible future development and extension. | Yet Another Pacman 3D Adventures | 9,882 |
Many image watermarking schemes have been proposed in recent years, but they usually embed a watermark into the entire image without considering a particular object in the image that the image owner may be interested in. This paper proposes a watermarking scheme that can embed a watermark into an arbitrarily shaped object in an image. Before embedding, the image owner specifies an object of arbitrary shape that is of concern to him. Then the object is transformed into the wavelet domain using the in-place lifting shape-adaptive DWT (SA-DWT), and a watermark is embedded by modifying the wavelet coefficients. In order to make the watermark robust and transparent, the watermark is embedded in the average of wavelet blocks using a visual model based on the human visual system. The $n$ least significant bits (LSBs) of the wavelet coefficients are adjusted in concert with the average. Simulation results show that the proposed watermarking scheme is perceptually invisible and robust against many attacks such as lossy compression (e.g. JPEG, JPEG2000), scaling, adding noise, filtering, etc. | Digital Image Watermarking for Arbitrarily Shaped Objects Based On
SA-DWT | 9,883 |
In this paper a novel spatial-domain LSB-based watermarking scheme for color images is proposed. The proposed scheme is a blind, invisible watermarking scheme. Our scheme introduces the concept of storing a variable number of bits in each pixel based on the actual color value of the pixel: channels whose color value is equal to or higher than the pixel's intensity store a higher number of watermark bits. The Red, Green and Blue channels of the color image have been used for watermark embedding. The watermark is embedded into selected channels of each pixel. The proposed method supports a high watermark embedding capacity, equivalent to the size of the cover image. The security of the watermark is preserved by permuting the watermark bits using a secret key. The proposed scheme is found robust to various image processing operations such as image compression, blurring, salt-and-pepper noise, filtering and cropping. | Secure Watermarking Scheme for Color Image Using Intensity of Pixel and
LSB Substitution | 9,884 |
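A minimal sketch of the pixel-level idea, assuming an illustrative 2-vs-1 bit allocation rule; the paper's exact allocation and key-based bit permutation are not reproduced here.

```python
def embed_pixel(rgb, bits):
    """rgb: (r, g, b) ints in 0-255; bits: iterator of watermark bits."""
    intensity = sum(rgb) // 3
    out = []
    for ch in rgb:
        depth = 2 if ch >= intensity else 1   # more bits in brighter channels
        payload = 0
        for _ in range(depth):
            payload = (payload << 1) | next(bits)
        # Clear the lowest `depth` bits, then write the watermark payload.
        out.append((ch & ~((1 << depth) - 1)) | payload)
    return tuple(out)

watermark = iter([1, 0, 1, 1, 0])
print(embed_pixel((200, 120, 40), watermark))  # -> (202, 123, 40)
```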
In this paper a new watermarking scheme is presented based on log-average luminance. A color image is divided into blocks after converting the RGB image to the YCbCr color space. A monochrome image of 1024 bytes is used as the watermark. To embed the watermark, 16 blocks of size 8x8 are selected and used to embed the watermark image into the original image. The selected blocks are chosen spirally (beginning from the center of the image) among the blocks that have a log-average luminance higher than or equal to the log-average luminance of the entire image. Each byte of the monochrome watermark is embedded by updating the luminance value of a pixel of the image. If the byte of the watermark image represents white (255), a value $\alpha$ is added to the pixel's luminance value; if it is black (0), $\alpha$ is subtracted from the luminance value. To extract the watermark, the blocks are selected as above; if the difference between the luminance value of the watermarked image pixel and the original image pixel is greater than 0, the watermark pixel is assumed to be white, otherwise it is assumed to be black. Experimental results show that the proposed scheme is robust against conversion of the watermarked image to grayscale, image cropping, and JPEG compression. | Spatial Domain Watermarking Scheme for Colored Images Based on
Log-average Luminance | 9,885 |
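A minimal sketch of the block-selection criterion, assuming log-average luminance is computed as the exponential of the mean log value (with a small epsilon to avoid log(0)); the image data is synthetic.

```python
import numpy as np

def log_average(lum, eps=1e-4):
    return np.exp(np.mean(np.log(lum + eps)))

rng = np.random.default_rng(0)
Y = rng.integers(0, 256, size=(64, 64)).astype(float)   # luminance (Y) channel
global_la = log_average(Y)

# Keep 8x8 blocks whose log-average luminance is at least the global value.
eligible = [
    (i, j)
    for i in range(0, 64, 8) for j in range(0, 64, 8)
    if log_average(Y[i:i + 8, j:j + 8]) >= global_la
]
print(len(eligible), "of", (64 // 8) ** 2, "blocks are candidates")
```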
Separation of the text regions from background texture and graphics is an important step of any optical character recognition system for images containing both text and graphics. In this paper, we have presented a novel text/graphics separation technique and a method for skew correction of text regions extracted from business card images captured with a cell-phone camera. At first, the background is eliminated at a coarse level based on intensity variance. This makes the foreground components distinct from each other. Then the non-text components are removed using various characteristic features of text and graphics. Finally, the text regions are skew corrected for further processing. Experimenting with business card images of various resolutions, we have found an optimum performance of 98.25% (recall) with 0.75 MP images, which takes 0.17 seconds processing time and 1.1 MB peak memory on a moderately powerful computer (DualCore 1.73 GHz Processor, 1 GB RAM, 1 MB L2 Cache). The developed technique is computationally efficient and consumes low memory so as to be applicable on mobile devices. | Text/Graphics Separation and Skew Correction of Text Regions of Business
Card Images for Mobile Devices | 9,886 |
This paper describes the visualization of a chaotic attractor and elements of its singularities in 3D space. A 3D view of these effects makes it possible to create a demonstrative projection of the chaotic relations generated by a physical circuit, the Chua's circuit. Macro views of the chaotic attractor provide not only a visual spatial illustration of the representative point's motion in state space, but also its relation to the planes of the singularity elements. Our program enables viewing the chaotic attractor in both 2D and 3D space, together with the visualization of plane objects -- elements of the singularities. | Macro and micro view on steady states in state space | 9,887 |
We often need to plot 3-D functions, e.g., in many scientific experiments. Plotting 3-D functions on a 2-D screen requires some kind of mapping. Though 3-D rendering libraries such as OpenGL and DirectX have made this job very simple, these libraries come with many complex pre-operations that are simply not intended, and integrating these libraries with any kind of system is often a tough trial. This article presents a very simple method of mapping from 3-D to 2-D that is free from any complex pre-operations and works with any graphics system that provides some primitive 2-D graphics functions. We also discuss the inverse transform and how to do basic computer graphics transformations using our coordinate mapping system. | A Very Simple Approach for 3-D to 2-D Mapping | 9,888 |
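A minimal sketch of one standard 3-D to 2-D perspective mapping of the kind the article describes; the formula and constants below are a common textbook construction, not necessarily the article's exact method.

```python
def project(x, y, z, d=2.0, width=800, height=600, scale=200):
    # Perspective divide: points farther away (larger z) shrink toward
    # the screen center; `d` is the eye-to-projection-plane distance.
    f = d / (d + z)
    sx = width / 2 + scale * f * x
    sy = height / 2 - scale * f * y   # screen y grows downward
    return sx, sy

# Project the corners of a unit cube sitting in front of the viewer.
for x, y, z in [(i, j, k) for i in (0, 1) for j in (0, 1) for k in (0, 1)]:
    print((x, y, z), "->", project(x, y, z))
```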
Separation of the text regions from background texture and graphics is an important step of any optical character recognition system for images containing both text and graphics. In this paper, we have presented a novel text/graphics separation technique for business card images captured with a cell-phone camera. At first, the background is eliminated at a coarse level based on intensity variance. This makes the foreground components distinct from each other. Then the non-text components are removed using various characteristic features of text and graphics. Finally, the text regions are skew corrected and binarized for further processing. Experimenting with business card images of various resolutions, we have found an optimum performance of 98.54% with 0.75 MP images, which takes 0.17 seconds processing time and 1.1 MB peak memory on a moderately powerful computer (DualCore 1.73 GHz Processor, 1 GB RAM, 1 MB L2 Cache). The developed technique is computationally efficient and consumes low memory so as to be applicable on mobile devices. | Text/Graphics Separation for Business Card Images for Mobile Devices | 9,889 |
We present a novel approach to finding critical points in cell-wise barycentrically or bilinearly interpolated vector fields on surfaces. The Poincar\'e index of the critical points is determined by investigating the qualitative behavior of 0-level sets of the interpolants of the vector field components in parameter space using precomputed combinatorial results, thus avoiding the computation of the Jacobian of the vector field at the critical points in order to determine its index. The locations of the critical points within a cell are determined analytically to achieve accurate results. This approach leads to a correct treatment of cases with two first-order critical points or one second-order critical point of bilinearly interpolated vector fields within one cell, which would be missed by examining the linearized field only. We show that for the considered interpolation schemes determining the index of a critical point can be seen as a coloring problem of cell edges. A complete classification of all possible colorings in terms of the types and number of critical points yielded by each coloring is given using computational group theory. We present an efficient algorithm that makes use of these precomputed classifications in order to find and classify critical points in a cell-by-cell fashion. Issues of numerical stability, construction of the topological skeleton, topological simplification, and the statistics of the different types of critical points are also discussed. | Finding and Classifying Critical Points of 2D Vector Fields: A
Cell-Oriented Approach Using Group Theory | 9,890 |
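For readers unfamiliar with the Poincaré index, the sketch below estimates it numerically as the winding number of the field along a small circle around the critical point. This is the generic definition that the paper deliberately avoids evaluating via the Jacobian; it is not the paper's combinatorial edge-coloring method, and is shown only to make the notion concrete.

```python
import numpy as np

def poincare_index(field, center, radius=0.1, samples=720):
    """Estimate the Poincaré index of a critical point as the winding
    number of the field along a small circle around it."""
    t = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    pts = center + radius * np.stack([np.cos(t), np.sin(t)], axis=1)
    angles = np.array([np.arctan2(*field(p)[::-1]) for p in pts])
    d = np.diff(np.append(angles, angles[0]))
    d = (d + np.pi) % (2.0 * np.pi) - np.pi   # wrap each step to (-pi, pi]
    return round(d.sum() / (2.0 * np.pi))

# A saddle (u, v) = (x, -y) has index -1; a source (x, y) has index +1.
print(poincare_index(lambda p: (p[0], -p[1]), np.array([0.0, 0.0])))  # -1
print(poincare_index(lambda p: (p[0],  p[1]), np.array([0.0, 0.0])))  # +1
```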
In this thesis, a rendering system and an accompanying tool chain for Virtual Texturing are presented. Our tools make it possible to automatically retexture existing geometry in order to apply a unique texture to each face. Furthermore, we investigate several techniques that try to minimize visual artifacts in the case that only a small number of pages can be streamed per frame. We analyze the influence of different heuristics that are responsible for the page selection. Alongside these results, we present a measurement method that allows the comparison of our heuristics. | Virtual Texturing | 9,891
The Phong illumination model is still widely used in real-time 3D visualization systems. The aim of this article is to document problems with the Phong illumination model that are encountered by an important professional user group, namely digital designers. This leads to a visual evaluation of Phong illumination, which, at least in this condensed form, still seems to be missing in the literature. It is hoped that by explicating these flaws, awareness of the limitations and interdependencies of the model will increase, both among fellow users and among researchers and developers. | What's wrong with Phong - Designers' appraisal of shading in CAD-systems | 9,892
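Since the article critiques the Phong model without restating it, here is a minimal Python rendition of the classic model for reference. The material coefficients and the single white unit-intensity light are arbitrary illustrative choices, not values from the article.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong(n, l, v, ka=0.1, kd=0.6, ks=0.3, shininess=32.0):
    """Classic Phong reflection for one white light of unit intensity:
    I = ka + kd*(N.L) + ks*(R.V)^shininess, with R the mirror reflection
    of the light direction L about the surface normal N."""
    n, l, v = normalize(n), normalize(l), normalize(v)
    diffuse = max(np.dot(n, l), 0.0)
    r = 2.0 * np.dot(n, l) * n - l            # reflected light vector
    specular = max(np.dot(r, v), 0.0) ** shininess if diffuse > 0 else 0.0
    return ka + kd * diffuse + ks * specular

print(phong(n=np.array([0, 0, 1.0]),
            l=np.array([0, 1, 1.0]),
            v=np.array([0, 0, 1.0])))
```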
The paper presents an ancient landscape design as an example of graphic design from an age and place where no written documents existed. The design is created by a network of earthworks, which constitute the remains of an extensive ancient agricultural system. It can be seen in Google satellite imagery of the Peruvian region near Lake Titicaca, as a texture superimposed on the background landform. In this texture, many drawings (geoglyphs) can be observed. | Geoglyphs of Titicaca as an ancient example of graphic design | 9,893
In this work, SVG is translated into VML or HTML using JavaScript based on the Backbase Client Framework. The goal of this project is to implement SVG so that it can be viewed in Internet Explorer without any plug-in and works together with other Backbase Client Framework languages. The result of this project will be added as an extension to the current Backbase Client Framework. | Across Browsers SVG Implementation | 9,894
This paper presents a high-speed and area-efficient DWT-processor-based design for image compression applications. In the proposed design, a pipelined, partially serial architecture has been used to enhance the speed along with optimal utilization of the resources available on the target FPGA. The proposed model has been designed and simulated using Simulink and System Generator blocks, synthesized with the Xilinx Synthesis Tool (XST), and implemented on the Spartan 2 and 3 based XC2S100-5tq144 and XC3S500E-4fg320 target devices. The results show that the proposed design can operate at a maximum frequency of 231 MHz in the case of Spartan 3, consuming 117 mW of power at a 28 °C junction temperature. The result comparison shows an improvement of 15% in speed. | High Speed and Area Efficient 2D DWT Processor based Image Compression
Signal & Image Processing | 9,895 |
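As a software-level illustration of the transform the hardware implements (not of the pipelined architecture itself), here is one level of the 2-D Haar DWT, the simplest wavelet used in DWT-based image compression, written in Python.

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar DWT: a 1-D transform along rows,
    then along columns, yielding four quarter-size subbands."""
    a = img.astype(np.float64)
    lo_r = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    hi_r = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    ll = (lo_r[0::2, :] + lo_r[1::2, :]) / np.sqrt(2)   # approximation
    lh = (lo_r[0::2, :] - lo_r[1::2, :]) / np.sqrt(2)   # horizontal detail
    hl = (hi_r[0::2, :] + hi_r[1::2, :]) / np.sqrt(2)   # vertical detail
    hh = (hi_r[0::2, :] - hi_r[1::2, :]) / np.sqrt(2)   # diagonal detail
    return ll, lh, hl, hh

img = np.arange(64, dtype=np.float64).reshape(8, 8)
ll, lh, hl, hh = haar_dwt2(img)
print(ll.shape)  # (4, 4): each subband is a quarter of the input
```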
Color quantization is an important operation with many applications in graphics and image processing. Most quantization methods are essentially based on data clustering algorithms. However, despite its popularity as a general purpose clustering algorithm, k-means has not received much respect in the color quantization literature because of its high computational requirements and sensitivity to initialization. In this paper, we investigate the performance of k-means as a color quantizer. We implement fast and exact variants of k-means with several initialization schemes and then compare the resulting quantizers to some of the most popular quantizers in the literature. Experiments on a diverse set of images demonstrate that an efficient implementation of k-means with an appropriate initialization strategy can in fact serve as a very effective color quantizer. | Improving the Performance of K-Means for Color Quantization | 9,896 |
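A minimal k-means color quantizer along these lines, using scikit-learn's k-means++ initialization as one example of the initialization schemes the paper compares, might look like this; k = 16 and the random test image are arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize(image, k=16, seed=0):
    """Quantize an RGB image (H x W x 3, uint8) to k colors with k-means,
    using scikit-learn's default k-means++ initialization."""
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3).astype(np.float64)
    km = KMeans(n_clusters=k, n_init=1, random_state=seed).fit(pixels)
    palette = km.cluster_centers_.round().astype(np.uint8)
    return palette[km.labels_].reshape(h, w, 3)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(np.unique(quantize(img).reshape(-1, 3), axis=0).shape[0])  # <= 16
```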
We present a novel method of simulating wave effects in graphics using ray-based renderers with a new function: the Wave BSDF (Bidirectional Scattering Distribution Function). Reflections from neighboring surface patches represented by local BSDFs are mutually independent. However, in many surfaces with wavelength-scale microstructures, interference and diffraction require a joint analysis of reflected wavefronts from neighboring patches. We demonstrate a simple method to compute the BSDF for the entire microstructure, which can be used independently for each patch. This allows us to use traditional ray-based rendering pipelines to synthesize wave effects of light and sound. We exploit the Wigner Distribution Function (WDF) to create transmissive, reflective, and emissive BSDFs for various diffraction phenomena in a physically accurate way. In contrast to previous methods for computing interference, we circumvent the need to explicitly keep track of the phase of the wave by using BSDFs that include positive as well as negative coefficients. We describe and compare the theory in relation to well understood concepts in rendering and demonstrate a straightforward implementation. In conjunction with standard raytracers, such as PBRT, we demonstrate wave effects for a range of scenarios such as multi-bounce diffraction materials, holograms and reflection of high frequency surfaces. | Ray-Based Reflectance Model for Diffraction | 9,897
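As a concrete instance of why interference demands a joint analysis of neighboring apertures, the sketch below evaluates the textbook Fraunhofer double-slit intensity, whose cos² interference term cannot be obtained by treating the slits independently. The wavelength and slit geometry are arbitrary example values; this is not the paper's WDF machinery.

```python
import numpy as np

# Far-field (Fraunhofer) double-slit intensity: a joint interference term
# cos^2(pi*d*sin(theta)/lam) modulates the single-slit envelope -- the
# pattern is not the sum of the two slits' individual intensities.
lam = 550e-9          # wavelength (green light); arbitrary example values
a, d = 2e-6, 8e-6     # slit width and slit separation
theta = np.linspace(-0.2, 0.2, 2001)

envelope = np.sinc(a * np.sin(theta) / lam) ** 2   # np.sinc(x) = sin(pi x)/(pi x)
interference = np.cos(np.pi * d * np.sin(theta) / lam) ** 2
intensity = envelope * interference
print(float(intensity.max()))  # normalized to 1 at theta = 0
```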
Gliomas are the most common primary brain tumors, evolving from the cerebral supportive cells. For clinical follow-up, the evaluation of the preoperative tumor volume is essential. Volumetric assessment of the tumor by manual segmentation of its outlines is a time-consuming process that can be overcome with the help of computer-assisted segmentation methods. In this paper, a semi-automatic approach for World Health Organization (WHO) grade IV glioma segmentation is introduced that uses balloon inflation forces and relies on the detection of high-intensity tumor boundaries, which are enhanced by the contrast agent gadolinium. The presented method is evaluated on 27 magnetic resonance imaging (MRI) data sets, and the ground-truth tumor boundaries used for evaluating the results are manually extracted by neurosurgeons. | Glioblastoma Multiforme Segmentation in MRI Data with a Balloon
Inflation Approach | 9,898 |
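The paper's deformable-model formulation is not reproduced here; as a toy illustration of the balloon-inflation intuition (growth that stops at the bright gadolinium-enhanced boundary), consider the following intensity-constrained region growing in Python. All thresholds and the synthetic volume are invented for the example; this is not the paper's actual method.

```python
import numpy as np
from scipy import ndimage

def balloon_grow(vol, seed, lo, hi, iters=200):
    """Toy 'inflation' segmentation: starting from a seed voxel, repeatedly
    dilate the region, keeping only voxels whose intensity lies in
    [lo, hi], so growth stops at the bright contrast-enhanced boundary."""
    mask = np.zeros(vol.shape, dtype=bool)
    mask[seed] = True
    allowed = (vol >= lo) & (vol <= hi)
    for _ in range(iters):
        grown = ndimage.binary_dilation(mask) & allowed
        if (grown == mask).all():
            break                 # converged: the boundary stopped growth
        mask = grown
    return mask

# Synthetic example: a dark sphere enclosed by a bright rim.
z, y, x = np.ogrid[:40, :40, :40]
r = np.sqrt((z - 20)**2 + (y - 20)**2 + (x - 20)**2)
vol = np.where(r < 10, 50, np.where(r < 12, 255, 100)).astype(float)
print(balloon_grow(vol, (20, 20, 20), lo=0, hi=80).sum())
```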
Simulation of human motion is the subject of study in a number of disciplines: Biomechanics, Robotics, Computer Animation, Control Theory, Neurophysiology, Medicine, Ergonomics. Since the author has never visited any of these fields, this review is indeed a passer-by's impression. On the other hand, he happens to be a human (who occasionally is moving) and, like everybody else, rates himself an expert in Applied Common Sense. Thus the author hopes that this view from the {\em outside} will be of some interest not only for the strangers like himself, but for those who are {\em inside} as well. Two flaws of the text that follows are inevitable. First, some essential issues that are too familiar to the specialists to be discussed may be missing. Second, the author probably failed to provide a uniform "level-of-detail" for this wide range of topics. | Mathematics of Human Motion: from Animation towards Simulation (A View
from the Outside) | 9,899