Columns: aid (string, 9–15 chars), mid (string, 7–10 chars), abstract (string, 78–2.56k chars), related_work (string, 92–1.77k chars), ref_abstract (dict).
1507.04314
2197211043
Community-based question answering platforms can be rich sources of information on a variety of specialized topics, from finance to cooking. The usefulness of such platforms depends heavily on user contributions (questions and answers), but also on respecting the community rules. As a crowd-sourced service, such platforms rely on their users for monitoring and flagging content that violates community rules. Common wisdom is to eliminate the users who receive many flags. Our analysis of a year of traces from a mature Q&A site shows that the number of flags does not tell the full story: on one hand, users with many flags may still contribute positively to the community. On the other hand, users who never get flagged are found to violate community rules and get their accounts suspended. This analysis, however, also shows that abusive users are betrayed by their network properties: we find strong evidence of homophilous behavior and use this finding to detect abusive users who go under the community radar. Based on our empirical observations, we build a classifier that is able to detect abusive users with an accuracy as high as 83%.
As for applications, research has proposed effective ways of recommending questions to the most appropriate answerers @cite_19 @cite_27 , of automatically answering questions based on past answers @cite_24 , and of retrieving factual answers @cite_15 or factual bits within an answer @cite_14 .
{ "cite_N": [ "@cite_14", "@cite_24", "@cite_19", "@cite_27", "@cite_15" ], "mid": [ "1991872311", "2151280665", "2164491644", "2171258484", "2126776599" ], "abstract": [ "We investigate the problem of mining \"tips\" from Yahoo! Answers and displaying those tips in response to related web queries. Here, a \"tip\" is a short, concrete and self-contained bit of non-obvious advice such as \"To zest a lime if you don't have a zester : use a cheese grater.\" First, we estimate the volume of web queries with \"how-to\" intent, which could be potentially addressed by a tip. Second, we analyze how to detect such queries automatically without solely relying on literal \"how to *\" patterns. Third, we describe how to derive potential tips automatically from Yahoo! Answers, and we develop machine-learning techniques to remove low-quality tips. Finally, we discuss how to match web queries with \"how-to\" intent to tips. We evaluate both the quality of these direct displays as well as the size of the query volume that can be addressed by serving tips.", "Community-based Question Answering sites, such as Yahoo! Answers or Baidu Zhidao, allow users to get answers to complex, detailed and personal questions from other users. However, since answering a question depends on the ability and willingness of users to address the asker's needs, a significant fraction of the questions remain unanswered. We measured that in Yahoo! Answers, this fraction represents 15% of all incoming English questions. At the same time, we discovered that around 25% of questions in certain categories are recurrent, at least at the question-title level, over a period of one year. We attempt to reduce the rate of unanswered questions in Yahoo! Answers by reusing the large repository of past resolved questions, openly available on the site.
More specifically, we estimate the probability whether certain new questions can be satisfactorily answered by a best answer from the past, using a statistical model specifically trained for this task. We leverage concepts and methods from query-performance prediction and natural language processing in order to extract a wide range of features for our model. The key challenge here is to achieve a level of quality similar to the one provided by the best human answerers. We evaluated our algorithm on offline data extracted from Yahoo! Answers, but more interestingly, also on online data by using three \"live\" answering robots that automatically provide past answers to new questions when a certain degree of confidence is reached. We report the success rate of these robots in three active Yahoo! Answers categories in terms of both accuracy, coverage and askers' satisfaction. This work presents a first attempt, to the best of our knowledge, of automatic question answering to questions of social nature, by reusing past answers of high quality.", "User-Interactive Question Answering (QA) communities such as Yahoo! Answers are growing in popularity. However, as these QA sites always have thousands of new questions posted daily, it is difficult for users to find the questions that are of interest to them. Consequently, this may delay the answering of the new questions. This gives rise to question recommendation techniques that help users locate interesting questions. In this paper, we adopt the Probabilistic Latent Semantic Analysis (PLSA) model for question recommendation and propose a novel metric to evaluate the performance of our approach. The experimental results show our recommendation approach is effective.", "What makes a good question recommendation system for community question-answering sites? First, to maintain the health of the ecosystem, it needs to be designed around answerers, rather than exclusively for askers. 
Next, it needs to scale to many questions and users, and be fast enough to route a newly-posted question to potential answerers within the few minutes before the asker's patience runs out. It also needs to show each answerer questions that are relevant to his or her interests. We have designed and built such a system for Yahoo! Answers, but realized, when testing it with live users, that it was not enough. We found that those drawing-board requirements fail to capture users' interests. The feature that they really missed was diversity. In other words, showing them just the main topics they had previously expressed interest in was simply too dull. Adding the spice of topics slightly outside the core of their past activities significantly improved engagement. We conducted a large-scale online experiment in production in Yahoo! Answers that showed that recommendations driven by relevance alone perform worse than a control group without question recommendations, which is the current behavior. However, an algorithm promoting both diversity and freshness improved the number of answers by 17%, daily session length by 10%, and had a significant positive impact on peripheral activities such as voting.
In particular, as any user can contribute an answer to a question, the majority of the content reflects personal, often unsubstantiated opinions. A ranking that combines both relevance and quality is required to make such archives usable for factual information retrieval. This task is challenging, as the structure and the contents of community QA archives differ significantly from the web setting. To address this problem we present a general ranking framework for factual information retrieval from social media. Results of a large scale evaluation demonstrate that our method is highly effective at retrieving well-formed, factual answers to questions, as evaluated on a standard factoid QA benchmark. We also show that our learning framework can be tuned with the minimum of manual labeling. Finally, we provide result analysis to gain deeper understanding of which features are significant for social media search and retrieval. Our system can be used as a crucial building block for combining results from a variety of social media content with general web search results, and to better integrate social media content for effective information access." ] }
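The homophily-based detection idea in the abstract above can be illustrated with a minimal sketch. This is not the paper's actual feature set or classifier: the interaction graph, the flag set, and the 0.5 decision threshold below are all hypothetical.

```python
# Minimal sketch (not the paper's pipeline): score users by the flagged
# fraction of their neighbours, the homophily signal described above.
# The graph, flag set, and 0.5 threshold are all hypothetical.
from collections import defaultdict

def build_graph(edges):
    """Undirected adjacency list from (u, v) interaction pairs."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def flagged_neighbour_fraction(adj, flagged, user):
    """Homophily feature: share of a user's neighbours who are flagged."""
    neigh = adj.get(user, set())
    if not neigh:
        return 0.0
    return sum(1 for n in neigh if n in flagged) / len(neigh)

def predict_abusive(adj, flagged, user, threshold=0.5):
    """Flag a user whose neighbourhood is mostly flagged accounts."""
    return flagged_neighbour_fraction(adj, flagged, user) >= threshold
```

In a real system this single feature would be one input among many to a trained classifier, which is what lets the approach catch abusers who go under the community radar.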
Qualitative and quantitative studies of bad behavior in online settings have been conducted before, including in newsgroups @cite_1 , online chat communities @cite_4 , and online multiplayer video games @cite_25 . A body of work also investigates the impact of this bad behavior. Researchers find that bad behavior has negative effects on the community and its members: it decreases the community's cohesion @cite_18 , performance @cite_2 and participation @cite_30 . In the worst case, users who are the targets of bad behavior may leave or avoid online social spaces @cite_30 .
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_4", "@cite_1", "@cite_2", "@cite_25" ], "mid": [ "", "2117059101", "2088612179", "1978976907", "2051811470", "1964767137" ], "abstract": [ "", "This study investigated the effect of a single work group deviant on other group members' perceptions of the deviant, and their perceptions of the cohesiveness of the group as a whole. Group members, particularly those high in perceived self-typicality, were expected to downgrade the deviant, and view groups containing a deviant as less cohesive. Undergraduate management students were placed in a simulated organizational context in which deviance was manipulated so that the participant's work group contained either a single negative deviant or no deviant. Results showed that the deviant colleague was judged less favorably than the normative colleague, particularly by those high in perceived self-typicality. Groups that contained a deviant were perceived as having lower levels of task cohesion, but ratings of social cohesion varied depending on perceivers' self-typicality. The findings suggest that as well as attracting negative evaluations, deviant group members can adversely affect group cohesion.", "A wide variety of deviant behavior may arise as the population of an online multimedia community increases. That behavior can span the range from simple mischievous antics to more serious expressions of psychopathology, including depression, sociopathy, narcissism, dissociation, and borderline dynamics. In some cases the deviant behavior may be a process of pathological acting out—in others, a healthy attempt to work through. Several factors must be taken into consideration when explaining online deviance, such as social cultural issues, the technical infrastructure of the environment, transference reactions, and the effects of the ambiguous, anonymous, and fantasy-driven atmosphere of cyberspace life. 
In what we may consider an \"online community psychology,\" intervention strategies for deviant behavior can be explored along three dimensions: preventative versus remedial, user versus superuser based, and automated versus interpersonal.", "This article is an account of a Usenet newsgroup whose participants, in response to a perceived ''invasion'' of ''barbarians,'' explored and articulated the value of the group, the nature of the crisis facing it, and the strategies available to meet the crisis. The newsgroup facilitated political and personal support for some gay, lesbian, or bisexual men and women. The primary threat to the group was the increasing number of newcomers who were oblivious to established norms, who tended to view access to the group as a commodity, and who attempted to impose ''outside'' paradigms on the operations of the group. Defensive strategies involved calling on rhetorical devices (such as flaming or ostracism) or structural resources (such as employers, network operators, or lawsuits). All strategies had the potential to backfire, but rhetorical strategies were less risky, more available, and more community affirming than strategies requiring access to structural resources. Through this account, the article addresse...", "The influences of organizational citizenship behavior (OCB) and workplace deviant behavior (WDB) on business unit performance were investigated using data from branches of a fast food organization. Data included measures of WDB and OCB obtained from staff, ratings of performance provided by supervisors, and objective measures of performance. It was found that WDB was negatively and significantly associated with business unit performance measured both subjectively and objectively. OCB, however, failed to contribute to the prediction of business unit performance beyond the level that was achieved by WDB. 
It appeared, therefore, that the presence of deviant employees among business units impinges upon the performance of the business unit as a whole, whereas OCBs had comparatively little effect. Copyright © 2004 John Wiley & Sons, Ltd.", "Online gaming is a multi-billion dollar industry that entertains a large, global population. One unfortunate phenomenon, however, poisons the competition and the fun: cheating. The costs of cheating span from industry-supported expenditures to detect and limit cheating, to victims' monetary losses due to cyber crime. This paper studies cheaters in the Steam Community, an online social network built on top of the world's dominant digital game delivery platform. We collected information about more than 12 million gamers connected in a global social network, of which more than 700 thousand have their profiles flagged as cheaters. We also collected in-game interaction data of over 10 thousand players from a popular multiplayer gaming server. We show that cheaters are well embedded in the social and interaction networks: their network position is largely indistinguishable from that of fair players. We observe that the cheating behavior appears to spread through a social mechanism: the presence and the number of cheater friends of a fair player is correlated with the likelihood of her becoming a cheater in the future. Also, we observe that there is a social penalty involved with being labeled as a cheater: cheaters are likely to switch to more restrictive privacy settings once they are tagged and they lose more friends than fair players. Finally, we observe that the number of cheaters is not correlated with the geographical, real-world population density, or with the local popularity of the Steam Community." ] }
The communication networks behind CQA sites have recently been studied. More specifically, researchers have explored the relationship between content quality and network properties such as the number of followers @cite_6 and tie strength @cite_16 .
{ "cite_N": [ "@cite_16", "@cite_6" ], "mid": [ "2118557062", "1730818938" ], "abstract": [ "Asking friends, colleagues, or other trusted people to help answer a question or find information is a familiar and tried-and-true concept. Widespread use of online social networks has made social information seeking easier, and has provided researchers with opportunities to better observe this process. In this paper, we relate question answering to tie strength, a metric drawn from sociology describing how close a friendship is. We present a study evaluating the role of tie strength in question answers. We used previous research on tie strength in social media to generate tie strength information between participants and their answering friends, and asked them for feedback about the value of answers across several dimensions. While sociological studies have indicated that weak ties are able to provide better information, our findings are significant in that weak ties do not have this effect, and stronger ties (close friends) provide a subtle increase in information that contributes more to participants' overall knowledge, and is less likely to have been seen before.", "Efforts such as Wikipedia have shown the ability of user communities to collect, organize and curate information on the Internet. Recently, a number of question and answer (Q&A) sites have successfully built large growing knowledge repositories, each driven by a wide range of questions and answers from its users community. While sites like Yahoo Answers have stalled and begun to shrink, one site still going strong is Quora, a rapidly growing service that augments a regular Q&A system with social links between users. Despite its success, however, little is known about what drives Quora's growth, and how it continues to connect visitors and experts to the right questions as it grows. In this paper, we present results of a detailed analysis of Quora using measurements. 
We shed light on the impact of three different connection networks (or graphs) inside Quora, a graph connecting topics to users, a social graph connecting users, and a graph connecting related questions. Our results show that heterogeneity in the user and question graphs are significant contributors to the quality of Quora's knowledge base. One drives the attention and activity of users, and the other directs them to a small set of popular and interesting questions." ] }
1507.04760
2293291104
Automated estimation of the allocation of a driver's visual attention may be a critical component of future Advanced Driver Assistance Systems. In theory, vision-based tracking of the eye can provide a good estimate of gaze location. In practice, eye tracking from video is challenging because of sunglasses, eyeglass reflections, lighting conditions, occlusions, motion blur, and other factors. Estimation of head pose, on the other hand, is robust to many of these effects, but cannot provide as fine-grained a resolution in localizing the gaze. However, for the purpose of keeping the driver safe, it is sufficient to partition gaze into regions. In this effort, we propose a system that extracts facial features and classifies their spatial configuration into six regions in real-time. Our proposed method achieves an average accuracy of 91.4% at an average decision rate of 11 Hz on a dataset of 50 drivers from an on-road study.
The algorithm in @cite_3 uses an ensemble of regression trees for super-real-time face alignment. Our face feature extraction algorithm draws upon this method, as it builds on a decade of progress on the face alignment problem (see @cite_3 for a survey of this literature). The key contribution of the algorithm is an iterative transform of the image to a normalized coordinate system based on the current estimate of the face shape. Also, to avoid the non-convex problem of initially matching a model of the shape to the image data, the assumption is made that the initial estimate of the shape can be found in a linear subspace.
{ "cite_N": [ "@cite_3" ], "mid": [ "2087681821" ], "abstract": [ "This paper addresses the problem of Face Alignment for a single image. We show how an ensemble of regression trees can be used to estimate the face's landmark positions directly from a sparse subset of pixel intensities, achieving super-realtime performance with high quality predictions. We present a general framework based on gradient boosting for learning an ensemble of regression trees that optimizes the sum of square error loss and naturally handles missing or partially labelled data. We show how using appropriate priors exploiting the structure of image data helps with efficient feature selection. Different regularization strategies and its importance to combat overfitting are also investigated. In addition, we analyse the effect of the quantity of training data on the accuracy of the predictions and explore the effect of data augmentation using synthesized data." ] }
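The iterative, cascaded structure described above can be sketched numerically. Real stages in @cite_3 are gradient-boosted ensembles of regression trees over shape-indexed pixel features; the `make_stage` helper below is a hypothetical constant-step stand-in used only to show the additive-update cascade.

```python
# Toy illustration of the cascade idea in @cite_3: each stage regresses
# an additive update to the running shape estimate. Real stages are
# boosted regression-tree ensembles; make_stage is a hypothetical
# constant-step stand-in.

def cascade_align(initial_shape, stages):
    """Apply each stage's additive update to the current shape estimate."""
    shape = list(initial_shape)
    for stage in stages:
        update = stage(shape)
        shape = [s + u for s, u in zip(shape, update)]
    return shape

def make_stage(target, step=0.5):
    """Hypothetical stage: move a fixed fraction toward a known target."""
    def stage(shape):
        return [step * (t - s) for t, s in zip(target, shape)]
    return stage
```

Because each stage sees features computed relative to the current estimate, the cascade refines the shape toward the truth without ever solving a global, non-convex matching problem.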
Head pose estimation has a long history in computer vision. Murphy-Chutorian and Trivedi @cite_5 describe 74 published and tested systems from the last two decades. Generally, each approach makes one of several assumptions that limit the general applicability of the system in driver state detection. These assumptions include: (1) the video is continuous, (2) the initial pose of the subject is known, (3) there is a stereo vision system available, (4) the camera has a frontal view of the face, (5) the head can only rotate on one axis, (6) the system only has to work for one person. While the development of a set of assumptions is often necessary for the classification of a large number of possible poses, our approach skips the head pose estimation step (i.e., the computation of a vector in 3D space modeling the orientation of the head) and goes straight from the detection of facial features to a classification of gaze to one of six glance regions. We believe that such a classification set is sufficient for the in-vehicle environment where the overarching goal is to assess if the driver is distracted or inattentive to the driving context.
{ "cite_N": [ "@cite_5" ], "mid": [ "2149382413" ], "abstract": [ "The capacity to estimate the head pose of another person is a common human ability that presents a unique challenge for computer vision systems. Compared to face detection and recognition, which have been the primary foci of face-related vision research, identity-invariant head pose estimation has fewer rigorously evaluated systems or generic solutions. In this paper, we discuss the inherent difficulties in head pose estimation and present an organized survey describing the evolution of the field. Our discussion focuses on the advantages and disadvantages of each approach and spans 90 of the most innovative and characteristic papers that have been published on this topic. We compare these systems by focusing on their ability to estimate coarse and fine head pose, highlighting approaches that are well suited for unconstrained environments." ] }
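A minimal sketch of the classification step described above: mapping a facial-feature vector straight to a glance region, with no intermediate 3-D head-pose vector. The region names follow the six-region setup, but the 2-D feature space, the nearest-centroid rule, and all centroid values are invented for illustration.

```python
# Sketch of the final classification step described above: map a facial-
# feature vector straight to a glance region, skipping explicit 3-D head
# pose. The feature space and all centroid values are invented.

REGIONS = ["road", "left_mirror", "right_mirror",
           "rearview", "instrument_cluster", "center_stack"]

def nearest_region(features, centroids):
    """Assign the glance region whose centroid lies closest to the
    extracted feature vector (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda r: dist2(features, centroids[r]))
```

In practice a discriminative classifier trained on the spatial configuration of the extracted features would replace this toy nearest-centroid rule, but the interface is the same: features in, one of six regions out.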
Video-based pupil detection and eye tracking approaches have been extensively studied. The main pattern recognition approaches combine one or more features (corneal reflection, distinct pupil shape in combination with edge-detection, characteristic light intensity of the pupil, and a 3D model of the eye) to derive an estimate of an individual's pupil, iris, or eye position @cite_9 . In practice, for many of the reasons discussed earlier, eye tracking in the vehicle context, even for the experimental assessment of driver behavior, is often inaccurate. Our approach focuses on the head as the proxy for classifying broad regions of eye movement to provide a mechanism for real-time driver state estimation while facilitating a more economical method of assessing driver behavior in experimental settings during design assessment and safety validation.
{ "cite_N": [ "@cite_9" ], "mid": [ "2108045700" ], "abstract": [ "Eye-gaze detection and tracking have been an active research field in the past years as it adds convenience to a variety of applications. It is considered a significant untraditional method of human computer interaction. Head movement detection has also received researchers' attention and interest as it has been found to be a simple and effective interaction method. Both technologies are considered the easiest alternative interface methods. They serve a wide range of severely disabled people who are left with minimal motor abilities. For both eye tracking and head movement detection, several different approaches have been proposed and used to implement different algorithms for these technologies. Despite the amount of research done on both technologies, researchers are still trying to find robust methods to use effectively in various applications. This paper presents a state-of-art survey for eye tracking and head movement detection methods proposed in the literature. Examples of different fields of applications for both technologies, such as human-computer interaction, driving assistance systems, and assistive technologies are also investigated." ] }
1507.03928
2951899404
Automatic query reformulation refers to rewriting a user's original query in order to improve the ranking of retrieval results compared to the original query. We present a general framework for automatic query reformulation based on discrete optimization. Our approach, referred to as pseudo-query reformulation, treats automatic query reformulation as a search problem over the graph of unweighted queries linked by minimal transformations (e.g. term additions, deletions). This framework allows us to test existing performance prediction methods as heuristics for the graph search process. We demonstrate the effectiveness of the approach on several publicly available datasets.
Kurland et al. present several heuristics for iteratively refining a language model query by navigating document clusters in a retrieval system @cite_19 . The technique leverages specialized data structures storing document clusters derived from large scale corpus analysis. While related, the solution proposed by these authors violates assumptions in our problem definition. First, their solution assumes weighted language model style queries not supported by backends in our scenario. Second, their solution assumes access to the entire corpus as opposed to a search API.
{ "cite_N": [ "@cite_19" ], "mid": [ "2949069903" ], "abstract": [ "We present a novel approach to pseudo-feedback-based ad hoc retrieval that uses language models induced from both documents and clusters. First, we treat the pseudo-feedback documents produced in response to the original query as a set of pseudo-queries that themselves can serve as input to the retrieval process. Observing that the documents returned in response to the pseudo-queries can then act as pseudo-queries for subsequent rounds, we arrive at a formulation of pseudo-query-based retrieval as an iterative process. Experiments show that several concrete instantiations of this idea, when applied in conjunction with techniques designed to heighten precision, yield performance results rivaling those of a number of previously-proposed algorithms, including the standard language-modeling approach. The use of cluster-based language models is a key contributing factor to our algorithms' success." ] }
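The graph-search formulation in the abstract above can be sketched as a greedy hill-climb over unweighted queries. The `predict` callable stands in for the query-performance-prediction heuristics the paper evaluates; the toy scoring function in the test is invented purely for illustration.

```python
# Sketch of pseudo-query reformulation as greedy graph search: neighbours
# of a query differ by one term added or dropped, and a (hypothetical)
# performance predictor serves as the search heuristic.

def neighbours(query, vocabulary):
    """Queries one edit away: drop one term, or add one unused term."""
    terms = query.split()
    out = []
    for i in range(len(terms)):                      # deletions
        out.append(" ".join(terms[:i] + terms[i + 1:]))
    for t in vocabulary:                             # additions
        if t not in terms:
            out.append(" ".join(terms + [t]))
    return [q for q in out if q]

def reformulate(query, vocabulary, predict, steps=3):
    """Greedy hill-climb on predicted retrieval quality."""
    best = query
    for _ in range(steps):
        candidates = neighbours(best, vocabulary) + [best]
        nxt = max(candidates, key=predict)
        if predict(nxt) <= predict(best):
            break
        best = nxt
    return best
```

Swapping in beam search or a different expansion vocabulary changes the explored region of the query graph but not the overall framework.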
1507.03857
1922029667
This paper considers probabilistic estimation of a low-rank matrix from non-linear element-wise measurements of its elements. We derive the corresponding approximate message passing (AMP) algorithm and its state evolution. Relying on non-rigorous but standard assumptions motivated by statistical physics, we characterize the minimum mean squared error (MMSE) achievable information theoretically and with the AMP algorithm. Unlike in related problems of linear estimation, in the present setting the MMSE depends on the output channel only through a single parameter - its Fisher information. We illustrate this striking finding by analysis of submatrix localization, and of detection of communities hidden in a dense stochastic block model. For this example we locate the computational and statistical boundaries, which are not equal for rank larger than four.
As far as we know, the only tool that provides an asymptotically exact analysis of the minimal mean squared error of the models considered here is approximate message passing with its state evolution, as deployed in the present paper. For the output channel being additive Gaussian noise this was done previously in @cite_7 @cite_5 for rank @math , with part of the results being fully rigorous, in @cite_22 for generic rank and without the state evolution, and in @cite_24 for general rank with the state evolution, but non-rigorously. The main contribution of this paper is the treatment of the case of a general non-linear output channel @math .
{ "cite_N": [ "@cite_24", "@cite_5", "@cite_22", "@cite_7" ], "mid": [ "1700527341", "2964282092", "2097538064", "2040969041" ], "abstract": [ "We study optimal estimation for sparse principal component analysis when the number of non-zero elements is small but on the same order as the dimension of the data. We employ approximate message passing (AMP) algorithm and its state evolution to analyze what is the information theoretically minimal mean-squared error and the one achieved by AMP in the limit of large sizes. For a special case of rank one and large enough density of non-zeros Deshpande and Montanari [1] proved that AMP is asymptotically optimal. We show that both for low density and for large rank the problem undergoes a series of phase transitions suggesting existence of a region of parameters where estimation is information theoretically possible, but AMP (and presumably every other polynomial algorithm) fails. The analysis of the large rank limit is particularly instructive.", "Sparse Principal Component Analysis (PCA) is a dimensionality reduction technique wherein one seeks a lowrank representation of a data matrix with additional sparsity constraints on the obtained representation. We consider two probabilistic formulations of sparse PCA: a spiked Wigner and spiked Wishart (or spiked covariance) model. We analyze an Approximate Message Passing (AMP) algorithm to estimate the underlying signal and show, in the high dimensional limit, that the AMP estimates are information-theoretically optimal. As an immediate corollary, our results demonstrate that the posterior expectation of the underlying signal, which is often intractable to compute, can be obtained using a polynomial-time scheme. Our results also effectively provide a single-letter characterization of the sparse PCA problem.", "We study the problem of reconstructing low-rank matrices from their noisy observations. 
We formulate the problem in the Bayesian framework, which allows us to exploit structural properties of matrices in addition to low-rankedness, such as sparsity. We propose an efficient approximate message passing algorithm, derived from the belief propagation algorithm, to perform the Bayesian inference for matrix reconstruction. We have also successfully applied the proposed algorithm to a clustering problem, by reformulating it as a low-rank matrix reconstruction problem with an additional structural property. Numerical experiments show that the proposed algorithm outperforms Lloyd's K-means algorithm.", "We consider the problem of estimating a rank-one matrix in Gaussian noise under a probabilistic model for the left and right factors of the matrix. The probabilistic model can impose constraints on the factors including sparsity and positivity that arise commonly in learning problems. We propose a simple iterative procedure that reduces the problem to a sequence of scalar estimation computations. The method is similar to approximate message passing techniques based on Gaussian approximations of loopy belief propagation that have been used recently in compressed sensing. Leveraging analysis methods by Bayati and Montanari, we show that the asymptotic behavior of the estimates from the proposed iterative procedure is described by a simple scalar equivalent model, where the distribution of the estimates is identical to certain scalar estimates of the variables in Gaussian noise. Moreover, the effective Gaussian noise level is described by a set of state evolution equations. The proposed method thus provides a computationally simple and general method for rank-one estimation problems with a precise analysis in certain high-dimensional settings." ] }
1507.03857
1922029667
This paper considers probabilistic estimation of a low-rank matrix from non-linear element-wise measurements of its elements. We derive the corresponding approximate message passing (AMP) algorithm and its state evolution. Relying on non-rigorous but standard assumptions motivated by statistical physics, we characterize the minimum mean squared error (MMSE) achievable information theoretically and with the AMP algorithm. Unlike in related problems of linear estimation, in the present setting the MMSE depends on the output channel only through a single parameter - its Fisher information. We illustrate this striking finding by analysis of submatrix localization, and of detection of communities hidden in a dense stochastic block model. For this example we locate the computational and statistical boundaries that are not equal for rank larger than four.
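As a concrete illustration of the kind of algorithm analyzed in this line of work, the following is a minimal sketch (our own, in Python with NumPy, not code from any cited paper) of Bayes-optimal AMP for the rank-one spiked Wigner model with additive Gaussian noise and a +-1 prior. The tanh denoiser and the Onsager correction follow the standard formulation; the weakly informative initialization and the parameter values are simplifications for illustration.

```python
import numpy as np

def z2_amp(Y, lam, x0, iters=50):
    """Bayes-optimal AMP for the spiked Wigner model
    Y = (lam/n) x x^T + W,  W ~ GOE(n),  x_i = +-1.
    Scalar denoiser f(y) = tanh(lam*y); b_t is the Onsager term."""
    n = Y.shape[0]
    x_t, f_prev = x0.astype(float), np.zeros(n)
    for _ in range(iters):
        f_t = np.tanh(lam * x_t)
        b_t = lam * np.mean(1.0 - f_t ** 2)   # (1/n) sum_i f'(x_i^t)
        x_t = Y @ f_t - b_t * f_prev          # matrix step plus Onsager correction
        f_prev = f_t
    return np.tanh(lam * x_t)                 # posterior-mean estimate in [-1, 1]

rng = np.random.default_rng(0)
n, lam = 1500, 2.0
x = rng.choice([-1.0, 1.0], size=n)
G = rng.normal(size=(n, n)) / np.sqrt(n)
W = (G + G.T) / np.sqrt(2.0)                  # GOE: off-diagonal variance 1/n
Y = (lam / n) * np.outer(x, x) + W
xhat = z2_amp(Y, lam, x0=0.1 * x + rng.normal(size=n))  # weakly informative start
overlap = abs(xhat @ x) / n                   # large above the lam > 1 threshold
```

The associated state evolution tracks a single scalar, gamma_{t+1} = lam^2 * E[tanh^2(gamma_t + sqrt(gamma_t) Z)] with Z standard Gaussian, whose fixed point gives the asymptotic overlap and hence the MMSE.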
Approximate message passing and state evolution for a generic output channel @math were derived previously in the context of linear estimation @cite_11 , and later in matrix factorization with @math @cite_19 . In both cases the resulting equations (both the AMP and the state evolution) are considerably more involved than those for additive Gaussian noise. In the setting of the low-rank models -) above the situation is remarkably simpler: the AMP algorithm stays the same up to replacing the matrix @math by the so-called Fisher score matrix @math , which depends element-wise on the output channel and on @math , and up to the inverse of the Fisher information of the channel, which we denote @math and which plays the role of an effective noise variance. In the state evolution the situation is even simpler, in the sense that only the effective noise value @math appears.
{ "cite_N": [ "@cite_19", "@cite_11" ], "mid": [ "1959879694", "2166670884" ], "abstract": [ "We analyze the matrix factorization problem. Given a noisy measurement of a product of two matrices, the problem is to estimate back the original matrices. It arises in many applications, such as dictionary learning, blind matrix calibration, sparse principal component analysis, blind source separation, low rank matrix completion, robust principal component analysis, or factor analysis. It is also important in machine learning: unsupervised representation learning can often be studied through matrix factorization. We use the tools of statistical mechanics—the cavity and replica methods—to analyze the achievability and computational tractability of the inference problems in the setting of Bayes-optimal inference, which amounts to assuming that the two matrices have random-independent elements generated from some known distribution, and this information is available to the inference algorithm. In this setting, we compute the minimal mean-squared-error achievable, in principle, in any computational time, and the error that can be achieved by an efficient approximate message passing algorithm. The computation is based on the asymptotic state-evolution analysis of the algorithm. The performance that our analysis predicts, both in terms of the achieved mean-squared-error and in terms of sample complexity, is extremely promising and motivating for a further development of the algorithm.", "We consider the estimation of a random vector observed through a linear transform followed by a componentwise probabilistic measurement channel. Although such linear mixing estimation problems are generally highly non-convex, Gaussian approximations of belief propagation (BP) have proven to be computationally attractive and highly effective in a range of applications. 
Recently, Bayati and Montanari have provided a rigorous and extremely general analysis of a large class of approximate message passing (AMP) algorithms that includes many Gaussian approximate BP methods. This paper extends their analysis to a larger class of algorithms to include what we call generalized AMP (G-AMP). G-AMP incorporates general (possibly non-AWGN) measurement channels. Similar to the AWGN output channel case, we show that the asymptotic behavior of the G-AMP algorithm under large i.i.d. Gaussian transform matrices is described by a simple set of state evolution (SE) equations. The general SE equations recover and extend several earlier results, including SE equations for approximate BP on general output channels by Guo and Wang." ] }
1507.03857
1922029667
This paper considers probabilistic estimation of a low-rank matrix from non-linear element-wise measurements of its elements. We derive the corresponding approximate message passing (AMP) algorithm and its state evolution. Relying on non-rigorous but standard assumptions motivated by statistical physics, we characterize the minimum mean squared error (MMSE) achievable information theoretically and with the AMP algorithm. Unlike in related problems of linear estimation, in the present setting the MMSE depends on the output channel only through a single parameter - its Fisher information. We illustrate this striking finding by analysis of submatrix localization, and of detection of communities hidden in a dense stochastic block model. For this example we locate the computational and statistical boundaries that are not equal for rank larger than four.
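To make the reduction to a Fisher score matrix and an effective noise concrete, here is a hedged sketch (our own illustrative example) for a simple binary output channel P(Y_ij = 1 | w) = pbar + w expanded around w = 0, as would arise in a dense stochastic block model. The score matrix S and the effective noise Delta = 1/F, with F the Fisher information of the channel at w = 0, follow by direct computation.

```python
import numpy as np

def fisher_score_and_delta(Y, pbar):
    """Binary output channel P(y=1 | w) = pbar + w, expanded at w = 0.
    d/dw log P(y|w) at w=0 equals 1/pbar for y=1 and -1/(1-pbar) for y=0,
    so S_ij = (Y_ij - pbar) / (pbar * (1 - pbar)).
    Fisher information F = pbar*(1/pbar)**2 + (1-pbar)*(1/(1-pbar))**2
                         = 1 / (pbar * (1 - pbar)),  and Delta = 1/F."""
    S = (Y - pbar) / (pbar * (1.0 - pbar))
    delta = pbar * (1.0 - pbar)
    return S, delta

rng = np.random.default_rng(1)
pbar = 0.3
Y = rng.binomial(1, pbar, size=(600, 600)).astype(float)  # null channel, w = 0
S, delta = fisher_score_and_delta(Y, pbar)
# Under the null, S has mean ~0 and variance ~F = 1/delta.
```

The AMP iteration for the Gaussian channel then applies verbatim with Y replaced by S and the noise variance replaced by Delta; this is exactly the sense in which only the Fisher information of the channel matters.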
Analogous universality with respect to the output channel was observed in @cite_6 (see e.g. their remark 2.5) in the study of detection of a small hidden clique with approximate message passing.
{ "cite_N": [ "@cite_6" ], "mid": [ "2962704928" ], "abstract": [ "Consider an Erdos---Renyi random graph in which each edge is present independently with probability @math 1 2, except for a subset @math CN of the vertices that form a clique (a completely connected subgraph). We consider the problem of identifying the clique, given a realization of such a random graph. The algorithm of (ANALCO. SIAM, pp 67---75, 2011) provably identifies the clique @math CN in linear time, provided @math |CN|?1.261N. Spectral methods can be shown to fail on cliques smaller than @math N. In this paper we describe a nearly linear-time algorithm that succeeds with high probability for @math |CN|?(1+?)N e for any @math ?>0. This is the first algorithm that provably improves over spectral methods. We further generalize the hidden clique problem to other background graphs (the standard case corresponding to the complete graph on @math N vertices). For large-girth regular graphs of degree @math (Δ+1) we prove that so-called local algorithms succeed if @math |CN|?(1+?)N eΔ and fail if @math |CN|≤(1-?)N eΔ." ] }
1507.03857
1922029667
This paper considers probabilistic estimation of a low-rank matrix from non-linear element-wise measurements of its elements. We derive the corresponding approximate message passing (AMP) algorithm and its state evolution. Relying on non-rigorous but standard assumptions motivated by statistical physics, we characterize the minimum mean squared error (MMSE) achievable information theoretically and with the AMP algorithm. Unlike in related problems of linear estimation, in the present setting the MMSE depends on the output channel only through a single parameter - its Fisher information. We illustrate this striking finding by analysis of submatrix localization, and of detection of communities hidden in a dense stochastic block model. For this example we locate the computational and statistical boundaries that are not equal for rank larger than four.
Our results for the Bayes-optimal estimation error of community detection in the dense stochastic block model are of independent interest. Analogous results were derived for the sparse case in @cite_15 . In the dense case, only spectral methods, which are suboptimal in terms of MSE, had been evaluated @cite_25 . We also unveil a hard phase that exists in this problem for rank @math and becomes very wide for @math .
{ "cite_N": [ "@cite_15", "@cite_25" ], "mid": [ "2004531067", "2014259951" ], "abstract": [ "In this paper we extend our previous work on the stochastic block model, a commonly used generative model for social and biological networks, and the problem of inferring functional groups or communities from the topology of the network. We use the cavity method of statistical physics to obtain an asymptotically exact analysis of the phase diagram. We describe in detail properties of the detectability undetectability phase transition and the easy hard phase transition for the community detection problem. Our analysis translates naturally into a belief propagation algorithm for inferring the group memberships of the nodes in an optimal way, i.e., that maximizes the overlap with the underlying group memberships, and learning the underlying parameters of the block model. Finally, we apply the algorithm to two examples of real-world networks and discuss its performance.", "We study networks that display community structure -- groups of nodes within which connections are unusually dense. Using methods from random matrix theory, we calculate the spectra of such networks in the limit of large size, and hence demonstrate the presence of a phase transition in matrix methods for community detection, such as the popular modularity maximization method. The transition separates a regime in which such methods successfully detect the community structure from one in which the structure is present but is not detected. By comparing these results with recent analyses of maximum-likelihood methods we are able to show that spectral modularity maximization is an optimal detection method in the sense that no other method will succeed in the regime where the modularity method fails." ] }
1507.03857
1922029667
This paper considers probabilistic estimation of a low-rank matrix from non-linear element-wise measurements of its elements. We derive the corresponding approximate message passing (AMP) algorithm and its state evolution. Relying on non-rigorous but standard assumptions motivated by statistical physics, we characterize the minimum mean squared error (MMSE) achievable information theoretically and with the AMP algorithm. Unlike in related problems of linear estimation, in the present setting the MMSE depends on the output channel only through a single parameter - its Fisher information. We illustrate this striking finding by analysis of submatrix localization, and of detection of communities hidden in a dense stochastic block model. For this example we locate the computational and statistical boundaries that are not equal for rank larger than four.
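For context, here is a hedged sketch of the dense stochastic block model with two balanced groups discussed above, together with the simple spectral estimator (sign of the top eigenvector of the centered, rescaled adjacency matrix) of the kind evaluated in @cite_25 . The parameter values are our own and place the problem well above the spectral threshold, where even this MSE-suboptimal method recovers the groups accurately.

```python
import numpy as np

rng = np.random.default_rng(2)
n, pbar, rho = 1000, 0.5, 0.1          # dense SBM, two balanced groups
x = rng.choice([-1.0, 1.0], size=n)    # hidden group labels
P = pbar + rho * np.outer(x, x)        # edge prob: pbar + rho within, pbar - rho across
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                            # symmetric adjacency, no self-loops

# Spectral estimate: sign of the top eigenvector of the centered adjacency.
M = (A - pbar) / np.sqrt(n * pbar * (1.0 - pbar))
eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
xhat = np.sign(eigvecs[:, -1])
overlap = abs(xhat @ x) / n            # fraction of aligned labels, up to global sign
```

The effective signal-to-noise ratio here is rho * sqrt(n) / sqrt(pbar * (1 - pbar)), roughly 6.3, far above the bulk edge at 2; closer to the threshold the MSE gap between spectral methods and the Bayes-optimal AMP analysis becomes substantial.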
We also recently learned about the independent, ongoing work of @cite_3 , who consider rank @math with a Rademacher prior and rigorously establish the relation between the stochastic block model with two groups and low-rank estimation with a Gaussian channel.
{ "cite_N": [ "@cite_3" ], "mid": [ "2116061856" ], "abstract": [ "We develop an information-theoretic view of the stochastic block model, a popular statistical model for the large-scale structure of complex networks. A graph @math from such a model is generated by first assigning vertex labels at random from a finite alphabet, and then connecting vertices with edge probabilities depending on the labels of the endpoints. In the case of the symmetric two-group model, we establish an explicit single-letter' characterization of the per-vertex mutual information between the vertex labels and the graph. The explicit expression of the mutual information is intimately related to estimation-theoretic quantities, and --in particular-- reveals a phase transition at the critical point for community detection. Below the critical point the per-vertex mutual information is asymptotically the same as if edges were independent. Correspondingly, no algorithm can estimate the partition better than random guessing. Conversely, above the threshold, the per-vertex mutual information is strictly smaller than the independent-edges upper bound. In this regime there exists a procedure that estimates the vertex labels better than random guessing." ] }
1507.04215
1928328783
In this paper, we want to study the informative value of negative links in signed complex networks. For this purpose, we extract and analyze a collection of signed networks representing voting sessions of the European Parliament (EP). We first process some data collected by the Vote Watch Europe Website for the whole 7th term (2009-2014), by considering voting similarities between Members of the EP to define weighted signed links. We then apply a selection of community detection algorithms, designed to process only positive links, to these data. We also apply Parallel Iterative Local Search (Parallel ILS), an algorithm recently proposed to identify balanced partitions in signed networks. Our results show that, contrary to the conclusions of a previous study focusing on other data, the partitions detected by ignoring or considering the negative links are indeed remarkably different for these networks. The relevance of negative links for graph partitioning therefore is an open question which should be further explored.
In the complex networks field, works dedicated to signed networks focus only on the clustering problem, as defined by Davis @cite_28 . Community detection involves taking link density into account, which is not considered in the CC problem (cf. the article by Esmailian ). Various methods were proposed for this purpose: evolutionary approaches @cite_38 @cite_45 @cite_24 @cite_17 , agent-based systems @cite_33 , matrix transformation @cite_11 , extensions of the Modularity measure @cite_26 @cite_21 @cite_3 @cite_13 @cite_8 , simulated annealing @cite_27 , spectral approaches @cite_1 @cite_35 @cite_36 , particle swarm optimization @cite_2 @cite_44 , and others. Some authors performed the same task on bipartite networks @cite_40 , while others relaxed the clustering problem in order to identify overlapping communities @cite_18 . Although the methods listed here were applied to networks representing very different systems, their authors did not investigate the possibility that alternative versions of the clustering problem might be more appropriate for certain data.
{ "cite_N": [ "@cite_35", "@cite_36", "@cite_3", "@cite_44", "@cite_2", "@cite_38", "@cite_18", "@cite_8", "@cite_21", "@cite_17", "@cite_26", "@cite_28", "@cite_27", "@cite_40", "@cite_33", "@cite_1", "@cite_24", "@cite_45", "@cite_13", "@cite_11" ], "mid": [ "108936587", "", "1574010683", "2097464974", "2078412950", "2079599800", "1654053442", "2112461976", "2132322078", "75108075", "2055269041", "2156894402", "2143245657", "2047529619", "2097216034", "2069782692", "80006645", "2094013877", "1993968368", "2075989403" ], "abstract": [ "We study the application of spectral clustering, prediction and visualization methods to graphs with negatively weighted edges. We show that several characteristic matrices of graphs can be extended to graphs with positively and negatively weighted edges, giving signed spectral clustering methods, signed graph kernels and network visualization methods that apply to signed graphs. In particular, we review a signed variant of the graph Laplacian. We derive our results by considering random walks, graph clustering, graph drawing and electrical networks, showing that they all result in the same formalism for handling negatively weighted edges. We illustrate our methods using examples from social networks with negative edges and bipartite rating graphs.", "", "We present a reformulation of modularity that allows the analysis of the community structure in networks of correlated data. The modularity preserves the probabilistic semantics of the original definition even when the network is directed, weighted, signed, and has self-loops. This is the most general condition one can find in the study of any network, in particular those defined from correlated data. We apply our results to a real network of correlated data between stores in the city of Lyon (France).", "The field of complex network clustering has been very active in the past several years. 
In this paper, a discrete framework of the particle swarm optimization algorithm is proposed. Based on the proposed discrete framework, a multiobjective discrete particle swarm optimization algorithm is proposed to solve the network clustering problem. The decomposition mechanism is adopted. A problem-specific population initialization method based on label propagation and a turbulence operator are introduced. In the proposed method, two evaluation objectives termed as kernel k-means and ratio cut are to be minimized. However, the two objectives can only be used to handle unsigned networks. In order to deal with signed networks, they have been extended to the signed version. The clustering performances of the proposed algorithm have been validated on signed networks and unsigned networks. Extensive experimental studies compared with ten state-of-the-art approaches prove that the proposed algorithm is effective and promising.", "Modern science of networks has facilitated us with enormous convenience to the understanding of complex systems. Community structure is believed to be one of the notable features of complex networks representing real complicated systems. Very often, uncovering community structures in networks can be regarded as an optimization problem, thus, many evolutionary algorithms based approaches have been put forward. Particle swarm optimization (PSO) is an artificial intelligent algorithm originated from social behavior such as birds flocking and fish schooling. PSO has been proved to be an effective optimization technique. However, PSO was originally designed for continuous optimization which confounds its applications to discrete contexts. In this paper, a novel discrete PSO algorithm is suggested for identifying community structures in signed networks. 
In the suggested method, particles' status has been redesigned in discrete form so as to make PSO proper for discrete scenarios, and particles' updating rules have been reformulated by making use of the topology of the signed network. Extensive experiments compared with three state-of-the-art approaches on both synthetic and real-world signed networks demonstrate that the proposed method is effective and promising.", "To detect communities in signed networks consisting of both positive and negative links, two new evolutionary algorithms (EAs) and two new memetic algorithms (MAs) are proposed and compared. Furthermore, two measures, namely the improved modularity Q and the improved modularity density D-value, are used as the objective functions. The improved measures not only preserve all properties of the original ones, but also have the ability of dealing with negative links. Moreover, D-value can also control the partition to different resolutions. To fully investigate the performance of these four algorithms and the two objective functions, benchmark social networks and various large-scale randomly generated signed networks are used in the experiments. The experimental results not only show the capability and high efficiency of the four algorithms in successfully detecting communities from signed networks, but also indicate that the two MAs outperform the two EAs in terms of the solution quality and the computational cost. Moreover, by tuning the parameter in D-value, the four algorithms have the multi-resolution ability.", "Complex networks considering both positive and negative links have gained considerable attention during the past several years. Community detection is one of the main challenges for complex network analysis. Most of the existing algorithms for community detection in a signed network aim at providing a hard-partition of the network where any node should belong to a community or not. 
However, they cannot detect overlapping communities where a node is allowed to belong to multiple communities. The overlapping communities widely exist in many real-world networks. In this paper, we propose a signed probabilistic mixture (SPM) model for overlapping community detection in signed networks. Compared with the existing models, the advantages of our methodology are (i) providing soft-partition solutions for signed networks; (ii) providing soft memberships of nodes. Experiments on a number of signed networks show that our SPM model: (i) can identify assortative structures or disassortative structures as the same as other state-of-the-art models; (ii) can detect overlapping communities; (iii) outperforms other state-of-the-art models at shedding light on the community detection in synthetic signed networks.", "Detecting communities in complex networks accurately is a prime challenge, preceding further analyses of network characteristics and dynamics. Until now, community detection took into account only positively valued links, while many actual networks also feature negative links. We extend an existing Potts model to incorporate negative links as well, resulting in a method similar to the clustering of signed graphs, as dealt with in social balance theory, but more general. To illustrate our method, we applied it to a network of international alliances and disputes. Using data from 1993-2001, it turns out that the world can be divided into six power blocs similar to Huntington's civilizations, with some notable exceptions.", "Social life coalesces into communities through cooperation and conflict. As a case in point, Shwed and Bearman (2010) studied consensus and contention in scientific communities. They used a sophisticated modularity method to detect communities on the basis of scientific citations, which they then interpreted as directed positive network ties. They assumed that a lack of citations implies disagreement. 
Some scientific citations, however, are contentious and should therefore be represented by negative ties, like conflicting relations in general. After expanding the modularity method to incorporate negative ties, we show that a small proportion of negative ties, commonly present in science, is sufficient to significantly alter the community structure. In addition, our research suggests that without distinguishing negative ties, scientific communities actually represent specialized subfields, not contentious groups. Finally, we cast doubt on the assumption that lack of cites would signal disagreement. To show the general importance of discerning negative ties for understanding conflict and its impact on communities, we also analyze a public debate.", "In this paper, we propose a method for detecting communities from signed social networks with both positive and negative weights by modeling the problem as a multi-objective problem. In the experiments, both real world and synthetic signed networks whose size ranges from 100 to 1200 nodes are used to validate the performance of the new algorithm. A comparison is also made between the new algorithm and an effective existing algorithm, namely FEC. The experimental results show that our algorithm obtains a good performance on both real world and synthetic data, and outperforms FEC clearly.", "Community detection in signed complex networks is a challenging research problem aiming at finding groups of entities having positive connections within the same cluster and negative relationships between different clusters. Most of the proposed approaches have been developed for networks having only positive edges. In this paper we propose a multiobjective approach to detect communities in signed networks. The method partitions a network in groups of nodes such that two objectives are contemporarily optimized. 
The former is that the partitioning should have dense positive intra-connections and sparse negative interconnections, the latter is that it should have as few as possible negative intra-connections and positive inter-connections. We show that the concepts of signed modularity and frustration fulfill these objectives, and that the maximization of signed modularity and the minimization of frustration allow to obtain very good solutions to the problem. An extensive set of experiments on both real-life and synthetic signed networks shows the efficacy of the approach.", "", "—We propose a framework for discovery of collaborative community structure in Wiki-based knowledge repositories based on raw-content generation analysis. We leverage topic modelling in order to capture agreement and opposition of contributors and analyze these multi-modal relations to map communities in the contributor base. The key steps of our approach include (i) modeling of pair wise variable-strength contributor interactions that can be both positive and negative, (ii) synthesis of a global network incorporating all pair wise interactions, and (iii) detection and analysis of community structure encoded in such networks. The global community discovery algorithm we propose outperforms existing alternatives in identifying coherent clusters according to objective optimality criteria. Analysis of the discovered community structure reveals coalitions of common interest editors who back each other in promoting some topics and collectively oppose other coalitions or single authors. We couple contributor interactions with content evolution and reveal the global picture of opposing themes within the self-regulated community base for both controversial and featured articles in Wikipedia.", "Structural balance theory forms the foundation for a generalized blockmodel method useful for delineating the structure of signed social one-mode networks for social actors (for example, people or nations). 
Heider's unit formation relation was dropped. We re-examine structural balance by formulating Heider's unit formation relations as signed two-mode data. Just as generalized blockmodeling has been extended to analyze two-mode unsigned data, we extend it to analyze signed two-mode network data and provide a formalization of the extension. The blockmodel structure for signed two-mode networks has positive and negative blocks, defined in terms of different partitions of rows and columns. These signed blocks can be located anywhere in the block model. We provide a motivating example and then use the new blockmodel type to delineate the voting patterns of the Supreme Court justices for all of their nonunanimous decisions for the 2006–07 term. Interpretations are presented together with a statement of further...", "Many complex systems in the real world can be modeled as signed social networks that contain both positive and negative relations. Algorithms for mining social networks have been developed in the past; however, most of them were designed primarily for networks containing only positive relations and, thus, are not suitable for signed networks. In this work, we propose a new algorithm, called FEC, to mine signed social networks where both positive within-group relations and negative between-group relations are dense. FEC considers both the sign and the density of relations as the clustering attributes, making it effective for not only signed networks but also conventional social networks including only positive relations. Also, FEC adopts an agent-based heuristic that makes the algorithm efficient (in linear time with respect to the size of a network) and capable of giving nearly optimal solutions. FEC depends on only one parameter whose value can easily be set and requires no prior knowledge on hidden community structures. 
The effectiveness and efficacy of FEC have been demonstrated through a set of rigorous experiments involving both benchmark and randomly generated signed networks.", "Discussion based websites like Epinions.com and Slashdot.com allow users to identify both friends and foes. Such networks are called Signed Social Networks and mining communities of like-minded users from these networks has potential value. We extend existing community detection algorithms that work only on unsigned networks to be applicable to signed networks. In particular, we develop a spectral approach augmented with iterative optimization. We use our algorithms to study both communities and structural balance. Our results indicate that modularity based communities are distinct from structurally balanced communities.", "Community structure is an important topological property of network. Being able to discover it can provide invaluable help in exploiting and understanding complex networks. Although many algorithms have been developed to complete this task, they all have advantages and limitations. So the issue of how to detect communities in networks quickly and correctly remains an open challenge. Distinct from the existing works, this paper studies the community structure from the view of network evolution and presents a self-organizing network evolving algorithm for mining communities hidden in complex networks. Compared with the existing algorithm, our approach has three distinct features. First, it has a good classification capability and especially works well with the networks without well-defined community structures. Second, it requires no prior knowledge and is insensitive to the build-in parameters. Finally, it is suitable for not only positive networks but also singed networks containing both positive and negative weights.", "Various types of social relationships, such as friends and foes, can be represented as signed social networks (SNs) that contain both positive and negative links. 
Although many community detection (CD) algorithms have been proposed, most of them were designed primarily for networks containing only positive links. Thus, it is important to design CD algorithms which can handle large-scale SNs. To this purpose, we first extend the original similarity to the signed similarity based on the social balance theory. Then, based on the signed similarity and the natural contradiction between positive and negative links, two objective functions are designed to model the problem of detecting communities in SNs as a multiobjective problem. Afterward, we propose a multiobjective evolutionary algorithm, called MEAsSN. In MEAs-SN, to overcome the defects of direct and indirect representations for communities, a direct and indirect combined representation is designed. Attributing to this representation, MEAs-SN can switch between different representations during the evolutionary process. As a result, MEAs-SN can benefit from both representations. Moreover, owing to this representation, MEAs-SN can also detect overlapping communities directly. In the experiments, both benchmark problems and large-scale synthetic networks generated by various parameter settings are used to validate the performance of MEAs-SN. The experimental results show the effectiveness and efficacy of MEAs-SN on networks with 1000, 5000, and 10000 nodes and also in various noisy situations. A thorough comparison is also made between MEAs-SN and three existing algorithms, and the results show that MEAs-SN outperforms other algorithms.", "We study the community structure of networks representing voting on resolutions in the United Nations General Assembly. 
We construct networks from the voting records of the separate annual sessions between 1946 and 2008 in three different ways: (1) by considering voting similarities as weighted unipartite networks; (2) by considering voting similarities as weighted, signed unipartite networks; and (3) by examining signed bipartite networks in which countries are connected to resolutions. For each formulation, we detect communities by optimizing network modularity using an appropriate null model. We compare and contrast the results that we obtain for these three different network representations. We thereby illustrate the need to consider multiple resolution parameters and explore the effectiveness of each network representation for identifying voting groups amidst the large amount of agreement typical in General Assembly votes.", "Signed network is an important kind of complex network, which includes both positive relations and negative relations. Communities of a signed network are defined as the groups of vertices, within which positive relations are dense and between which negative relations are also dense. Being able to identify communities of signed networks is helpful for analysis of such networks. Hitherto many algorithms for detecting network communities have been developed. However, most of them are designed exclusively for the networks including only positive relations and are not suitable for signed networks. So the problem of mining communities of signed networks quickly and correctly has not been solved satisfactorily. In this paper, we propose a heuristic algorithm to address this issue. Compared with major existing methods, our approach has three distinct features. First, it is very fast with a roughly linear time with respect to network size. Second, it exhibits a good clustering capability and especially can work well with complex networks without well-defined community structures. 
Finally, it is insensitive to its built-in parameters and requires no prior knowledge." ] }
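The community-quality notion running through the abstracts above (dense positive links inside groups, negative links between them) can be made concrete by counting "frustrated" edges of a partition. The sketch below is illustrative only and is not taken from any of the cited algorithms; the function name and toy graph are made up for the example.

```python
# Sketch: count frustrated edges of a partition in a signed graph.
# A partition is balanced when every positive edge falls inside a
# community and every negative edge falls between communities.

def frustration(edges, part):
    """edges: iterable of (u, v, sign) with sign in {+1, -1};
    part: dict mapping node -> community label."""
    bad = 0
    for u, v, s in edges:
        same = part[u] == part[v]
        if (s > 0 and not same) or (s < 0 and same):
            bad += 1  # positive edge cut, or negative edge kept inside
    return bad

edges = [("a", "b", +1), ("b", "c", +1), ("c", "d", -1), ("a", "d", -1)]
part = {"a": 0, "b": 0, "c": 0, "d": 1}
print(frustration(edges, part))  # 0: this partition is perfectly balanced
```

A correlation-clustering method would search for the partition minimizing exactly this count, whereas the community detection methods discussed here optimize other criteria (e.g. modularity) and may or may not end up balanced.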
1507.04215
1928328783
In this paper, we want to study the informative value of negative links in signed complex networks. For this purpose, we extract and analyze a collection of signed networks representing voting sessions of the European Parliament (EP). We first process some data collected by the Vote Watch Europe website for the whole 7th term (2009-2014), by considering voting similarities between Members of the EP to define weighted signed links. We then apply a selection of community detection algorithms, designed to process only positive links, to these data. We also apply Parallel Iterative Local Search (Parallel ILS), an algorithm recently proposed to identify balanced partitions in signed networks. Our results show that, contrary to the conclusions of a previous study focusing on other data, the partitions detected by ignoring or considering the negative links are indeed remarkably different for these networks. The relevance of negative links for graph partitioning is therefore an open question that should be further explored.
Few works have tried to compare the CC and community detection approaches. As mentioned in the introduction, Esmailian showed that, in certain cases, partitions estimated in signed networks by community detection methods, i.e. based only on the positive links, can be highly balanced @cite_30 . However, this work was conducted only on two self-declared social interaction networks (Epinions and Slashdot), and with a single community detection method (InfoMap @cite_31 ). Moreover, they did not compare their results to partitions detected by algorithms designed to solve the CC problem. We investigate whether this statement also holds for other real-world networks and community detection methods, and how these compare to results obtained with CC methods.
{ "cite_N": [ "@cite_30", "@cite_31" ], "mid": [ "2019483176", "2164998314" ], "abstract": [ "A class of networks are those with both positive and negative links. In this manuscript, we studied the interplay between positive and negative ties on mesoscopic level of these networks, i.e., their community structure. A community is considered as a tightly interconnected group of actors; therefore, it does not borrow any assumption from balance theory and merely uses the well-known assumption in the community detection literature. We found that if one detects the communities based on only positive relations (by ignoring the negative ones), the majority of negative relations are already placed between the communities. In other words, negative ties do not have a major role in community formation of signed networks. Moreover, regarding the internal negative ties, we proved that most unbalanced communities are maximally balanced, and hence they cannot be partitioned into k nonempty sub-clusters with higher balancedness (k≥2). Furthermore, we showed that although the mediator triad ++- (hostile-mediator-hostile) is underrepresented, it constitutes a considerable portion of triadic relations among communities. Hence, mediator triads should not be ignored by community detection and clustering algorithms. As a result, if one uses a clustering algorithm that operates merely based on social balance, mesoscopic structure of signed networks significantly remains hidden.", "To comprehend the multipartite organization of large-scale biological and social systems, we introduce an information theoretic approach that reveals community structure in weighted and directed networks. We use the probability flow of random walks on a network as a proxy for information flows in the real system and decompose the network into modules by compressing a description of the probability flow. 
The result is a map that both simplifies and highlights the regularities in the structure and their relationships. We illustrate the method by making a map of scientific communication as captured in the citation patterns of >6,000 journals. We discover a multicentric organization with fields that vary dramatically in size and degree of integration into the network of science. Along the backbone of the network—including physics, chemistry, molecular biology, and medicine—information flows bidirectionally, but the map reveals a directional pattern of citation from the applied fields to the basic sciences." ] }
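The claim above, that partitions found with and without negative links can be "remarkably different", presupposes some way of quantifying partition disagreement. One simple option (among many; the paper does not specify this particular measure) is the Rand index over node pairs, sketched below with made-up toy partitions.

```python
from itertools import combinations

# Sketch: compare two partitions of the same node set with the Rand
# index, i.e. the fraction of node pairs on which the partitions agree
# about being grouped together or apart (illustrative helper only).

def rand_index(p1, p2):
    nodes = list(p1)
    agree = total = 0
    for u, v in combinations(nodes, 2):
        total += 1
        if (p1[u] == p1[v]) == (p2[u] == p2[v]):
            agree += 1
    return agree / total

p_pos = {"a": 0, "b": 0, "c": 1, "d": 1}     # e.g. found ignoring negative links
p_signed = {"a": 0, "b": 1, "c": 1, "d": 1}  # e.g. found using negative links
print(rand_index(p_pos, p_signed))  # 0.5
```

A value near 1 would mean the negative links barely change the detected structure; values well below 1, as reported for the EP networks, mean they carry real partitioning information.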
1507.03811
2953152698
In this paper, a face emotion is considered as the result of the composition of multiple concurrent signals, each corresponding to the movements of a specific facial muscle. These concurrent signals are represented by means of a set of multi-scale appearance features that might be correlated with one or more concurrent signals. The extraction of these appearance features from a sequence of face images yields a set of time series. This paper proposes to use the dynamics regulating each appearance feature time series to discriminate among different face emotions. To this purpose, an ensemble of Hankel matrices corresponding to the extracted time series is used for emotion classification within a framework that combines nearest neighbor and a majority vote schema. Experimental results on a publicly available dataset show that the adopted representation is promising and yields state-of-the-art accuracy in emotion classification.
Face detection @cite_13 , face recognition @cite_16 , @cite_26 and facial expression analysis @cite_6 have been studied in depth over the past years, resulting in a vast literature reviewed in @cite_11 , @cite_18 . In this section, we focus on works that embed the temporal structure of the face image sequence in the feature representation or in the emotion model.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_6", "@cite_16", "@cite_13", "@cite_11" ], "mid": [ "", "2031420145", "2033773055", "1989702938", "2137401668", "2156503193" ], "abstract": [ "", "Due to the widespread use of cameras, it is very common to collect thousands of personal photos. A proper organization is needed to make the collection usable and to enable an easy photo retrieval. In this paper, we present a method to organize personal photo collections based on ''who'' is in the picture. Our method consists in detecting the faces in the photo sequence and arranging them in groups corresponding to the probable identities. This problem can be conveniently modeled as a multi-target visual tracking where a set of on-line trained classifiers is used to represent the identity models. In contrast to other works where clustering methods are used, our method relies on a probabilistic framework; it does not require any prior information about the number of different identities in the photo album. To enable future comparison, we present experimental results on a public dataset and on a photo collection generated from a public face dataset.", "Over the last decade, automatic facial expression analysis has become an active research area that finds potential applications in areas such as more engaging human-computer interfaces, talking heads, image retrieval and human emotion analysis. Facial expressions reflect not only emotions, but other mental activities, social interaction and physiological signals. In this survey we introduce the most prominent automatic facial expression analysis methods and systems presented in the literature. 
Facial motion and deformation extraction approaches as well as classification methods are discussed with respect to issues such as face normalization, facial expression dynamics and facial expression intensity, but also with regard to their robustness towards environmental changes.", "As one of the most successful applications of image analysis and understanding, face recognition has recently received significant attention, especially during the past several years. At least two reasons account for this trend: the first is the wide range of commercial and law enforcement applications, and the second is the availability of feasible technologies after 30 years of research. Even though current machine recognition systems have reached a certain level of maturity, their success is limited by the conditions imposed by many real applications. For example, recognition of face images acquired in an outdoor environment with changes in illumination and or pose remains a largely unsolved problem. In other words, current systems are still far away from the capability of the human perception system.This paper provides an up-to-date critical survey of still- and video-based face recognition research. There are two underlying motivations for us to write this survey paper: the first is to provide an up-to-date review of the existing literature, and the second is to offer some insights into the studies of machine recognition of faces. To provide a comprehensive survey, we not only categorize existing recognition techniques but also present detailed descriptions of representative methods within each category. In addition, relevant topics such as psychophysical studies, system evaluation, and issues of illumination and pose variation are covered.", "This paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. 
The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a simple and efficient classifier which is built using the AdaBoost learning algorithm (Freund and Schapire, 1995) to select a small number of critical visual features from a very large set of potential features. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems (Sung and Poggio, 1998; , 1998; Schneiderman and Kanade, 2000; , 2000). Implemented on a conventional desktop, face detection proceeds at 15 frames per second.", "Automated analysis of human affective behavior has attracted increasing attention from researchers in psychology, computer science, linguistics, neuroscience, and related disciplines. However, the existing methods typically handle only deliberately displayed and exaggerated expressions of prototypical emotions despite the fact that deliberate behaviour differs in visual appearance, audio profile, and timing from spontaneously occurring behaviour. To address this problem, efforts to develop algorithms that can process naturally occurring human affective behaviour have recently emerged. Moreover, an increasing number of efforts are reported toward multimodal fusion for human affect analysis including audiovisual fusion, linguistic and paralinguistic fusion, and multi-cue visual fusion based on facial expressions, head movements, and body gestures. This paper introduces and surveys these recent advances. We first discuss human emotion perception from a psychological perspective. 
Next we examine available approaches to solving the problem of machine understanding of human affective behavior, and discuss important issues like the collection and availability of training and test data. We finally outline some of the scientific and engineering challenges to advancing human affect sensing technology." ] }
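The Hankel-matrix representation of a feature time series mentioned in the abstract above can be sketched in a few lines. The window size (number of rows) is a free parameter here, not a value taken from the paper.

```python
import numpy as np

# Sketch: build the Hankel matrix of a scalar time series. Each row is
# the series shifted by one step, so every anti-diagonal is constant;
# the column space of this matrix captures the series' LTI dynamics.

def hankel(series, rows):
    cols = len(series) - rows + 1
    return np.array([series[i:i + cols] for i in range(rows)])

x = [1, 2, 3, 4, 5, 6]
H = hankel(x, 3)
print(H)
# [[1 2 3 4]
#  [2 3 4 5]
#  [3 4 5 6]]
```

Two sequences generated by the same underlying linear dynamics produce Hankel matrices with (approximately) the same column space, which is what makes this a useful dynamics descriptor.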
1507.03811
2953152698
In this paper, a face emotion is considered as the result of the composition of multiple concurrent signals, each corresponding to the movements of a specific facial muscle. These concurrent signals are represented by means of a set of multi-scale appearance features that might be correlated with one or more concurrent signals. The extraction of these appearance features from a sequence of face images yields a set of time series. This paper proposes to use the dynamics regulating each appearance feature time series to discriminate among different face emotions. To this purpose, an ensemble of Hankel matrices corresponding to the extracted time series is used for emotion classification within a framework that combines nearest neighbor and a majority vote schema. Experimental results on a publicly available dataset show that the adopted representation is promising and yields state-of-the-art accuracy in emotion classification.
Works such as @cite_22 , @cite_21 use landmarks located on face parts such as eyes, eyebrows, nose and mouth to describe an emotion. In @cite_22 , a Constrained Local Model (CLM) is used to estimate facial landmarks and extract a sparse representation of corresponding image patches. Emotion classification is performed by least-square SVM. @cite_21 propose to use an Interval Temporal Bayesian Network (ITBN) to capture the spatial and temporal relations among the primitive facial events. Hankel matrices have already been adopted for action recognition in @cite_25 , which adopts a Hankel matrix-based bag-of-words approach, and in @cite_31 , which models an action as a sequence of Hankel matrices and uses a set of HMMs trained in a discriminative way to model the switching between LTI systems. In @cite_28 , we showed how the dynamics of tracked facial landmarks can be modeled by means of Hankel matrices and can be used for facial expression analysis.
{ "cite_N": [ "@cite_22", "@cite_28", "@cite_21", "@cite_31", "@cite_25" ], "mid": [ "2083261637", "2126437449", "1984354005", "", "1984219317" ], "abstract": [ "Most work in automatic facial expression analysis seeks to detect discrete facial actions. Yet, the meaning and function of facial actions often depends in part on their intensity. We propose a part-based, sparse representation for automated measurement of continuous variation in AU intensity. We evaluated its effectiveness in two publically available databases, CK+ and the soon to be released Binghamton high-resolution spontaneous 3D dyadic facial expression database. The former consists of posed facial expressions and ordinal level intensity (absent, low, and high). The latter consists of spontaneous facial expression in response to diverse, well-validated emotion inductions, and 6 ordinal levels of AU intensity. In a preliminary test, we started from discrete emotion labels and ordinal-scale intensity annotation in the CK+ dataset. The algorithm achieved state-of-the-art performance. These preliminary results supported the utility of the part-based, sparse representation. Second, we applied the algorithm to the more demanding task of continuous AU intensity estimation in spontaneous facial behavior in the Binghamton database. Manual 6-point ordinal coding and continuous measurement were highly consistent. Visual analysis of the overlay of continuous measurement by the algorithm and manual ordinal coding strongly supported the representational power of the proposed method to smoothly interpolate across the full range of AU intensity.", "This paper proposes a new approach to model the temporal dynamics of a sequence of facial expressions. To this purpose, a sequence of Face Image Descriptors (FID) is regarded as the output of a Linear Time Invariant (LTI) system. The temporal dynamics of such sequence of descriptors are represented by means of a Hankel matrix. 
The paper presents different strategies to compute dynamics-based representation of a sequence of FID, and reports classification accuracy values of the proposed representations within different standard classification frameworks. The representations have been validated in two very challenging application domains: emotion recognition and pain detection. Experiments on two publicly available benchmarks and comparison with state-of-the-art approaches demonstrate that the dynamics-based FID representation attains competitive performance when off-the-shelf classification tools are adopted.", "Spatial-temporal relations among facial muscles carry crucial information about facial expressions yet have not been thoroughly exploited. One contributing factor for this is the limited ability of the current dynamic models in capturing complex spatial and temporal relations. Existing dynamic models can only capture simple local temporal relations among sequential events, or lack the ability for incorporating uncertainties. To overcome these limitations and take full advantage of the spatio-temporal information, we propose to model the facial expression as a complex activity that consists of temporally overlapping or sequential primitive facial events. We further propose the Interval Temporal Bayesian Network to capture these complex temporal relations among primitive facial events for facial expression modeling and recognition. Experimental results on benchmark databases demonstrate the feasibility of the proposed approach in recognizing facial expressions based purely on spatio-temporal relations among facial muscles, as well as its advantage over the existing methods.", "", "Human activity recognition is central to many practical applications, ranging from visual surveillance to gaming interfacing. Most approaches addressing this problem are based on localized spatio-temporal features that can vary significantly when the viewpoint changes. 
As a result, their performances rapidly deteriorate as the difference between the viewpoints of the training and testing data increases. In this paper, we introduce a new type of feature, the “Hankelet” that captures dynamic properties of short tracklets. While Hankelets do not carry any spatial information, they bring invariant properties to changes in viewpoint that allow for robust cross-view activity recognition, i.e. when actions are recognized using a classifier trained on data from a different viewpoint. Our experiments on the IXMAS dataset show that using Hanklets improves the state of the art performance by over 20 ." ] }
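The nearest-neighbor-plus-majority-vote scheme referred to in the abstract above can be sketched as follows. This is a simplified illustration: the Frobenius distance is a placeholder (the paper's actual Hankel-matrix dissimilarity may differ), and the function names and toy data are invented for the example.

```python
from collections import Counter
import numpy as np

# Sketch: each appearance feature's Hankel matrix votes for the label
# of its nearest training matrix (same feature index), and the votes
# are aggregated by majority.

def classify(test_mats, train):
    """test_mats: list of Hankel matrices, one per appearance feature;
    train: list of (list_of_matrices, label) exemplars."""
    votes = []
    for f, m in enumerate(test_mats):
        # nearest exemplar for feature f under a placeholder distance
        label = min(train, key=lambda ex: np.linalg.norm(ex[0][f] - m))[1]
        votes.append(label)
    return Counter(votes).most_common(1)[0][0]

train = [([np.zeros((2, 2)), np.zeros((2, 2))], "neutral"),
         ([np.ones((2, 2)), np.ones((2, 2))], "happy")]
test = [np.ones((2, 2)) * 0.9, np.ones((2, 2)) * 0.8]
print(classify(test, train))  # "happy": both features vote for the closer class
```

Voting per feature makes the ensemble robust to a few badly extracted features, since a minority of wrong votes is outvoted by the rest.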
1507.03811
2953152698
In this paper, a face emotion is considered as the result of the composition of multiple concurrent signals, each corresponding to the movements of a specific facial muscle. These concurrent signals are represented by means of a set of multi-scale appearance features that might be correlated with one or more concurrent signals. The extraction of these appearance features from a sequence of face images yields a set of time series. This paper proposes to use the dynamics regulating each appearance feature time series to discriminate among different face emotions. To this purpose, an ensemble of Hankel matrices corresponding to the extracted time series is used for emotion classification within a framework that combines nearest neighbor and a majority vote schema. Experimental results on a publicly available dataset show that the adopted representation is promising and yields state-of-the-art accuracy in emotion classification.
Whilst it is possible to obtain a reasonably accurate estimate of the face region @cite_13 , getting a reliable estimation of facial landmarks is still an open problem despite the remarkable progress described in @cite_12 , @cite_9 . The adoption of appearance features extracted from the detected face region to describe an emotion, as indeed done in @cite_2 , @cite_30 , @cite_11 , @cite_18 , might be a convenient choice. Therefore, in this paper we adopt appearance features to represent a face expression. In contrast to @cite_28 , we do not model landmark trajectories but use an ensemble of Hankel matrices to describe the dynamics of sequences of appearance features computed at multiple spatial scales. We demonstrate that, without an accurate estimation of facial landmarks, our novel representation can achieve state-of-the-art accuracy in emotion recognition.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_28", "@cite_9", "@cite_2", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2065926495", "", "2126437449", "2047508432", "1964762821", "2137401668", "2152826865", "2156503193" ], "abstract": [ "In this paper, we propose a novel Bayesian approach to modelling temporal transitions of facial expressions represented in a manifold, with the aim of dynamical facial expression recognition in image sequences. A generalised expression manifold is derived by embedding image data into a low dimensional subspace using Supervised Locality Preserving Projections. A Bayesian temporal model is formulated to capture the dynamic facial expression transition in the manifold. Our experimental results demonstrate the advantages gained from exploiting explicitly temporal information in expression image sequences resulting in both superior recognition rates and improved robustness against static frame-based recognition methods.", "", "This paper proposes a new approach to model the temporal dynamics of a sequence of facial expressions. To this purpose, a sequence of Face Image Descriptors (FID) is regarded as the output of a Linear Time Invariant (LTI) system. The temporal dynamics of such sequence of descriptors are represented by means of a Hankel matrix. The paper presents different strategies to compute dynamics-based representation of a sequence of FID, and reports classification accuracy values of the proposed representations within different standard classification frameworks. The representations have been validated in two very challenging application domains: emotion recognition and pain detection. 
Experiments on two publicly available benchmarks and comparison with state-of-the-art approaches demonstrate that the dynamics-based FID representation attains competitive performance when off-the-shelf classification tools are adopted.", "We present a unified model for face detection, pose estimation, and landmark estimation in real-world, cluttered images. Our model is based on a mixtures of trees with a shared pool of parts; we model every facial landmark as a part and use global mixtures to capture topological changes due to viewpoint. We show that tree-structured models are surprisingly effective at capturing global elastic deformation, while being easy to optimize unlike dense graph structures. We present extensive results on standard face benchmarks, as well as a new “in the wild” annotated dataset, that suggests our system advances the state-of-the-art, sometimes considerably, for all three tasks. Though our model is modestly trained with hundreds of faces, it compares favorably to commercial systems trained with billions of examples (such as Google Picasa and face.com).", "This paper proposes a novel local feature descriptor, local directional number pattern (LDN), for face analysis, i.e., face and expression recognition. LDN encodes the directional information of the face's textures (i.e., the texture's structure) in a compact way, producing a more discriminative code than current methods. We compute the structure of each micro-pattern with the aid of a compass mask that extracts directional information, and we encode such information using the prominent direction indices (directional numbers) and sign-which allows us to distinguish among similar structural patterns that have different intensity transitions. We divide the face into several regions, and extract the distribution of the LDN features from them. Then, we concatenate these features into a feature vector, and we use it as a face descriptor. 
We perform several experiments in which our descriptor performs consistently under illumination, noise, expression, and time lapse variations. Moreover, we test our descriptor with different masks to analyze its performance in different face analysis tasks.", "This paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a simple and efficient classifier which is built using the AdaBoost learning algorithm (Freund and Schapire, 1995) to select a small number of critical visual features from a very large set of potential features. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems (Sung and Poggio, 1998; , 1998; Schneiderman and Kanade, 2000; , 2000). Implemented on a conventional desktop, face detection proceeds at 15 frames per second.", "We describe a new method of matching statistical models of appearance to images. A set of model parameters control modes of shape and gray-level variation learned from a training set. We construct an efficient iterative matching algorithm by learning the relationship between perturbations in the model parameters and the induced image errors.", "Automated analysis of human affective behavior has attracted increasing attention from researchers in psychology, computer science, linguistics, neuroscience, and related disciplines. 
However, the existing methods typically handle only deliberately displayed and exaggerated expressions of prototypical emotions despite the fact that deliberate behaviour differs in visual appearance, audio profile, and timing from spontaneously occurring behaviour. To address this problem, efforts to develop algorithms that can process naturally occurring human affective behaviour have recently emerged. Moreover, an increasing number of efforts are reported toward multimodal fusion for human affect analysis including audiovisual fusion, linguistic and paralinguistic fusion, and multi-cue visual fusion based on facial expressions, head movements, and body gestures. This paper introduces and surveys these recent advances. We first discuss human emotion perception from a psychological perspective. Next we examine available approaches to solving the problem of machine understanding of human affective behavior, and discuss important issues like the collection and availability of training and test data. We finally outline some of the scientific and engineering challenges to advancing human affect sensing technology." ] }
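The "integral image" mentioned in the face-detection abstract above is concrete enough to sketch: after one pass over the image, any rectangular pixel sum costs only four look-ups, which is what makes Haar-like appearance features cheap to evaluate at many scales. The helper names below are ours, not Viola-Jones's.

```python
import numpy as np

# Sketch: integral image ii[y, x] = sum of all pixels at or above and
# to the left of (y, x), built with two cumulative sums.

def integral_image(img):
    return np.cumsum(np.cumsum(np.asarray(img, dtype=np.int64), axis=0), axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Inclusive rectangle sum via four corner look-ups."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = [[1, 2], [3, 4]]
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 1, 1))  # 10, the sum of the whole image
```

The same constant-time rectangle sum underlies the fast multi-scale appearance features used throughout this line of work.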
1507.03922
913375748
Gelfond and Zhang recently proposed a new stable model semantics based on Vicious Circle Principle in order to improve the interpretation of logic programs with aggregates. The paper focuses on this proposal, and analyzes the complexity of both coherence testing and cautious reasoning under the new semantics. Some surprising results highlight similarities and differences versus mainstream stable model semantics for aggregates. Moreover, the paper reports on the design of compilation techniques for implementing the new semantics on top of existing ASP solvers, which eventually lead to realize a prototype system that allows for experimenting with Gelfond-Zhang's aggregates. To appear in Theory and Practice of Logic Programming (TPLP), Proceedings of ICLP 2015.
The challenge of extending stable model semantics with aggregate constructs has been investigated quite intensively in the previous decade. Among the many proposals, F-stable model semantics @cite_10 @cite_6 is of particular interest as many ASP solvers are currently based on this semantics @cite_4 @cite_28 . Actually, the definition provided in is slightly different from those in @cite_10 @cite_6 . In particular, the language considered in @cite_10 has a broader syntax allowing for arbitrary nesting of propositional formulas. The language considered in @cite_6 , instead, does not allow explicitly the use of double negation, which however can be simulated by means of auxiliary atoms. For example, in @cite_6 a rule @math must be modeled by using a fresh atom @math and the following subprogram: @math @math . On the other hand, negated aggregates are permitted in @cite_6 , while they are forbidden in this paper. Actually, programs with negated aggregates are those for which @cite_10 and @cite_6 disagree. As a final remark, the reduct of @cite_6 does not remove negated literals from bodies, which however are necessarily true in all counter-models because double negation is not allowed in the syntax considered by @cite_6 .
{ "cite_N": [ "@cite_28", "@cite_4", "@cite_10", "@cite_6" ], "mid": [ "2106500031", "2049791401", "2029702617", "2064864283" ], "abstract": [ "Disjunctive logic programming (DLP) is a very expressive formalism. It allows for expressing every property of finite structures that is decidable in the complexity class ΣP2(=NPNP). Despite this high expressiveness, there are some simple properties, often arising in real-world applications, which cannot be encoded in a simple and natural manner. Especially properties that require the use of arithmetic operators (like sum, times, or count) on a set or multiset of elements, which satisfy some conditions, cannot be naturally expressed in classic DLP. To overcome this deficiency, we extend DLP by aggregate functions in a conservative way. In particular, we avoid the introduction of constructs with disputed semantics, by requiring aggregates to be stratified. We formally define the semantics of the extended language (called ), and illustrate how it can be profitably used for representing knowledge. Furthermore, we analyze the computational complexity of , showing that the addition of aggregates does not bring a higher cost in that respect. Finally, we provide an implementation of in DLVa state-of-the-art DLP systemand report on experiments which confirm the usefulness of the proposed extension also for the efficiency of computation.", "We introduce an approach to computing answer sets of logic programs, based on concepts successfully applied in Satisfiability (SAT) checking. The idea is to view inferences in Answer Set Programming (ASP) as unit propagation on nogoods. This provides us with a uniform constraint-based framework capturing diverse inferences encountered in ASP solving. Moreover, our approach allows us to apply advanced solving techniques from the area of SAT. As a result, we present the first full-fledged algorithmic framework for native conflict-driven ASP solving. 
Our approach is implemented in the ASP solver clasp that has demonstrated its competitiveness and versatility by winning first places at various solver contests.", "Answer set programming (ASP) is a logic programming paradigm that can be used to solve complex combinatorial search problems. Aggregates are an ASP construct that plays an important role in many applications. Defining a satisfactory semantics of aggregates turned out to be a difficult problem, and in this article we propose a new approach, based on an analogy between aggregates and propositional connectives. First we extend the definition of an answer set stable model to cover arbitrary propositional theories; then we define aggregates on top of them both as primitive constructs and as abbreviations for formulas. Our definition of an aggregate combines expressiveness and simplicity, and it inherits many theorems about programs with nested expressions, such as theorems about strong equivalence and splitting.", "The addition of aggregates has been one of the most relevant enhancements to the language of answer set programming (ASP). They strengthen the modelling power of ASP in terms of natural and concise problem representations. Previous semantic definitions typically agree in the case of non-recursive aggregates, but the picture is less clear for aggregates involved in recursion. Some proposals explicitly avoid recursive aggregates, most others differ, and many of them do not satisfy desirable criteria, such as minimality or coincidence with answer sets in the aggregate-free case. In this paper we define a semantics for programs with arbitrary aggregates (including monotone, antimonotone, and nonmonotone aggregates) in the full ASP language allowing also for disjunction in the head (disjunctive logic programming - DLP). 
This semantics is a genuine generalization of the answer set semantics for DLP, it is defined by a natural variant of the Gelfond-Lifschitz transformation, and treats aggregate and non-aggregate literals in a uniform way. This novel transformation is interesting per se also in the aggregate-free case, since it is simpler than the original transformation and does not need to differentiate between positive and negative literals. We prove that our semantics guarantees the minimality (and therefore the incomparability) of answer sets, and we demonstrate that it coincides with the standard answer set semantics on aggregate-free programs. Moreover, we carry out an in-depth study of the computational complexity of the language. The analysis pays particular attention to the impact of syntactical restrictions on programs in the form of limited use of aggregates, disjunction, and negation. While the addition of aggregates does not affect the complexity of the full DLP language, it turns out that their presence does increase the complexity of normal (i.e., non-disjunctive) ASP programs up to the second level of the polynomial hierarchy. However, we show that there are large classes of aggregates the addition of which does not cause any complexity gap even for normal programs, including the fragment allowing for arbitrary monotone, arbitrary antimonotone, and stratified (i.e., non-recursive) nonmonotone aggregates. The analysis provides some useful indications on the possibility to implement aggregates in existing reasoning engines." ] }
1507.03922
913375748
Gelfond and Zhang recently proposed a new stable model semantics based on Vicious Circle Principle in order to improve the interpretation of logic programs with aggregates. The paper focuses on this proposal, and analyzes the complexity of both coherence testing and cautious reasoning under the new semantics. Some surprising results highlight similarities and differences versus mainstream stable model semantics for aggregates. Moreover, the paper reports on the design of compilation techniques for implementing the new semantics on top of existing ASP solvers, which eventually lead to realize a prototype system that allows for experimenting with Gelfond-Zhang's aggregates. To appear in Theory and Practice of Logic Programming (TPLP), Proceedings of ICLP 2015.
Other relevant stable model semantics for logic programs with aggregates are reported in @cite_3 @cite_9 for disjunction-free programs, and were recently extended to the disjunctive case in @cite_24 . In these semantics the stability check is not given in terms of minimality of the model for the program reduct, but is obtained by means of a fixpoint operator similar to the immediate consequence operator, and the following relation holds in general: stable models of @cite_24 are a selection of F-stable models, and the two semantics coincide up to ASP( @math ,M,C), which is also the complexity boundary between the first and second level of the polynomial hierarchy for F-stable model semantics @cite_17 . Finally, a more recent proposal is G-stable model semantics @cite_8 , whose relation with other semantics has been clarified by @cite_20 in the disjunction-free case: G-stable models are F-stable models, but the converse is not always true.
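The fixpoint-style stability check mentioned above can be illustrated on a small instance. The sketch below makes the simplifying assumption that rule bodies are monotone predicates over the current interpretation (here, plain Python callables, including a count aggregate); it iterates an immediate-consequence-style operator to its least fixpoint. The program and atom names are hypothetical.

```python
# Hypothetical program:  a.   b.   c :- #count{a, b} >= 2.
# Bodies are monotone predicates over the current interpretation (a set of atoms).
RULES = [
    ("a", lambda m: True),
    ("b", lambda m: True),
    ("c", lambda m: len({"a", "b"} & m) >= 2),  # monotone count aggregate
]

def fixpoint(rules):
    """Iterate the immediate-consequence-style operator up to its least
    fixpoint; monotonicity of the bodies guarantees convergence."""
    model = set()
    while True:
        derived = {head for head, body in rules if body(model)}
        if derived <= model:
            return model
        model |= derived

print(fixpoint(RULES))  # -> {'a', 'b', 'c'}
```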
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_3", "@cite_24", "@cite_20", "@cite_17" ], "mid": [ "2059245949", "2069156007", "", "2050890691", "1190126325", "50549667" ], "abstract": [ "The paper presents a knowledge representation language @math which extends ASP with aggregates. The goal is to have a language based on simple syntax and clear intuitive and mathematical semantics. We give some properties of @math , an algorithm for computing its answer sets, and comparison with other approaches.", "This technical note describes a monotone and continuous fixpoint operator to compute the answer sets of programs with aggregates. The fixpoint operator relies on the notion of aggregate solution. Under certain conditions, this operator behaves identically to the three-valued immediate consequence operator PaggrP for aggregate programs, independently proposed in Pelov (2004) and (2004). This operator allows us to closely tie the computational complexity of the answer set checking and answer sets existence problems to the cost of checking a solution of the aggregates in the program. Finally, we relate the semantics described by the operator to other proposals for logic programming with aggregates.", "", "The answer set semantics presented by [27] has been widely used to define so called FLP answer sets for different types of logic programs. However, it was recently observed that when being extended from normal to more general classes of logic programs, this approach may produce answer sets with circular justifications that are caused by self-supporting loops. The main reason for this behavior is that the FLP answer set semantics is not fully constructive by a bottom up construction of answer sets. 
In this paper, we overcome this problem by enhancing the FLP answer set semantics with a level mapping formalism such that every answer set I can be built by fixpoint iteration of a one-step provability operator (more precisely, an extended van Emden-Kowalski operator for the FLP reduct fΠI). This is inspired by the fact that under the standard answer set semantics, each answer set I of a normal logic program Π is obtainable by fixpoint iteration of the standard van Emden-Kowalski one-step provability operator for the Gelfond-Lifschitz reduct ΠI, which induces a level mapping. The enhanced FLP answer sets, which we call well-justified FLP answer sets, are thanks to the level mapping free of circular justifications. As a general framework, the well-justified FLP answer set semantics applies to logic programs with first-order formulas, logic programs with aggregates, description logic programs, hex-programs etc., provided that the rule satisfaction is properly extended to such general logic programs. We study in depth the computational complexity of FLP and well-justified FLP answer sets for general classes of logic programs. Our results show that the level mapping does not increase the worst-case complexity of FLP answer sets. Furthermore, we describe an implementation of the well-justified FLP answer set semantics, and report about an experimental evaluation, which indicates a potential for performance improvements by the level mapping in practice.", "This paper relates two extensively studied formalisms: abstract dialectical frameworks and logic programs with generalized atoms or similar constructs. While the syntactic similarity is easy to see, also a strong relation between various stable model semantics proposed for these formalisms is shown by means of a unifying framework in which these semantics are restated in terms of program reducts and an immediate consequence operator, where program reducts have only minimal differences. 
This approach has advantages for both formalisms, as for example implemented systems for one formalism are usable for the other, and properties such as computational complexity do not have to be rediscovered. As a first, concrete result of this kind, one stable model semantics based on program reducts and subset-minimality that reached a reasonable consensus for logic programs with generalized atoms provides a novel, alternative semantics for abstract dialectical frameworks.", "In recent years, Answer Set Programming ASP, logic programming under the stable model or answer set semantics, has seen several extensions by generalizing the notion of an atom in these programs: be it aggregate atoms, HEX atoms, generalized quantifiers, or abstract constraints, the idea is to have more complicated satisfaction patterns in the lattice of Herbrand interpretations than traditional, simple atoms. In this paper we refer to any of these constructs as generalized atoms. It is known that programs with generalized atoms that have monotonic or antimonotonic satisfaction patterns do not increase complexity with respect to programs with simple atoms if satisfaction of the generalized atoms can be decided in polynomial time under most semantics. It is also known that generalized atoms that are nonmonotonic being neither monotonic nor antimonotonic can, but need not, increase the complexity by one level in the polynomial hierarchy if non-disjunctive programs under the FLP semantics are considered. In this paper we provide the precise boundary of this complexity gap: programs with convex generalized atom never increase complexity, while allowing a single non-convex generalized atom under reasonable conditions always does. We also discuss several implications of this result in practice." ] }
A detailed complexity analysis for F-stable models is reported in @cite_6 and summarized in Table . The complexity of reasoning under the stable models of @cite_3 @cite_9 , instead, is analyzed in @cite_14 , where in particular @math -completeness of coherence testing is proved for disjunction-free programs with aggregates. Concerning G-stable models, the general case was studied in @cite_8 , and a more detailed analysis is provided by this paper. In particular, for disjunction-free programs, the main reasoning tasks remain within the first level of the polynomial hierarchy when G-stable models are used. On the other hand, coherence testing jumps from @math to @math when F-stable models are replaced by G-stable models in programs with monotone aggregates only. Indeed, in contrast to previous semantics, monotone aggregates suffice to simulate integrity constraints and negation when G-stable models are used.
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_9", "@cite_3", "@cite_6" ], "mid": [ "342706626", "2059245949", "2069156007", "", "2064864283" ], "abstract": [ "Aggregates are functions that take sets as arguments. Examples are the function that maps a set to the number of its elements or the function which maps a set to its minimal element. Aggregates are frequently used in relational databases and have many applications in combinatorial search problems and knowledge representation. Aggregates are of particular importance for several extensions of logic programming which are used for declarative programming like Answer Set Programming, Abductive Logic Programming, and the logic of inductive definitions (ID-Logic). Aggregate atoms not only allow a broader class of problems to be represented in a natural way but also allow a more compact representation of problems which often leads to faster solving times. Extensions of specific semantics of logic programs with, in many cases, specific aggregate relations have been proposed before. The main contributions of this thesis are: (i) we extend all major semantics of logic programs: the least model semantics of definite logic programs, the standard model semantics of stratified programs, the Clark completion semantics, the well-founded semantics, the stable models semantics, and the three-valued stable semantics; (ii) our framework admits arbitrary aggregate relations in the bodies of rules. We follow a denotational approach in which a semantics is defined as a (set of) fixpoint(s) of an operator associated with a program. The main tool of this work is Approximation Theory. This is an algebraic theory which defines different types of fixpoints of an approximating operator associated with a logic program. All major semantics of a logic program correspond to specific types of fixpoints of an approximating operator introduced by Fitting. 
We study different approximating operators for aggregate programs and investigate the precision and complexity of the semantics generated by them. We study in detail one specific operator which extends the Fitting operator and whose semantics extends the three-valued stable semantics of logic programs without aggregates. We look at algorithms, complexity, transformations of aggregate atoms and programs, and an implementation in XSB Prolog.", "The paper presents a knowledge representation language @math which extends ASP with aggregates. The goal is to have a language based on simple syntax and clear intuitive and mathematical semantics. We give some properties of @math , an algorithm for computing its answer sets, and comparison with other approaches.", "This technical note describes a monotone and continuous fixpoint operator to compute the answer sets of programs with aggregates. The fixpoint operator relies on the notion of aggregate solution. Under certain conditions, this operator behaves identically to the three-valued immediate consequence operator PaggrP for aggregate programs, independently proposed in Pelov (2004) and (2004). This operator allows us to closely tie the computational complexity of the answer set checking and answer sets existence problems to the cost of checking a solution of the aggregates in the program. Finally, we relate the semantics described by the operator to other proposals for logic programming with aggregates.", "", "The addition of aggregates has been one of the most relevant enhancements to the language of answer set programming (ASP). They strengthen the modelling power of ASP in terms of natural and concise problem representations. Previous semantic definitions typically agree in the case of non-recursive aggregates, but the picture is less clear for aggregates involved in recursion. 
Some proposals explicitly avoid recursive aggregates, most others differ, and many of them do not satisfy desirable criteria, such as minimality or coincidence with answer sets in the aggregate-free case. In this paper we define a semantics for programs with arbitrary aggregates (including monotone, antimonotone, and nonmonotone aggregates) in the full ASP language allowing also for disjunction in the head (disjunctive logic programming - DLP). This semantics is a genuine generalization of the answer set semantics for DLP, it is defined by a natural variant of the Gelfond-Lifschitz transformation, and treats aggregate and non-aggregate literals in a uniform way. This novel transformation is interesting per se also in the aggregate-free case, since it is simpler than the original transformation and does not need to differentiate between positive and negative literals. We prove that our semantics guarantees the minimality (and therefore the incomparability) of answer sets, and we demonstrate that it coincides with the standard answer set semantics on aggregate-free programs. Moreover, we carry out an in-depth study of the computational complexity of the language. The analysis pays particular attention to the impact of syntactical restrictions on programs in the form of limited use of aggregates, disjunction, and negation. While the addition of aggregates does not affect the complexity of the full DLP language, it turns out that their presence does increase the complexity of normal (i.e., non-disjunctive) ASP programs up to the second level of the polynomial hierarchy. However, we show that there are large classes of aggregates the addition of which does not cause any complexity gap even for normal programs, including the fragment allowing for arbitrary monotone, arbitrary antimonotone, and stratified (i.e., non-recursive) nonmonotone aggregates. The analysis provides some useful indications on the possibility to implement aggregates in existing reasoning engines." ] }
Techniques to rewrite logic programs with aggregates into equivalent aggregate-free programs have also been investigated in the literature. For example, a rewriting into aggregate-free programs is presented by @cite_10 for F-stable model semantics. It must be noted, however, that the rewriting of @cite_10 produces nested expressions in general, which current mainstream ASP systems cannot process directly, so that additional translations such as those by @cite_23 are required. Other relevant rewriting techniques were proposed in @cite_12 @cite_25 , and also proved quite efficient in practice. However, these rewritings preserve F-stable models only in the stratified case, or if recursion is limited to convex aggregates.
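A naive instance of such a rewriting can be sketched as follows: a stratified count aggregate of the form #count{...} >= k is expanded into one aggregate-free rule per k-subset of the aggregated atoms. This is purely illustrative and exponentially larger than the compact rewritings cited above; the head and atom names are hypothetical.

```python
from itertools import combinations

def rewrite_count_geq(head, atoms, k):
    """Expand 'head :- #count{atoms} >= k' into aggregate-free normal rules,
    one per k-subset of the (stratified, non-recursive) aggregate set.
    A naive expansion for illustration only."""
    return [f"{head} :- {', '.join(subset)}."
            for subset in combinations(sorted(atoms), k)]

for rule in rewrite_count_geq("ok", ["a", "b", "c"], 2):
    print(rule)
# ok :- a, b.
# ok :- a, c.
# ok :- b, c.
```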
{ "cite_N": [ "@cite_10", "@cite_12", "@cite_23", "@cite_25" ], "mid": [ "2029702617", "74208780", "1488326671", "" ], "abstract": [ "Answer set programming (ASP) is a logic programming paradigm that can be used to solve complex combinatorial search problems. Aggregates are an ASP construct that plays an important role in many applications. Defining a satisfactory semantics of aggregates turned out to be a difficult problem, and in this article we propose a new approach, based on an analogy between aggregates and propositional connectives. First we extend the definition of an answer set stable model to cover arbitrary propositional theories; then we define aggregates on top of them both as primitive constructs and as abbreviations for formulas. Our definition of an aggregate combines expressiveness and simplicity, and it inherits many theorems about programs with nested expressions, such as theorems about strong equivalence and splitting.", "Answer-set programs become more expressive if extended by cardinality rules. Certain implementation techniques, however, presume the translation of such rules back into normal rules. This has been previously realized using a BDD-based transformation which may produce a quadratic number of rules in the worst case. In this paper, we present two further constructions which are based on Boolean circuits for merging and sorting and which have been considered, e.g., in the context of the propositional satisfiability SAT problem and its extensions. Such circuits can be used to express cardinality constraints in a more compact way. Thus, in order to normalize cardinality rules, we first develop an ASP encoding of a sorting circuit, on top of which the second translation, one encoding a selection circuit, is devised. Because sorting is more general than cardinality checking, we also present ways to prune the resulting sorting and selection programs. 
The experimental part illustrates the compactness of the new normalizations and points out cases where computational performance is improved.", "We present an implementation of the general language of stable models proposed by Ferraris, Lee and Lifschitz. Under certain conditions, system f2lp turns a first-order theory under the stable model semantics into an answer set program, so that existing answer set solvers can be used for computing the general language. Quantifiers are first eliminated and then the resulting quantifier-free formulas are turned into rules. Based on the relationship between stable models and circumscription, f2lp can also serve as a reasoning engine for general circumscriptive theories. We illustrate how to use f2lp to compute the circumscriptive event calculus.", "" ] }
Aggregate functions are also semantically similar to DL atoms @cite_15 and HEX atoms @cite_2 , extensions of ASP for interacting with external knowledge bases, possibly expressed in different languages.
{ "cite_N": [ "@cite_15", "@cite_2" ], "mid": [ "2100983017", "2153361264" ], "abstract": [ "We propose a combination of logic programming under the answer set semantics with the description logics SHIF(D) and SHOIN(D), which underly the Web ontology languages OWL Lite and OWL DL, respectively. To this end, we introduce description logic programs (or dl-programs), which consist of a description logic knowledge base L and a finite set P of description logic rules (or dl-rules). Such rules are similar to usual rules in nonmonotonic logic programs, but they may also contain queries to L, possibly under default negation, in their bodies. They allow for building rules on top of ontologies but also, to a limited extent, building ontologies on top of rules. We define a suite of semantics for various classes of dl-programs, which conservatively extend the standard semantics of the respective classes and coincide with it in absence of a description logic knowledge base. More concretely, we generalize positive, stratified, and arbitrary normal logic programs to dl-programs, and define a Herbrand model semantics for them. We show that they have similar properties as ordinary logic programs, and also provide fixpoint characterizations in terms of (iterated) consequence operators. For arbitrary dl-programs, we define answer sets by generalizing Gelfond and Lifschitz's notion of a transform, leading to a strong and a weak answer set semantics, which are based on reductions to the semantics of positive dl-programs and ordinary positive logic programs, respectively. We also show how the weak answer sets can be computed utilizing answer sets of ordinary normal logic programs. Furthermore, we show how some advanced reasoning tasks for the Semantic Web, including different forms of closed-world reasoning and default reasoning, as well as DL-safe rules, can be realized on top of dl-programs. 
Finally, we give a precise picture of the computational complexity of dl-programs, and we describe efficient algorithms and a prototype implementation of dl-programs which is available on the Web.", "HEX-programs extend logic programs under the answer set semantics with external computations through external atoms. As reasoning from ground Horn programs with nonmonotonic external atoms of polynomial complexity is already on the second level of the polynomial hierarchy, minimality checking of answer set candidates needs special attention. To this end, we present an approach based on unfounded sets as a generalization of related techniques for ASP programs. The unfounded set detection is expressed as a propositional SAT problem, for which we provide two different encodings and optimizations to them. We then integrate our approach into a previously developed evaluation framework for HEX-programs, which is enriched by additional learning techniques that aim at avoiding the reconstruction of the same or related unfounded sets. Furthermore, we provide a syntactic criterion that allows one to skip the minimality check in many cases. An experimental evaluation shows that the new approach significantly decreases runtime." ] }
1507.04308
2159201125
Modularity is widely used to effectively measure the strength of the community structure found by community detection algorithms. However, modularity maximization suffers from two opposite yet coexisting problems: in some cases, it tends to favor small communities over large ones while in others, large communities over small ones. The latter tendency is known in the literature as the resolution limit problem. To address them, we propose to modify modularity by subtracting from it the fraction of edges connecting nodes of different communities and by including community density into modularity. We refer to the modified metric as Modularity Density and we demonstrate that it indeed resolves both problems mentioned above. We describe the motivation for introducing this metric by using intuitively clear and simple examples. We also prove that this new metric solves the resolution limit problem. Finally, we discuss the results of applying this metric, modularity, and several other popular community quality metrics to two real dynamic networks. The results imply that Modularity Density is consistent with all the community quality measurements but not modularity, which suggests that Modularity Density is an improved measurement of the community quality compared to modularity.
Community detection in complex networks has received a considerable amount of attention in recent years. Numerous techniques have been developed for both efficient and effective community detection, including Modularity Optimization @cite_26 @cite_8 @cite_35 @cite_3 @cite_22 @cite_11 @cite_25 , Clique Percolation @cite_6 @cite_13 , Local Expansion @cite_28 @cite_5 @cite_0 , Fuzzy Clustering @cite_32 @cite_17 , Link Partitioning @cite_20 , and Label Propagation @cite_19 @cite_9 @cite_21 . The above algorithms are designed to detect communities on static networks. However, networks such as the Internet and online social networks are usually dynamic, with changes arriving as a stream. Thus, a large number of algorithms have been proposed to cope with community detection on dynamically evolving networks, such as LabelRankT @cite_24 and Estrangement @cite_2 . LabelRankT @cite_24 detects communities in large-scale dynamic networks through stabilized label propagation. Estrangement @cite_2 detects temporal communities by maximizing modularity in a snapshot, subject to a constraint on the estrangement from the partition in the previous snapshot.
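The quantity targeted by the modularity-optimization methods listed above can be computed directly from Newman's definition. Below is a minimal sketch for undirected graphs without self-loops, evaluated on a toy graph of two triangles joined by a bridge edge; the node labels and partition are made up for illustration.

```python
def modularity(edges, community_of):
    """Newman's modularity Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)
    for an undirected graph without self-loops, given as an edge list."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    q = 0.0
    nodes = list(degree)
    for i in nodes:
        for j in nodes:
            if community_of[i] != community_of[j]:
                continue  # delta(c_i, c_j) = 0
            a_ij = sum(1 for u, v in edges if {u, v} == {i, j})
            q += a_ij - degree[i] * degree[j] / (2 * m)
    return q / (2 * m)

# Two triangles joined by a single bridge edge; the natural split scores high.
edges = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)]
part = {1: "A", 2: "A", 3: "A", 4: "B", 5: "B", 6: "B"}
print(round(modularity(edges, part), 3))  # -> 0.357
```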
{ "cite_N": [ "@cite_35", "@cite_22", "@cite_3", "@cite_2", "@cite_5", "@cite_20", "@cite_8", "@cite_21", "@cite_17", "@cite_26", "@cite_28", "@cite_32", "@cite_6", "@cite_19", "@cite_25", "@cite_9", "@cite_0", "@cite_24", "@cite_13", "@cite_11" ], "mid": [ "2047940964", "2171608649", "2131681506", "2164890544", "2118608338", "2110620844", "2151936673", "1774824195", "2050601254", "2089458547", "1518514500", "2044719661", "2164928285", "2132202037", "2017987256", "2118059508", "2091202730", "2157797752", "2146659008", "1985625141" ], "abstract": [ "The discovery and analysis of community structure in networks is a topic of considerable recent interest within the physics community, but most methods proposed so far are unsuitable for very large networks because of their computational cost. Here we present a hierarchical agglomeration algorithm for detecting community structure which is faster than many competing algorithms: its running time on a network with n vertices and m edges is O(m d log n) where d is the depth of the dendrogram describing the community structure. Many real-world networks are sparse and hierarchical, with m n and d log n, in which case our algorithm runs in essentially linear time, O(n log^2 n). As an example of the application of this algorithm we use it to analyze a network of items for sale on the web-site of a large online retailer, items in the network being linked if they are frequently purchased by the same buyer. The network has more than 400,000 vertices and 2 million edges. We show that our algorithm can extract meaningful communities from this network, revealing large-scale patterns present in the purchasing habits of customers.", "We formulate a spectral graph-partitioning algorithm that uses the two leading eigenvectors of the matrix corresponding to a selected quality function to split a network into three communities in a single step. In so doing, we extend the recursive bipartitioning methods developed by Newman [M. E. J. 
Newman, Proc. Natl. Acad. Sci. U.S.A. 103, 8577 (2006); Phys. Rev. E 74, 036104 (2006)] to allow one to consider the best available two-way and three-way divisions at each recursive step. We illustrate the method using simple \"bucket brigade\" examples and then apply the algorithm to examine the community structures of the coauthorship graph of network scientists and of U. S. Congressional networks inferred from roll call voting similarities.", "We propose a simple method to extract the community structure of large networks. Our method is a heuristic method that is based on modularity optimization. It is shown to outperform all other known community detection methods in terms of computation time. Moreover, the quality of the communities detected is very good, as measured by the so-called modularity. This is shown first by identifying language communities in a Belgian mobile phone network of 2 million customers and by analysing a web graph of 118 million nodes and more than one billion links. The accuracy of our algorithm is also verified on ad hoc modular networks.", "Temporal communities are the result of a consistent partitioning of nodes across multiple snapshots of an evolving network, and they provide insights into how dense clusters in a network emerge, combine, split and decay over time. To reliably detect temporal communities we need to not only find a good community partition in a given snapshot but also ensure that it bears some similarity to the partition(s) found in the previous snapshot(s), a particularly difficult task given the extreme sensitivity of community structure yielded by current methods to changes in the network structure. Here, motivated by the inertia of inter-node relationships, we present a new measure of partition distance called estrangement, and show that constraining estrangement enables one to find meaningful temporal communities at various degrees of temporal smoothness in diverse real-world datasets. 
Estrangement confinement thus provides a principled approach to uncovering temporal communities in evolving networks.", "Many networks in nature, society and technology are characterized by a mesoscopic level of organization, with groups of nodes forming tightly connected units, called communities or modules, that are only weakly linked to each other. Uncovering this community structure is one of the most important problems in the field of complex networks. Networks often show a hierarchical organization, with communities embedded within other communities; moreover, nodes can be shared between different communities. Here, we present the first algorithm that finds both overlapping communities and the hierarchical structure. The method is based on the local optimization of a fitness function. Community structure is revealed by peaks in the fitness histogram. The resolution can be tuned by a parameter enabling different hierarchical levels of organization to be investigated. Tests on real and artificial networks give excellent results.", "Network theory has become pervasive in all sectors of biology, from biochemical signalling to human societies, but identification of relevant functional communities has been impaired by many nodes belonging to several overlapping groups at once, and by hierarchical structures. These authors offer a radically different viewpoint, focusing on links rather than nodes, which allows them to demonstrate that overlapping communities and network hierarchies are two faces of the same issue.", "Many networks of interest in the sciences, including social networks, computer networks, and metabolic and regulatory networks, are found to divide naturally into communities or modules. The problem of detecting and characterizing this community structure is one of the outstanding issues in the study of networked systems. 
One highly effective approach is the optimization of the quality function known as “modularity” over the possible divisions of a network. Here I show that the modularity can be expressed in terms of the eigenvectors of a characteristic matrix for the network, which I call the modularity matrix, and that this expression leads to a spectral algorithm for community detection that returns results of demonstrably higher quality than competing methods in shorter running times. I illustrate the method with applications to several published network data sets.", "Membership diversity is a characteristic aspect of social networks in which a person may belong to more than one social group. For this reason, discovering overlapping structures is necessary for realistic social analysis. In this paper, we present a fast algorithm, called SLPA, for overlapping community detection in large-scale networks. SLPA spreads labels according to dynamic interaction rules. It can be applied to both unipartite and bipartite networks. It is also able to uncover overlapping nested hierarchy . The time complexity of SLPA scales linearly with the number of edges in the network. Experiments in both synthetic and real-world networks show that SLPA has an excellent performance in identifying both node and community level overlapping structures.", "Identifying overlapping communities in networks is a challenging task. In this work we present a probabilistic approach to community detection that utilizes a Bayesian non-negative matrix factorization model to extract overlapping modules from a network. The scheme has the advantage of soft-partitioning solutions, assignment of node participation scores to modules, and an intuitive foundation. 
We present the performance of the method against a variety of benchmark problems and compare and contrast it to several other algorithms for community detection.", "Many networks display community structure---groups of vertices within which connections are dense but between which they are sparser---and sensitive computer algorithms have in recent years been developed for detecting this structure. These algorithms, however, are computationally demanding, which limits their application to small networks. Here we describe an algorithm which gives excellent results when tested on both computer-generated and real-world networks and is much faster, typically thousands of times faster, than previous algorithms. We give several example applications, including one to a collaboration network of more than 50 000 physicists.", "In this paper, we present an efficient algorithm for finding overlapping communities in social networks. Our algorithm does not rely on the contents of the messages and uses the communication graph only. The knowledge of the structure of the communities is important for the analysis of social behavior and evolution of the society as a whole, as well as its individual members. This knowledge can be helpful in discovering groups of actors that hide their communications, possibly for malicious reasons. Although the idea of using communication graphs for identifying clusters of actors is not new, most of the traditional approaches, with the exception of the work by , produce disjoint clusters of actors, de facto postulating that an actor is allowed to belong to at most one cluster. Our algorithm is significantly more efficient than the previous algorithm by ; it also produces clusters of a comparable or better quality.", "Identification of (overlapping) communities clusters in a complex network is a general problem in data mining of network data sets. 
In this paper, we devise a novel algorithm to identify overlapping communities in complex networks by the combination of a new modularity function based on generalizing NG's Q function, an approximation mapping of network nodes into Euclidean space and fuzzy c-means clustering. Experimental results indicate that the new algorithm is efficient at detecting both good clusterings and the appropriate number of clusters.", "A network is a network — be it between words (those associated with ‘bright’ in this case) or protein structures. Many complex systems in nature and society can be described in terms of networks capturing the intricate web of connections among the units they are made of1,2,3,4. A key question is how to interpret the global organization of such networks as the coexistence of their structural subunits (communities) associated with more highly interconnected parts. Identifying these a priori unknown building blocks (such as functionally related proteins5,6, industrial sectors7 and groups of people8,9) is crucial to the understanding of the structural and functional properties of networks. The existing deterministic methods used for large networks find separated communities, whereas most of the actual networks are made of highly overlapping cohesive groups of nodes. Here we introduce an approach to analysing the main statistical features of the interwoven sets of overlapping communities that makes a step towards uncovering the modular structure of complex systems. After defining a set of new characteristic quantities for the statistics of communities, we apply an efficient technique for exploring overlapping communities on a large scale. We find that overlaps are significant, and the distributions we introduce reveal universal features of networks. 
Our studies of collaboration, word-association and protein interaction graphs show that the web of communities has non-trivial correlations and specific scaling properties.", "Community detection and analysis is an important methodology for understanding the organization of various real-world networks and has applications in problems as diverse as consensus formation in social communities or the identification of functional modules in biochemical networks. Currently used algorithms that identify the community structures in large-scale real-world networks require a priori information such as the number and sizes of communities or are computationally expensive. In this paper we investigate a simple label propagation algorithm that uses the network structure alone as its guide and requires neither optimization of a predefined objective function nor prior information about the communities. In our algorithm every node is initialized with a unique label and at every step each node adopts the label that most of its neighbors currently have. In this iterative process densely connected groups of nodes form a consensus on a unique label to form communities. We validate the algorithm by applying it to networks whose community structures are known. We also demonstrate that the algorithm takes an almost linear time and hence it is computationally less expensive than what was possible so far.", "High-throughput techniques are leading to an explosive growth in the size of biological databases and creating the opportunity to revolutionize our understanding of life and disease. Interpretation of these data remains, however, a major scientific challenge. Here, we propose a methodology that enables us to extract and display information contained in complex networks1,2,3. Specifically, we demonstrate that we can find functional modules4,5 in complex networks, and classify nodes into universal roles according to their pattern of intra- and inter-module connections. 
The method thus yields a ‘cartographic representation’ of complex networks. Metabolic networks6,7,8 are among the most challenging biological networks and, arguably, the ones with most potential for immediate applicability9. We use our method to analyse the metabolic networks of twelve organisms from three different superkingdoms. We find that, typically, 80 of the nodes are only connected to other nodes within their respective modules, and that nodes with different roles are affected by different evolutionary constraints and pressures. Remarkably, we find that metabolites that participate in only a few reactions but that connect different modules are more conserved than hubs whose links are mostly within a single module.", "Studies of community structure and evolution in large social networks require a fast and accurate algorithm for community detection. As the size of analyzed communities grows, complexity of the community detection algorithm needs to be kept close to linear. The Label Propagation Algorithm (LPA) has the benefits of nearly-linear running time and easy implementation, thus it forms a good basis for efficient community detection methods. In this paper, we propose new update rule and label propagation criterion in LPA to improve both its computational efficiency and the quality of communities that it detects. The speed is optimized by avoiding unnecessary updates performed by the original algorithm. This change reduces significantly (by order of magnitude for large networks) the number of iterations that the algorithm executes. We also evaluate our generalization of the LPA update rule that takes into account, with varying strength, connections to the neighborhood of a node considering a new label. Experiments on computer generated networks and a wide range of social networks show that our new rule improves the quality of the detected communities compared to those found by the original LPA. 
The benefit of considering positive neighborhood strength is pronounced especially on real-world networks containing sufficiently large fraction of nodes with high clustering coefficient.", "Clustering and community structure is crucial for many network systems and the related dynamic processes. It has been shown that communities are usually overlapping and hierarchical. However, previous methods investigate these two properties of community structure separately. This paper proposes an algorithm (EAGLE) to detect both the overlapping and hierarchical properties of complex community structure together. This algorithm deals with the set of maximal cliques and adopts an agglomerative framework. The quality function of modularity is extended to evaluate the goodness of a cover. The examples of application to real world networks give excellent results.", "An increasingly important challenge in network analysis is efficient detection and tracking of communities in dynamic networks for which changes arrive as a stream. There is a need for algorithms that can incrementally update and monitor communities whose evolution generates huge real-time data streams, such as the Internet or on-line social networks. In this paper, we propose LabelRankT, an on-line distributed algorithm for detection of communities in large-scale dynamic networks through stabilized label propagation. Results of tests on real-world networks demonstrate that LabelRankT has much lower computational costs than other algorithms. It also improves the quality of the detected communities compared to dynamic detection methods and matches the quality achieved by static detection approaches. 
Unlike most of other algorithms which apply only to binary networks, LabelRankT works on weighted and directed networks, which provides a flexible and promising solution for real-world applications.", "The inclusion of link weights into the analysis of network properties allows a deeper insight into the (often overlapping) modular structure of real- world webs. We introduce a clustering algorithm clique percolation method with weights (CPMw) for weighted networks based on the concept of percolating k-cliques with high enough intensity. The algorithm allows overlaps between the modules. First, we give detailed analytical and numerical results about the critical point of weighted k-clique percolation on (weighted) Erdý os-Renyi graphs. Then, for a scientist collaboration web and a stock correlation graph we compute three-link weight correlations and with the CPMw the weighted modules. After reshuffling link weights in both networks and computing the same quantities for the randomized control graphs as well, we show that groups of three or more strong links prefer to cluster together in both original graphs.", "The description of the structure of complex networks has been one of the focus of attention of the physicist’s community in the recent years. The levels of description range from the microscopic (degree, clustering coefficient, centrality measures, etc., of individual nodes) to the macroscopic description in terms of statistical properties of the whole network (degree distribution, total clustering coefficient, degree-degree correlations, etc.) [1, 2, 3, 4]. Between these two extremes there is a ”mesoscopic” description of networks that tries to explain its community structure. 
The general notion of community structure in complex networks was first pointed out in the physics literature by Girvan and Newman [5], and refers to the fact that nodes in many real networks appear to group in subgraphs in which the density of internal connections is larger than the connections with the rest of nodes in the network. The community structure has been empirically found in many real technological, biological and social networks [6, 7, 8, 9, 10] and its emergence seems to be at the heart of the network formation process [11]. The existing methods intended to devise the community structure in complex networks have been recently reviewed in [10]. All these methods require a definition of community that imposes the limit up to which a group should be considered a community. However, the concept of community itself is qualitative: nodes must be more connected within its community than with the rest of the network, and its quantification is still a subject of debate. Some quantitative definitions that came from sociology have been used in recent studies [12], but in general, the physics community has widely accepted a recent measure for the community structure based on the concept of modularity Q introduced by Newman and Girvan [13]:" ] }
1507.04308
2159201125
Modularity is widely used to effectively measure the strength of the community structure found by community detection algorithms. However, modularity maximization suffers from two opposite yet coexisting problems: in some cases, it tends to favor small communities over large ones while in others, large communities over small ones. The latter tendency is known in the literature as the resolution limit problem. To address them, we propose to modify modularity by subtracting from it the fraction of edges connecting nodes of different communities and by including community density into modularity. We refer to the modified metric as Modularity Density and we demonstrate that it indeed resolves both problems mentioned above. We describe the motivation for introducing this metric by using intuitively clear and simple examples. We also prove that this new metric solves the resolution limit problem. Finally, we discuss the results of applying this metric, modularity, and several other popular community quality metrics to two real dynamic networks. The results imply that Modularity Density is consistent with all the community quality measurements but not modularity, which suggests that Modularity Density is an improved measurement of the community quality compared to modularity.
In addition to the development of algorithms for community detection, several metrics for evaluating the quality of community structure have been introduced. The most popular and widely used is modularity @cite_31 @cite_8 . It is defined as the difference, relative to the total number of edges, between the actual number of edges inside a given community and the number expected in a randomized graph with the same number of nodes and the same degree sequence. Although initially defined for unweighted and undirected networks, the definition of modularity has been subsequently extended to capture community structure in weighted networks @cite_27 and then in directed networks @cite_4 .
{ "cite_N": [ "@cite_27", "@cite_31", "@cite_4", "@cite_8" ], "mid": [ "1983345514", "2095293504", "2063251739", "2151936673" ], "abstract": [ "The connections in many networks are not merely binary entities, either present or not, but have associated weights that record their strengths relative to one another. Recent studies of networks have, by and large, steered clear of such weighted networks, which are often perceived as being harder to analyze than their unweighted counterparts. Here we point out that weighted networks can in many cases be analyzed using a simple mapping from a weighted network to an unweighted multigraph, allowing us to apply standard techniques for unweighted graphs to weighted ones as well. We give a number of examples of the method, including an algorithm for detecting community structure in weighted networks and a simple proof of the maximum-flow--minimum-cut theorem.", "We propose and study a set of algorithms for discovering community structure in networks-natural divisions of network nodes into densely connected subgroups. Our algorithms all share two definitive features: first, they involve iterative removal of edges from the network to split it into communities, the edges removed being identified using any one of a number of possible \"betweenness\" measures, and second, these measures are, crucially, recalculated after each removal. We also propose a measure for the strength of the community structure found by our algorithms, which gives us an objective metric for choosing the number of communities into which a network should be divided. We demonstrate that our algorithms are highly effective at discovering community structure in both computer-generated and real-world network data, and show how they can be used to shed light on the sometimes dauntingly complex structure of networked systems.", "We consider the problem of finding communities or modules in directed networks. 
In the past, the most common approach to this problem has been to ignore edge direction and apply methods developed for community discovery in undirected networks, but this approach discards potentially useful information contained in the edge directions. Here we show how the widely used community finding technique of modularity maximization can be generalized in a principled fashion to incorporate information contained in edge directions. We describe an explicit algorithm based on spectral optimization of the modularity and show that it gives demonstrably better results than previous methods on a variety of test networks, both real and computer generated.", "Many networks of interest in the sciences, including social networks, computer networks, and metabolic and regulatory networks, are found to divide naturally into communities or modules. The problem of detecting and characterizing this community structure is one of the outstanding issues in the study of networked systems. One highly effective approach is the optimization of the quality function known as “modularity” over the possible divisions of a network. Here I show that the modularity can be expressed in terms of the eigenvectors of a characteristic matrix for the network, which I call the modularity matrix, and that this expression leads to a spectral algorithm for community detection that returns results of demonstrably higher quality than competing methods in shorter running times. I illustrate the method with applications to several published network data sets." ] }
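The modularity definition above can be evaluated directly. The following sketch (illustrative only: the toy graph and the helper name `modularity` are assumptions for this example, not taken from any of the cited works) computes the Newman-Girvan quantity Q = sum over communities c of [ e_c/m - (d_c/2m)^2 ] for two 4-cliques joined by a single bridge edge:

```python
# Illustrative sketch of Newman-Girvan modularity (toy graph and helper
# name are assumptions for this example, not from the cited papers).

def modularity(edges, communities):
    """Q = sum over communities c of [ e_c/m - (d_c/(2m))^2 ], where
    e_c = number of edges inside c, d_c = total degree of c,
    and m = total number of edges in the graph."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    q = 0.0
    for c in communities:
        e_c = sum(1 for u, v in edges if u in c and v in c)  # intra-community edges
        d_c = sum(degree[n] for n in c)                      # total community degree
        q += e_c / m - (d_c / (2 * m)) ** 2
    return q

# Two 4-cliques joined by one bridge edge: a clear two-community graph.
edges = [(i, j) for i in range(4) for j in range(i + 1, 4)]
edges += [(i, j) for i in range(4, 8) for j in range(i + 1, 8)]
edges.append((3, 4))  # bridge

print(round(modularity(edges, [{0, 1, 2, 3}, {4, 5, 6, 7}]), 4))  # → 0.4231
```

The natural two-clique split scores Q ≈ 0.423, while merging all nodes into one community gives Q = 0, matching the intuition that modularity rewards denser-than-expected internal connections.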
1507.03540
2952761846
Let @math be a conformal block, with @math consecutive channels @math , @math , in the conformal field theory @math , where @math is a @math minimal model, generated by chiral fields of spin @math , and labeled by two co-prime integers @math and @math , @math , while @math is a free boson conformal field theory. @math is the expectation value of vertex operators between an initial and a final state. Each vertex operator is labelled by a charge vector that lives in the weight lattice of the Lie algebra @math , spanned by weight vectors @math . We restrict our attention to conformal blocks with vertex operators whose charge vectors point along @math . The charge vectors that label the initial and final states can point in any direction. Following the @math AGT correspondence, and using Nekrasov's instanton partition functions without modification, to compute @math , leads to ill-defined expressions. We show that restricting the states that flow in the channels @math , @math , to states labeled by @math partitions that satisfy conditions that we call @math -Burge partitions, leads to well-defined expressions that we identify with @math . We check our identification by showing that a specific non-trivial conformal block that we compute, using the @math -Burge conditions satisfies the expected differential equation.
1. In @cite_48 , Santachiara and Tanzini apply AGT to compute conformal blocks of @math and @math vertex operators in Virasoro minimal models. The ill-defined expressions were circumvented using an analytic continuation scheme that was tested to low orders in the combinatorial expansion of the instanton partition functions.
{ "cite_N": [ "@cite_48" ], "mid": [ "2041203465" ], "abstract": [ "We identify Moore-Read wave functions, describing non-Abelian statistics in fractional quantum Hall systems, with the instanton partition of N = 2 superconformal quiver gauge theories at suitable values of masses and Ω-background parameters. This is obtained by extending to rational conformal field theories the SU(2) gauge quiver Liouville field theory duality recently found by Alday-Gaiotto-Tachikawa. A direct link between the Moore-Read Hall n-body wave functions and ℤ n -equivariant Donaldson polynomials is pointed out." ] }
1507.03540
2952761846
Let @math be a conformal block, with @math consecutive channels @math , @math , in the conformal field theory @math , where @math is a @math minimal model, generated by chiral fields of spin @math , and labeled by two co-prime integers @math and @math , @math , while @math is a free boson conformal field theory. @math is the expectation value of vertex operators between an initial and a final state. Each vertex operator is labelled by a charge vector that lives in the weight lattice of the Lie algebra @math , spanned by weight vectors @math . We restrict our attention to conformal blocks with vertex operators whose charge vectors point along @math . The charge vectors that label the initial and final states can point in any direction. Following the @math AGT correspondence, and using Nekrasov's instanton partition functions without modification, to compute @math , leads to ill-defined expressions. We show that restricting the states that flow in the channels @math , @math , to states labeled by @math partitions that satisfy conditions that we call @math -Burge partitions, leads to well-defined expressions that we identify with @math . We check our identification by showing that a specific non-trivial conformal block that we compute, using the @math -Burge conditions satisfies the expected differential equation.
If one can extend the analytic continuation scheme used in @cite_48 to the full instanton partition functions of the most general conformal blocks, and obtain the same result as in the present work, then this would amount to a proof that the proposed modified AGT expression for @math in equation ) is indeed the required minimal model conformal block up to a Heisenberg factor.
{ "cite_N": [ "@cite_48" ], "mid": [ "2041203465" ], "abstract": [ "We identify Moore-Read wave functions, describing non-Abelian statistics in fractional quantum Hall systems, with the instanton partition of N = 2 superconformal quiver gauge theories at suitable values of masses and Ω-background parameters. This is obtained by extending to rational conformal field theories the SU(2) gauge quiver Liouville field theory duality recently found by Alday-Gaiotto-Tachikawa. A direct link between the Moore-Read Hall n-body wave functions and ℤ n -equivariant Donaldson polynomials is pointed out." ] }
1507.03540
2952761846
Let @math be a conformal block, with @math consecutive channels @math , @math , in the conformal field theory @math , where @math is a @math minimal model, generated by chiral fields of spin @math , and labeled by two co-prime integers @math and @math , @math , while @math is a free boson conformal field theory. @math is the expectation value of vertex operators between an initial and a final state. Each vertex operator is labelled by a charge vector that lives in the weight lattice of the Lie algebra @math , spanned by weight vectors @math . We restrict our attention to conformal blocks with vertex operators whose charge vectors point along @math . The charge vectors that label the initial and final states can point in any direction. Following the @math AGT correspondence, and using Nekrasov's instanton partition functions without modification, to compute @math , leads to ill-defined expressions. We show that restricting the states that flow in the channels @math , @math , to states labeled by @math partitions that satisfy conditions that we call @math -Burge partitions, leads to well-defined expressions that we identify with @math . We check our identification by showing that a specific non-trivial conformal block that we compute, using the @math -Burge conditions satisfies the expected differential equation.
2. In @cite_16 , Estienne, Pasquier, Santachiara and Serban study conformal blocks of vertex operators such that @math , and @math , @math , and @math , @math , or @math , @math , @math , and @math , @math , in @math minimal models. From the null-state conditions of these vertex operators, Estienne et al. show that these specific conformal blocks are labeled by @math -partitions that satisfy specific conditions. While the notation used in @cite_16 is different from that in this work, one can check, in simple cases, that their @math -partitions are equivalent to those that appear in this work.
{ "cite_N": [ "@cite_16" ], "mid": [ "2102499404" ], "abstract": [ "We study the properties of the conformal blocks of the conformal eld theories with Virasoro or W-extended symmetry. When the conformal blocks contain only second-order degenerate elds, the conformal blocks obey second order dierential equations and they can be interpreted as ground-state wave functions of a trigonometric Calogero-Sutherland Hamiltonian with nontrivial braiding properties. A generalized duality property relates the two types of second order degenerate elds. By studying this duality we found that the excited states of the CalogeroSutherland Hamiltonian are characterized by two partitions, or in the case of WAk 1 theories by k partitions. By extending the conformal eld theories under consideration by a u(1) eld, we nd that we can put in correspondence the states in the Hilbert state of the extended CFT with the excited non-polynomial eigenstates of the Calogero-Sutherland Hamiltonian. When the action of the Calogero-Sutherland integrals of motion is translated on the Hilbert space, they become identical to the integrals of motion recently discovered by Alba, Fateev, Litvinov and Tarnopolsky in Liouville theory in the context of the AGT conjecture. Upon bosonisation, these integrals of motion can be expressed as a sum of two, or in generalk, bosonic Calogero-Sutherland Hamiltonian coupled by an interaction term with a triangular structure. For special values of the coupling constant, the conformal blocks can be expressed in terms of Jack polynomials with pairing properties, and they give electron wave functions for special Fractional Quantum Hall states." ] }
1507.03540
2952761846
Let @math be a conformal block, with @math consecutive channels @math , @math , in the conformal field theory @math , where @math is a @math minimal model, generated by chiral fields of spin @math , and labeled by two co-prime integers @math and @math , @math , while @math is a free boson conformal field theory. @math is the expectation value of vertex operators between an initial and a final state. Each vertex operator is labelled by a charge vector that lives in the weight lattice of the Lie algebra @math , spanned by weight vectors @math . We restrict our attention to conformal blocks with vertex operators whose charge vectors point along @math . The charge vectors that label the initial and final states can point in any direction. Following the @math AGT correspondence, and using Nekrasov's instanton partition functions without modification, to compute @math , leads to ill-defined expressions. We show that restricting the states that flow in the channels @math , @math , to states labeled by @math partitions that satisfy conditions that we call @math -Burge partitions, leads to well-defined expressions that we identify with @math . We check our identification by showing that a specific non-trivial conformal block that we compute, using the @math -Burge conditions satisfies the expected differential equation.
3. In @cite_2 , Fucito, Morales and Poghossian show that @math supersymmetric Yang-Mills gauge theories on the squashed @math , with rational deformation parameters, are dual to Virasoro minimal models. Ill-defined expressions are handled using a deformation scheme, akin to that used in @cite_48 , and tested to low orders in the combinatorial expansion of the instanton partition functions.
{ "cite_N": [ "@cite_48", "@cite_2" ], "mid": [ "2041203465", "1009009209" ], "abstract": [ "We identify Moore-Read wave functions, describing non-Abelian statistics in fractional quantum Hall systems, with the instanton partition of N = 2 superconformal quiver gauge theories at suitable values of masses and Ω-background parameters. This is obtained by extending to rational conformal field theories the SU(2) gauge quiver Liouville field theory duality recently found by Alday-Gaiotto-Tachikawa. A direct link between the Moore-Read Hall n-body wave functions and ℤ n -equivariant Donaldson polynomials is pointed out.", "Abstract After a very brief recollection of how my scientific collaboration with Ugo started, in this talk I will present some recent results obtained with localization: the deformed gauge theory partition function Z ( τ → | q ) and the expectation value of circular Wilson loops W on a squashed four-sphere will be computed. The partition function is deformed by turning on τ J tr Φ J interactions with Φ the N = 2 superfield. For the N = 4 theory SUSY gauge theory exact formulae for Z and W in terms of an underlying U ( N ) interacting matrix model can be derived thus replacing the free Gaussian model describing the undeformed N = 4 theory. These results will be then compared with those obtained with the dual CFT according to the AGT correspondence. The interactions introduced previously are in fact related to the insertions of commuting integrals of motion in the four-point CFT correlator and the chiral correlators are expressed as τ -derivatives of the gauge theory partition function on a finite Ω -background." ] }
1507.03540
2952761846
Let @math be a conformal block, with @math consecutive channels @math , @math , in the conformal field theory @math , where @math is a @math minimal model, generated by chiral fields of spin @math , and labeled by two co-prime integers @math and @math , @math , while @math is a free boson conformal field theory. @math is the expectation value of vertex operators between an initial and a final state. Each vertex operator is labelled by a charge vector that lives in the weight lattice of the Lie algebra @math , spanned by weight vectors @math . We restrict our attention to conformal blocks with vertex operators whose charge vectors point along @math . The charge vectors that label the initial and final states can point in any direction. Following the @math AGT correspondence, and using Nekrasov's instanton partition functions without modification, to compute @math , leads to ill-defined expressions. We show that restricting the states that flow in the channels @math , @math , to states labeled by @math partitions that satisfy conditions that we call @math -Burge partitions, leads to well-defined expressions that we identify with @math . We check our identification by showing that a specific non-trivial conformal block that we compute, using the @math -Burge conditions satisfies the expected differential equation.
5. In @cite_34 , Fukuda, Nakamura, Matsuo and Zhu studied the representation theory of @math , the central extension of the degenerate double affine Hecke algebra @cite_25 @cite_3 in the context of the minimal @math models. They found, among other results, that the states are labelled by @math -partitions that satisfy the @math -Burge conditions discussed in this work.
{ "cite_N": [ "@cite_34", "@cite_25", "@cite_3" ], "mid": [ "2136727539", "2964131193", "" ], "abstract": [ "Recently an orthogonal basis ofWN -algebra (AFLT basis) labeled by N-tuple Young diagrams was found in the context of 4D 2D duality. Recursion relations among the basis are summarized in the form of an algebra SH c which is universal for any N. We show that it has an S3 automorphism which is referred to as triality. We study the level- rank duality between minimal models, which is a special example of the automorphism. It is shown that the nonvanishing states in both systems are described by N or M Young diagrams with the rows of boxes appropriately shued. The reshuing of rows implies there exists partial ordering of the set which labels them. For the simplest example, one can compute the partition functions for the partially ordered set (poset) explicitly, which reproduces the Rogers-Ramanujan identities. We also study the description of minimal models by SH c . Simple analysis reproduces some known properties of minimal models, the structure of singular vectors and the N-Burge condition in the Hilbert space.", "We construct a representation of the affine W-algebra of ( g l _ r ) on the equivariant homology space of the moduli space of U r -instantons, and we identify the corresponding module. As a corollary, we give a proof of a version of the AGT conjecture concerning pure N=2 gauge theory for the group SU(r).", "" ] }
1507.03471
1441091828
A dialog state tracker is an important component in modern spoken dialog systems. We present an incremental dialog state tracker, based on LSTM networks. It directly uses automatic speech recognition hypotheses to track the state. We also present the key non-standard aspects of the model that bring its performance close to the state-of-the-art and experimentally analyze their contribution: including the ASR confidence scores, abstracting scarcely represented values, including transcriptions in the training data, and model averaging.
The only other incremental dialog state tracker known to us is the one used in @cite_13 . In that paper, the authors describe an incremental dialog system for number dictation as a specific instance of their incremental dialog processing framework. To track the dialog state, they use a discourse modeling system which keeps track of confidence scores from semantic parses of the input; these are produced by a grammar-based semantic interpreter with a hand-coded context-free grammar. Unlike our system, theirs requires a handcrafted grammar and an explicit semantic representation of the input. Using RNNs for dialog state tracking has been proposed before @cite_16 @cite_20 . The dialog state tracker in @cite_16 uses an RNN, with a very elaborate architecture, to track the dialog state turn by turn. Similarly to our model, their model does not need an explicit semantic representation of the input. They also use a similar abstraction of low-occurring values (they call the technique "tagged n-gram features"), which should result in better generalization on rare but well-recognized values. We use only the 1-best ASR hypothesis and achieve near state-of-the-art results, while the other tracking models from the literature @cite_15 @cite_16 @cite_9 @cite_1 typically use the whole ASR SLU n-best list as input.
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_15", "@cite_16", "@cite_13", "@cite_20" ], "mid": [ "2142943184", "2586849665", "2251355666", "", "2045804781", "2108806737" ], "abstract": [ "For robust spoken dialog management, various dialog state tracking methods have been proposed. Although discriminative models are gaining popularity due to their superior performance, generative models based on the Partially Observable Markov Decision Process model still remain attractive since they provide an integrated framework for dialog state tracking and dialog policy optimization. Although a straightforward way to fit a generative model is to independently train the component probability models, we present a gradient descent algorithm that simultaneously train all the component models. We show that the resulting tracker performs competitively with other top-performing trackers that participated in DSTC2.", "We present our work in Dialog State Tracking Challenge 5, the main task of which is to track dialog state on human-human conversations cross language. Firstly a probabilistic enhanced framework is used to represent sub-dialog, which consists of three parts, the input model for extracting features, the enhanced model for updating dialog state and the output model to give the tracking frame. Meanwhile, parallel language systems are proposed to overcome inaccuracy caused by machine translation for cross language testing. We also introduce a new iterative alignment method extended from our work in DSTC4. Furthermore, a slot-based score averaging method is introduced to build an ensemble by combining different trackers. Results of our DSTC5 system show that our method significantly improves tracking performance compared with baseline method.", "In spoken dialog systems, statistical state tracking aims to improve robustness to speech recognition errors by tracking a posterior distribution over hidden dialog states. This paper introduces two novel methods for this task. First, we explain how state tracking is structurally similar to web-style ranking, enabling mature, powerful ranking algorithms to be applied. Second, we show how to use multiple spoken language understanding engines (SLUs) in state tracking — multiple SLUs can expand the set of dialog states being tracked, and give more information about each, thereby increasing both recall and precision of state tracking. We evaluate on the second Dialog State Tracking Challenge; together these two techniques yield highest accuracy in 2 of 3 tasks, including the most difficult and general task.", "", "This paper describes a fully incremental dialogue system that can engage in dialogues in a simple domain, number dictation. Because it uses incremental speech recognition and prosodic analysis, the system can give rapid feedback as the user is speaking, with a very short latency of around 200ms. Because it uses incremental speech synthesis and self-monitoring, the system can react to feedback from the user as the system is speaking. A comparative evaluation shows that naive users preferred this system over a non-incremental version, and that it was perceived as more human-like.", "While belief tracking is known to be important in allowing statistical dialog systems to manage dialogs in a highly robust manner, until recently little attention has been given to analysing the behaviour of belief tracking techniques. The Dialogue State Tracking Challenge has allowed for such an analysis, comparing multiple belief tracking approaches on a shared task. Recent success in using deep learning for speech research motivates the Deep Neural Network approach presented here. The model parameters can be learnt by directly maximising the likelihood of the training data. The paper explores some aspects of the training, and the resulting tracker is found to perform competitively, particularly on a corpus of dialogs from a system not found in the training." ] }
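The abstraction of scarcely represented values discussed in this row (akin to the "tagged n-gram features" of @cite_16 ) can be sketched as a simple delexicalization pass; the ontology, slot names, and placeholder syntax below are illustrative assumptions, not the papers' exact feature extraction.

```python
# Sketch of abstracting (delexicalizing) rare slot values before feeding an
# utterance to a tracker: every known value is replaced by a slot placeholder,
# so rare but well-recognized values share statistics with frequent ones.
# The ontology and placeholder names are hypothetical examples.

def delexicalize(utterance, ontology):
    """Replace known slot values with slot-name placeholders.

    ontology maps slot names to value sets seen in training, e.g.
    {"food": {"chinese", "jamaican"}, "area": {"north", "south"}}.
    """
    out = []
    for tok in utterance.lower().split():
        for slot, values in ontology.items():
            if tok in values:
                out.append("<" + slot + ">")  # rare values collapse to one symbol
                break
        else:
            out.append(tok)
    return " ".join(out)

ontology = {"food": {"chinese", "jamaican"}, "area": {"north", "south"}}
print(delexicalize("I want jamaican food in the north", ontology))
```

A turn-level RNN tracker would then consume the abstracted tokens and map each placeholder back to its surface value when emitting the belief state.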
1507.03323
2618737689
This paper proposes and investigates a Boolean gossip model as a simplified but non-trivial probabilistic Boolean network. With positive node interactions, in view of standard theories from Markov chains, we prove that the node states asymptotically converge to an agreement at a binary random variable, whose distribution is characterized for large-scale networks by mean-field approximation. Using combinatorial analysis, we also successfully count the number of communication classes of the positive Boolean network explicitly in terms of the topology of the underlying interaction graph, where remarkably minor variation in local structures can drastically change the number of network communication classes. With general Boolean interaction rules, emergence of absorbing network Boolean dynamics is shown to be determined by the network structure with necessary and sufficient conditions established regarding when the Boolean gossip process defines absorbing Markov chains. Particularly, it is shown that for the majority of the Boolean interaction rules, except for nine out of the total @math possible nonempty sets of binary Boolean functions, whether the induced chain is absorbing has nothing to do with the topology of the underlying interaction graph, as long as connectivity is assumed. These results illustrate possibilities of relating dynamical properties of Boolean networks to graphical properties of the underlying interactions.
Gene Regulation . The evolution of gene expressions can be naturally described as a dynamical system where the two quantized levels, ON and OFF, are represented by logic states 1 and 0, respectively. Each gene normally would only interact with a small number of neighbouring genes (two or three in Kauffman's original proposal @cite_11 ). Therefore, the proposed Boolean gossip network model at least serves as a good approximation for gene regulatory networks, where a pair of genes interacts at any given time and the Boolean function rules @math describe random outcomes of the interactions.
{ "cite_N": [ "@cite_11" ], "mid": [ "1971224531" ], "abstract": [ "Abstract Proto-organisms probably were randomly aggregated nets of chemical reactions. The hypothesis that contemporary organisms are also randomly constructed molecular automata is examined by modeling the gene as a binary (on-off) device and studying the behavior of large, randomly constructed nets of these binary “genes”. The results suggest that, if each “gene” is directly affected by two or three other “genes”, then such random nets: behave with great order and stability; undergo behavior cycles whose length predicts cell replication time as a function of the number of genes per cell; possess different modes of behavior whose number per net predicts roughly the number of cell types in an organism as a function of its number of genes; and under the stimulus of noise are capable of differentiating directly from any mode of behavior to at most a few other modes of behavior. Cellular differentation is modeled as a Markov chain among the modes of behavior of a genetic net. The possibility of a general theory of metabolic behavior is suggested." ] }
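Kauffman's random Boolean networks referenced above can be sketched in a few lines; the network size, the in-degree K = 2, and the synchronous update below are illustrative assumptions rather than the gossip model of the paper.

```python
import random

# Minimal sketch of a Kauffman-style random Boolean network: each gene reads
# K = 2 randomly chosen genes through a random Boolean truth table, and all
# genes update synchronously. Parameters are illustrative assumptions.

random.seed(0)
N, K = 8, 2
inputs = [random.sample(range(N), K) for _ in range(N)]                # wiring
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    # each gene looks up its truth table at the row indexed by its two inputs
    return tuple(tables[i][2 * state[inputs[i][0]] + state[inputs[i][1]]]
                 for i in range(N))

# Iterate until a state repeats: the revisited segment is an attractor cycle.
state = tuple(random.randint(0, 1) for _ in range(N))
seen = {}
t = 0
while state not in seen:
    seen[state] = t
    state = step(state)
    t += 1
print("attractor cycle length:", t - seen[state])
```

Since the state space is finite, the trajectory must revisit a state, so the loop always terminates; Kauffman's observation was that for K = 2 or 3 such cycles tend to be short and stable.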
1507.03323
2618737689
This paper proposes and investigates a Boolean gossip model as a simplified but non-trivial probabilistic Boolean network. With positive node interactions, in view of standard theories from Markov chains, we prove that the node states asymptotically converge to an agreement at a binary random variable, whose distribution is characterized for large-scale networks by mean-field approximation. Using combinatorial analysis, we also successfully count the number of communication classes of the positive Boolean network explicitly in terms of the topology of the underlying interaction graph, where remarkably minor variation in local structures can drastically change the number of network communication classes. With general Boolean interaction rules, emergence of absorbing network Boolean dynamics is shown to be determined by the network structure with necessary and sufficient conditions established regarding when the Boolean gossip process defines absorbing Markov chains. Particularly, it is shown that for the majority of the Boolean interaction rules, except for nine out of the total @math possible nonempty sets of binary Boolean functions, whether the induced chain is absorbing has nothing to do with the topology of the underlying interaction graph, as long as connectivity is assumed. These results illustrate possibilities of relating dynamical properties of Boolean networks to graphical properties of the underlying interactions.
Virus Spreading . Virus spreading across a computer network can be modeled as a Boolean network, where @math and @math represent infected and healthy computers, respectively @cite_23 . The proposed Boolean gossip process captures richer outcomes of a pairwise interaction: the two computers, whatever their prior states, both end up infected ( @math ); the two computers both end up cured ( @math ), etc.
{ "cite_N": [ "@cite_23" ], "mid": [ "2118820208" ], "abstract": [ "The influence of the network characteristics on the virus spread is analyzed in a new-the N -intertwined Markov chain-model, whose only approximation lies in the application of mean field theory. The mean field approximation is quantified in detail. The N -intertwined model has been compared with the exact 2N-state Markov model and with previously proposed ldquohomogeneousrdquo or ldquolocalrdquo models. The sharp epidemic threshold tauc , which is a consequence of mean field theory, is rigorously shown to be equal to tauc = 1 (lambdamax(A)) , where lambdamax(A) is the largest eigenvalue-the spectral radius-of the adjacency matrix A . A continued fraction expansion of the steady-state infection probability at node j is presented as well as several upper bounds." ] }
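One step of the pairwise Boolean gossip interaction described above might be simulated as follows; the ring graph, the particular rule set, and the convention that both endpoints adopt the rule's output are assumptions for illustration, not the paper's exact specification.

```python
import random

# Sketch of Boolean gossip: at each step a random edge is selected, a Boolean
# rule is drawn from a prescribed set, and both endpoints adopt its output.
# Graph, rules, and update convention are illustrative assumptions.

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]       # a 4-node ring
rules = [lambda a, b: a | b,                   # "positive" spread (OR)
         lambda a, b: 1,                       # both end up infected
         lambda a, b: 0]                       # both end up cured

def gossip_step(state, rng):
    i, j = rng.choice(edges)
    f = rng.choice(rules)
    v = f(state[i], state[j])
    state = list(state)
    state[i] = state[j] = v                    # both endpoints take the outcome
    return tuple(state)

rng = random.Random(1)
state = (1, 0, 0, 0)                           # one initially infected node
for _ in range(50):
    state = gossip_step(state, rng)
print(state)
```

Restricting `rules` to the OR rule alone gives the positive-interaction case, for which the abstract's agreement result says all nodes eventually hold a common binary value.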
1507.03323
2618737689
This paper proposes and investigates a Boolean gossip model as a simplified but non-trivial probabilistic Boolean network. With positive node interactions, in view of standard theories from Markov chains, we prove that the node states asymptotically converge to an agreement at a binary random variable, whose distribution is characterized for large-scale networks by mean-field approximation. Using combinatorial analysis, we also successfully count the number of communication classes of the positive Boolean network explicitly in terms of the topology of the underlying interaction graph, where remarkably minor variation in local structures can drastically change the number of network communication classes. With general Boolean interaction rules, emergence of absorbing network Boolean dynamics is shown to be determined by the network structure with necessary and sufficient conditions established regarding when the Boolean gossip process defines absorbing Markov chains. Particularly, it is shown that for the majority of the Boolean interaction rules, except for nine out of the total @math possible nonempty sets of binary Boolean functions, whether the induced chain is absorbing has nothing to do with the topology of the underlying interaction graph, as long as connectivity is assumed. These results illustrate possibilities of relating dynamical properties of Boolean networks to graphical properties of the underlying interactions.
The graphical nature of the model makes it possible to go beyond the existing work @cite_10 @cite_9 @cite_3 and obtain more direct and explicit results. Additionally, majority Boolean dynamics @cite_19 and asynchronous broadcast gossiping @cite_4 are related to the present model in that they describe Boolean interactions between one node and all its neighbors at a given time instant, in contrast to the gossip interaction rule, which involves one node and one of its selected neighbors.
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_3", "@cite_19", "@cite_10" ], "mid": [ "2000985696", "1985331876", "2167637795", "1971118040", "1975766849" ], "abstract": [ "In this paper, we study the impact of edge weights on distances in sparse random graphs. We interpret these weights as delays and take them as independent and identically distributed exponential random variables. We analyze the weighted flooding time defined as the minimum time needed to reach all nodes from one uniformly chosen node and the weighted diameter corresponding to the largest distance between any pair of vertices. Under some standard regularity conditions on the degree sequence of the random graph, we show that these quantities grow as the logarithm of @math when the size of the graph @math tends to infinity. We also derive the exact value for the prefactor. These results allow us to analyze an asynchronous randomized broadcast algorithm for random regular graphs. Our results show that the asynchronous version of the algorithm performs better than its synchronized version: in the large size limit of the graph, it will reach the whole network faster even if the local dynamics are similar on average.", "Context-sensitive probabilistic Boolean networks (PBNs) have been recently introduced as a paradigm for modeling genetic regulatory networks and have served as the main model for the application of intervention methods, including optimal control strategies, to favorably effect system dynamics. Since it is believed that the steady-state behavior of a context-sensitive PBN is indicative of the phenotype, it is important to study the alternation in the steady-state probability distribution due to any variations in the formulations of the context-sensitive PBNs. Furthermore, the huge computational complexity of the context-sensitive PBN model necessitates generation of size-reduction techniques and approximate methods for calculation of the steady-state probability distribution of context-sensitive PBNs. The goal of this paper is threefold: i) to study the effects of the various definitions of context-sensitive PBNs on the steady-state probability distributions and the downstream control policy design; ii) to propose a reduction technique that maintains the steady-state probability distribution; and iii) to provide an approximation method for calculating the steady-state probability distribution of a context-sensitive PBN.", "Fine-scale models based on stochastic master equations can provide the most detailed description of the dynamics of gene expression and imbed, in principle, all the information about the biochemical reactions involved in gene interactions. However, there is limited time-series experimental data available for inference of such fine-scale models. Furthermore, the computational complexity involved in the design of optimal intervention strategies to favorably effect system dynamics for such detailed models is enormous. Thus, there is a need to design mappings from fine-scale models to coarse-scale models while maintaining sufficient structure for the problem at hand and to study the effect of intervention policies designed using coarse-scale models when applied to fine-scale models. In this paper, we propose a mapping from a fine-scale model represented by a stochastic master equation to a coarse-scale model represented by a probabilistic Boolean network that maintains the collapsed steady state probability distribution of the detailed model. We also derive bounds on the performance of the intervention strategy designed using the coarse-scale model when applied to the fine-scale model.", "A voter sits on each vertex of an infinite tree of degree @math , and has to decide between two alternative opinions. At each time step, each voter switches to the opinion of the majority of her neighbors. We analyze this majority process when opinions are initialized to independent and identically distributed random variables. In particular, we bound the threshold value of the initial bias such that the process converges to consensus. In order to prove an upper bound, we characterize the process of a single node in the large @math -limit. This approach is inspired by the theory of mean field spin-glass and can potentially be generalized to a wider class of models. We also derive a lower bound that is nontrivial for small, odd values of @math .", "Boolean networks form a class of disordered dynamical systems that have been studied in physics owing to their relationships with disordered systems in statistical mechanics and in biology as models of genetic regulatory networks. Recently they have been generalized to probabilistic Boolean networks (PBNs) to facilitate the incorporation of uncertainty in the model and to represent cellular context changes in biological modeling. In essence, a PBN is composed of a family of Boolean networks between which the PBN switches in a stochastic fashion. In whatever framework Boolean networks are studied, their most important attribute is their attractors. Left to run, a Boolean network will settle into one of a collection of state cycles called attractors. The set of states from which the network will transition into a specific attractor forms the basin of the attractor. The attractors represent the essential long-run behavior of the network. In a classical Boolean network, the network remains in an attractor once there; in a Boolean network with perturbation, the states form an ergodic Markov chain and the network can escape an attractor, but it will return to it or a different attractor unless interrupted by another perturbation; in a probabilistic Boolean network, so long as the PBN remains in one of its constituent Boolean networks it will behave as a Boolean network with perturbation, but upon a switch it will move to an attractor of the new constituent Boolean network. Given the ergodic nature of the model, the steady-state probabilities of the attractors are critical to network understanding. Heretofore they have been found by simulation; in this paper we derive analytic expressions for these probabilities, first for Boolean networks with perturbation and then for PBNs." ] }
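For contrast with the pairwise gossip rule, the "one node versus all its neighbours" pattern of majority Boolean dynamics mentioned in this row can be sketched as a synchronous step; the graph and the tie-breaking rule (keep the current value) are illustrative assumptions.

```python
# Sketch of one synchronous majority-dynamics step: every node simultaneously
# adopts the majority opinion among its neighbours, keeping its own value on a
# tie. The example graph and tie rule are illustrative assumptions.

def majority_step(state, adj):
    new = list(state)
    for v, nbrs in adj.items():
        ones = sum(state[u] for u in nbrs)
        if 2 * ones > len(nbrs):
            new[v] = 1           # strict majority of neighbours hold 1
        elif 2 * ones < len(nbrs):
            new[v] = 0           # strict majority of neighbours hold 0
        # on a tie, v keeps its current value
    return tuple(new)

ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # 4-cycle
print(majority_step((1, 1, 1, 0), ring))
```

Unlike the gossip rule, every node here consults all of its neighbours in the same slot, which is exactly the distinction drawn in the paragraph above.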
1507.02874
2114575345
The focus of this paper is on the public communication required for generating a maximal-rate secret key (SK) within the multiterminal source model of Csiszar and Narayan. Building on the prior work of Tyagi for the two-terminal scenario, we derive a lower bound on the communication complexity, @math , defined to be the minimum rate of public communication needed to generate a maximal-rate SK. It is well known that the minimum rate of communication for omniscience, denoted by @math , is an upper bound on @math . For the class of pairwise independent network (PIN) models defined on uniform hypergraphs, we show that a certain Type @math condition, which is verifiable in polynomial time, guarantees that our lower bound on @math meets the @math upper bound. Thus, the PIN models satisfying our condition are @math -maximal, indicating that the upper bound @math holds with equality. This allows us to explicitly evaluate @math for such PIN models. We also give several examples of PIN models that satisfy our Type @math condition. Finally, we prove that for an arbitrary multiterminal source model, a stricter version of our Type @math condition implies that communication from all terminals (omnivocality) is needed for establishing an SK of maximum rate. For three-terminal source models, the converse is also true: omnivocality is needed for generating a maximal-rate SK only if the strict Type @math condition is satisfied. However, for the source models with four or more terminals, counterexamples exist showing that the converse does not hold in general.
The authors of @cite_14 study public communication for SK generation in another variant of the multiterminal source model. They consider @math terminals observing correlated i.i.d. sources. One terminal acts as the communicator, sending information to each of the remaining @math terminals via @math different noiseless channels. A communication rate-key rate tradeoff region is identified for this model. However, the model is of somewhat limited interest to us because each of the @math links has an individual eavesdropper, but co-operation is not allowed among them. Secrecy is no longer guaranteed if the eavesdroppers co-operate. Therefore, the problem setup is more an amalgam of two-terminal problems than a truly multiterminal setup.
{ "cite_N": [ "@cite_14" ], "mid": [ "2602685089" ], "abstract": [ "A new model of multi-party secret key agreement is proposed, in which one terminal called the communicator can transmit public messages to other terminals before all terminals agree on a secret key. A single-letter characterization of the achievable region is derived in the stationary memoryless case. The new model generalizes some other (old and new) models of key agreement. In particular, key generation with an omniscient helper is the special case where the communicator knows all sources, for which we derive a zero-rate one-shot converse for the secret key per bit of communication." ] }
1507.02874
2114575345
The focus of this paper is on the public communication required for generating a maximal-rate secret key (SK) within the multiterminal source model of Csiszar and Narayan. Building on the prior work of Tyagi for the two-terminal scenario, we derive a lower bound on the communication complexity, @math , defined to be the minimum rate of public communication needed to generate a maximal-rate SK. It is well known that the minimum rate of communication for omniscience, denoted by @math , is an upper bound on @math . For the class of pairwise independent network (PIN) models defined on uniform hypergraphs, we show that a certain Type @math condition, which is verifiable in polynomial time, guarantees that our lower bound on @math meets the @math upper bound. Thus, the PIN models satisfying our condition are @math -maximal, indicating that the upper bound @math holds with equality. This allows us to explicitly evaluate @math for such PIN models. We also give several examples of PIN models that satisfy our Type @math condition. Finally, we prove that for an arbitrary multiterminal source model, a stricter version of our Type @math condition implies that communication from all terminals (omnivocality) is needed for establishing an SK of maximum rate. For three-terminal source models, the converse is also true: omnivocality is needed for generating a maximal-rate SK only if the strict Type @math condition is satisfied. However, for the source models with four or more terminals, counterexamples exist showing that the converse does not hold in general.
Turning our attention to the topic of omnivocality originally studied in our paper @cite_16 , the authors of @cite_25 have recently obtained some new results. In particular, their Theorem 5 gives a sufficient condition for when a particular terminal must communicate in any SK-capacity-achieving protocol. Our original sufficient condition for omnivocality (Theorem 4 of @cite_16 , restated as a theorem in this paper) can now be obtained as a consequence of their Theorem 5. In addition, Theorem 4 of @cite_25 provides a sufficient condition that guarantees the existence of an SK-capacity-achieving protocol within which a given terminal can remain silent.
{ "cite_N": [ "@cite_16", "@cite_25" ], "mid": [ "2026910567", "1666661968" ], "abstract": [ "In this paper, we address the problem of characterizing the instances of the multiterminal source model of Csiszar and Narayan in which communication from all terminals is needed for establishing a secret key of maximum rate. We give an information-theoretic sufficient condition for identifying such instances. We believe that our sufficient condition is in fact an exact characterization, but we are only able to prove this in the case of the three-terminal source model.", "The problem of when all terminals must talk to achieve the secrecy capacity in the multiterminal source model is investigated. Two conditions under which respectively a given terminal does not need to and must talk to achieve the secrecy capacity are characterized. The cases when all terminals must talk to achieve secrecy capacity are shown to be many more than those conjectured in [1] for systems with four or more terminals. There is a gap between the above two conditions, in which whether a given terminal need to talk is not clear. A conjecture is further made in order to narrow down the gap." ] }
1507.02721
2229908240
We consider networks of processes which interact with beeps. Various beeping models are used. The basic one, defined by Cornejo and Kuhn [CK10], assumes that a process can choose either to beep or to listen; if it listens it can distinguish between silence or the presence of at least one beep. The aim of this paper is the study of the resolution of paradigms such as collision detection, computation of the degree of a vertex, colouring, or 2-hop-colouring in the framework of beeping models. For each of these problems we present Las Vegas or Monte Carlo algorithms and we analyse their complexities expressed in terms of the number of slots. We present also efficient randomised emulations of more powerful beeping models on the basic one. We illustrate emulation procedures with an efficient degree computation algorithm in the basic beeping model; this algorithm was given initially in a more powerful model.
From considerations concerning the development of certain cells, @cite_17 studied the MIS problem in the discrete beeping model @math as presented in @cite_6 . They consider, in particular, the wake-on-beep model (sleeping nodes wake up upon receiving a beep) with sender-side collision detection @math , and give an @math -round MIS algorithm. Following this work, @cite_1 presents, in the model @math , a randomised algorithm with a feedback mechanism whose expected time to compute an MIS is @math . A vertex @math is a candidate for joining the independent set (and beeps) with a certain probability (initially @math ); this value is decreased by some fixed factor if at least one neighbour also wishes to join the independent set, and increased by the same factor (up to a maximum of @math ) if neither @math nor any neighbour of @math is a candidate.
{ "cite_N": [ "@cite_1", "@cite_6", "@cite_17" ], "mid": [ "2158711252", "2951682038", "2568091325" ], "abstract": [ "Maximal Independent Set selection is a fundamental problem in distributed computing. A novel probabilistic algorithm for this problem has recently been proposed by , inspired by the study of the way that developing cells in the fly become specialised. The algorithm they propose is simple and robust, but not as efficient as previous approaches: the expected time complexity is O(log2 n). Here we first show that the approach of cannot achieve better efficiency than this across all networks, no matter how the global probability values are chosen. However, we then propose a new algorithm that incorporates another important feature of the biological system: the probability value at each node is adapted using local feedback from neighbouring nodes. Our new algorithm retains all the advantages of simplicity and robustness, but also achieves the optimal efficiency of O(log n) expected time. The new algorithm also has only a constant message complexity per node.", "We present the communication model, which assumes nodes have minimal knowledge about their environment and severely limited communication capabilities. Specifically, nodes have no information regarding the local or global structure of the network, don't have access to synchronized clocks and are woken up by an adversary. Moreover, instead on communicating through messages they rely solely on carrier sensing to exchange information. We study the problem of , a variant of vertex coloring specially suited for the studied beeping model. Given a set of resources, the goal of interval coloring is to assign every node a large contiguous fraction of the resources, such that neighboring nodes share no resources. To highlight the importance of the discreteness of the model, we contrast it against a continuous variant described in [17]. 
We present an O(1)-approximation algorithm that terminates in O(log n) time w.h.p., and we prove a lower bound of Ω(log n) on the time required to solve interval coloring for this model against randomized algorithms. This lower bound implies that our algorithm is asymptotically optimal for constant degree graphs.", "We consider the problem of computing a maximal independent set (MIS) in an extremely harsh broadcast model that relies only on carrier sensing. The model consists of an anonymous broadcast network in which nodes have no knowledge about the topology of the network or even an upper bound on its size. Furthermore, it is assumed that an adversary chooses at which time slot each node wakes up. At each time slot a node can either beep, that is, emit a signal, or be silent. At a particular time slot, beeping nodes receive no feedback, while silent nodes can only differentiate between none of its neighbors beeping, or at least one of its neighbors beeping. We start by proving a lower bound that shows that in this model, it is not possible to locally converge to an MIS in sub-polynomial time. We then study four different relaxations of the model which allow us to circumvent the lower bound and find an MIS in polylogarithmic time. First, we show that if a polynomial upper bound on the network size is known, it is possible to find an MIS in O(log^3 n) time. Second, if we assume sleeping nodes are awoken by neighboring beeps, then we can also find an MIS in O(log^3 n) time. Third, if in addition to this wakeup assumption we allow sender-side collision detection, that is, beeping nodes can distinguish whether at least one neighboring node is beeping concurrently or not, we can find an MIS in O(log^2 n) time. Finally, if instead we endow nodes with synchronous clocks, it is also possible to find an MIS in O(log^2 n) time." ] }
1507.02721
2229908240
We consider networks of processes which interact with beeps. Various beeping models are used. The basic one, defined by Cornejo and Kuhn [CK10], assumes that a process can choose either to beep or to listen; if it listens it can distinguish between silence or the presence of at least one beep. The aim of this paper is the study of the resolution of paradigms such as collision detection, computation of the degree of a vertex, colouring, or 2-hop-colouring in the framework of beeping models. For each of these problems we present Las Vegas or Monte Carlo algorithms and we analyse their complexities expressed in terms of the number of slots. We present also efficient randomised emulations of more powerful beeping models on the basic one. We illustrate emulation procedures with an efficient degree computation algorithm in the basic beeping model; this algorithm was given initially in a more powerful model.
More generally, Navlakha and Bar-Joseph present in @cite_14 a general survey of the similarities and differences between distributed computation in biological and computational systems and discuss, in this framework, the importance of the beeping model.
{ "cite_N": [ "@cite_14" ], "mid": [ "2061084197" ], "abstract": [ "Exploring the similarities and differences between distributed computations in biological and computational systems." ] }
1507.02721
2229908240
We consider networks of processes which interact with beeps. Various beeping models are used. The basic one, defined by Cornejo and Kuhn [CK10], assumes that a process can choose either to beep or to listen; if it listens it can distinguish between silence or the presence of at least one beep. The aim of this paper is the study of the resolution of paradigms such as collision detection, computation of the degree of a vertex, colouring, or 2-hop-colouring in the framework of beeping models. For each of these problems we present Las Vegas or Monte Carlo algorithms and we analyse their complexities expressed in terms of the number of slots. We present also efficient randomised emulations of more powerful beeping models on the basic one. We illustrate emulation procedures with an efficient degree computation algorithm in the basic beeping model; this algorithm was given initially in a more powerful model.
In the point-to-point message passing model, vertex colouring is mainly studied under two assumptions: (1) vertices have unique identifiers or, more generally, an initial colouring; (2) every vertex has the same initial state and initially only knows its own edges. If vertices have an initial colour, Kuhn and Wattenhofer @cite_19 have obtained efficient time complexity algorithms to obtain @math colours in the case where every vertex can only send its own current colour to all its neighbours. In @cite_33 , Johansson analyses a simple randomised distributed vertex colouring algorithm for anonymous graphs. He proves that this algorithm runs in @math rounds w.h.p. on graphs of size @math . The size of each message is @math ; thus the bit complexity per channel of this algorithm is @math . @cite_36 presents a Las Vegas distributed algorithm with optimal bit and time complexity that colours any anonymous graph in @math bit rounds w.h.p.
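The simple randomised colouring algorithm analysed by Johansson can be sketched as a centralised, synchronous simulation (an illustration under assumed names and round structure, not the paper's own pseudocode): each uncoloured vertex proposes a random colour from a palette of Δ+1 colours minus the colours already fixed by neighbours, and keeps it if no neighbour proposed or holds the same colour.

```python
import random

def randomized_coloring(adj):
    """Synchronous trial colouring: every uncoloured vertex proposes a
    random admissible colour and keeps it only when no neighbour
    proposed (or already holds) the same colour."""
    delta = max(len(adj[v]) for v in adj)       # maximum degree
    colour = {v: None for v in adj}
    while any(colour[v] is None for v in adj):
        proposal = {}
        for v in adj:
            if colour[v] is None:
                taken = {colour[u] for u in adj[v] if colour[u] is not None}
                palette = [c for c in range(delta + 1) if c not in taken]
                proposal[v] = random.choice(palette)  # palette is never empty
        for v, c in proposal.items():
            # keep c only if no neighbour competes for or owns colour c
            if all(proposal.get(u) != c and colour[u] != c for u in adj[v]):
                colour[v] = c
    return colour
```

Each vertex has at most Δ neighbours, so its palette of Δ+1 colours is never exhausted; every round succeeds for a vertex with constant probability, matching the O(log n)-rounds-w.h.p. behaviour discussed above.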
{ "cite_N": [ "@cite_36", "@cite_19", "@cite_33" ], "mid": [ "2062111496", "2109368894", "2056316084" ], "abstract": [ "We present and analyse a very simple randomised distributed vertex colouring algorithm for arbitrary graphs of size n that halts in time O(logn) with probability 1-o(n^-^1). Each message containing 1 bit, its bit complexity per channel is O(logn). From this algorithm, we deduce and analyse a randomised distributed vertex colouring algorithm for arbitrary graphs of maximum degree @D and size n that uses at most @D+1 colours and halts in time O(logn) with probability 1-o(n^-^1). We also obtain a partition algorithm for arbitrary graphs of size n that builds a spanning forest in time O(logn) with probability 1-o(n^-^1). We study some parameters such as the number, the size and the radius of trees of the spanning forest.", "Coloring the nodes of a graph with a small number of colors is one of the most fundamental problems in theoretical computer science. In this paper, we study graph coloring in a distributed setting. Processors of a distributed system are nodes of an undirected graph G. There is an edge between two nodes whenever the corresponding processors can directly communicate with each other. We assume that distributed coloring algorithms start with an initial m-coloring of G. In the paper, we prove new strong lower bounds for two special kinds of coloring algorithms. For algorithms which run for a single communication round---i.e., every node of the network can only send its initial color to all its neighbors---, we show that the number of colors of the computed coloring has to be at least Ω(Δ2 log2 Δ+ log log m). If such one-round algorithms are iteratively applied to reduce the number of colors step-by-step, we prove a time lower bound of Ω(Δ log2 Δ+ log*m) to obtain an O(Δ)-coloring. 
The best previous lower bounds for the two types of algorithms are Ω(log log m) and Ω(log*m), respectively.", "Abstract A very natural randomized algorithm for distributed vertex coloring of graphs is analyzed. Under the assumption that the random choices of processors are mutually independent, the execution time will be O(log n ) rounds almost always. A small modification of the algorithm is also proposed." ] }
1507.02721
2229908240
We consider networks of processes which interact with beeps. Various beeping models are used. The basic one, defined by Cornejo and Kuhn [CK10], assumes that a process can choose either to beep or to listen; if it listens it can distinguish between silence or the presence of at least one beep. The aim of this paper is the study of the resolution of paradigms such as collision detection, computation of the degree of a vertex, colouring, or 2-hop-colouring in the framework of beeping models. For each of these problems we present Las Vegas or Monte Carlo algorithms and we analyse their complexities expressed in terms of the number of slots. We present also efficient randomised emulations of more powerful beeping models on the basic one. We illustrate emulation procedures with an efficient degree computation algorithm in the basic beeping model; this algorithm was given initially in a more powerful model.
In @cite_6 , Cornejo and Kuhn study the interval colouring problem, a variant of vertex colouring: an interval colouring assigns to each vertex an interval (a contiguous fraction) of the resources such that neighbouring vertices do not share resources. They assume that each vertex knows its degree as well as an upper bound on the maximum degree @math of the graph. They present, in the beeping model @math , a probabilistic algorithm which never terminates but stabilises to a correct @math -interval colouring within @math periods w.h.p., where @math is the size of the graph and a period consists of @math time slots with @math ; thus it stabilises in @math slots.
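The correctness condition of an interval colouring can be stated compactly: every vertex holds a non-empty contiguous block of the resources, and the blocks of neighbouring vertices are disjoint. The following small checker is illustrative; the half-open interval representation is an assumption, not taken from @cite_6 .

```python
def is_interval_colouring(intervals, adj):
    """Check that 'intervals' maps each vertex to a non-empty half-open
    slice [lo, hi) of the resources and that neighbours' slices are
    pairwise disjoint."""
    for v, (lo, hi) in intervals.items():
        if not lo < hi:                      # interval must be non-empty
            return False
        for u in adj[v]:
            ulo, uhi = intervals[u]
            if lo < uhi and ulo < hi:        # half-open intervals overlap
                return False
    return True
```

Note that non-adjacent vertices may reuse the same block, which is exactly what lets an interval colouring pack a bounded-degree graph into a fixed pool of resources.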
{ "cite_N": [ "@cite_6" ], "mid": [ "2951682038" ], "abstract": [ "We present the communication model, which assumes nodes have minimal knowledge about their environment and severely limited communication capabilities. Specifically, nodes have no information regarding the local or global structure of the network, don't have access to synchronized clocks and are woken up by an adversary. Moreover, instead on communicating through messages they rely solely on carrier sensing to exchange information. We study the problem of , a variant of vertex coloring specially suited for the studied beeping model. Given a set of resources, the goal of interval coloring is to assign every node a large contiguous fraction of the resources, such that neighboring nodes share no resources. To highlight the importance of the discreteness of the model, we contrast it against a continuous variant described in [17]. We present an O(1 @math (T ) @math T @math O( n) @math (T ) @math O( n) @math ( n)$ on the time required to solve interval coloring for this model against randomized algorithms. This lower bound implies that our algorithm is asymptotically optimal for constant degree graphs." ] }
1507.02721
2229908240
We consider networks of processes which interact with beeps. Various beeping models are used. The basic one, defined by Cornejo and Kuhn [CK10], assumes that a process can choose either to beep or to listen; if it listens it can distinguish between silence or the presence of at least one beep. The aim of this paper is the study of the resolution of paradigms such as collision detection, computation of the degree of a vertex, colouring, or 2-hop-colouring in the framework of beeping models. For each of these problems we present Las Vegas or Monte Carlo algorithms and we analyse their complexities expressed in terms of the number of slots. We present also efficient randomised emulations of more powerful beeping models on the basic one. We illustrate emulation procedures with an efficient degree computation algorithm in the basic beeping model; this algorithm was given initially in a more powerful model.
@cite_15 presents and analyses Las Vegas distributed algorithms which compute a MIS or a maximal matching on anonymous rings (in the point-to-point message passing model). Their bit complexity and time complexity are @math w.h.p.
{ "cite_N": [ "@cite_15" ], "mid": [ "2151519507" ], "abstract": [ "We present and analyse Las Vegas distributed algorithms which compute a MIS or a maximal matching for anonymous rings. Their bit complexity and time complexity are O(logn) with high probability. These algorithms are optimal modulo a multiplicative constant. Beyond the complexity results, the interest of this work stands in the description and the analysis of these algorithms which may be easily generalised. Furthermore, these results show a separation between the complexity of the MIS problem (and of the maximal matching problem) on the one hand and the colouring problem on the other. Colouring can be computed only in @W(logn) rounds on rings with high probability, while MIS is shown to have a faster algorithm. This is in contrast to other models, in which MIS is at least as hard as colouring." ] }
1507.02721
2229908240
We consider networks of processes which interact with beeps. Various beeping models are used. The basic one, defined by Cornejo and Kuhn [CK10], assumes that a process can choose either to beep or to listen; if it listens it can distinguish between silence or the presence of at least one beep. The aim of this paper is the study of the resolution of paradigms such as collision detection, computation of the degree of a vertex, colouring, or 2-hop-colouring in the framework of beeping models. For each of these problems we present Las Vegas or Monte Carlo algorithms and we analyse their complexities expressed in terms of the number of slots. We present also efficient randomised emulations of more powerful beeping models on the basic one. We illustrate emulation procedures with an efficient degree computation algorithm in the basic beeping model; this algorithm was given initially in a more powerful model.
Emek and Wattenhofer introduce in @cite_18 a model of distributed computation that resembles the beeping model: networked finite state machines (nFSM for short). This model enables a vertex to send the same message to all its neighbours; however, it is asynchronous, the states of vertices belong to a finite set, the degree of the vertices is bounded, and the set of messages is also finite. In the nFSM model they give a @math -MIS algorithm for graphs of size @math that uses a set of messages of size @math and has time complexity @math .
{ "cite_N": [ "@cite_18" ], "mid": [ "2052474723" ], "abstract": [ "A new model that depicts a network of randomized finite state machines operating in an asynchronous environment is introduced. This model, that can be viewed as a hybrid of the message passing model and cellular automata is suitable for applying the distributed computing lens to the study of networks of sub-microprocessor devices, e.g., biological cellular networks and man-made nano-networks. Although the computation and communication capabilities of each individual device in the new model are, by design, much weaker than those of an abstract computer, we show that some of the most important and extensively studied distributed computing problems can still be solved efficiently." ] }
1507.02992
810313514
Nowadays, the use of agile software development methods like Scrum is common in industry and academia. Considering the current attack landscape, it is clear that developing secure software should be a main concern in all software development projects. In traditional software projects, security issues require detailed planning in an initial planning phase, typically resulting in a detailed security analysis (e.g., threat and risk analysis), a security architecture, and instructions for security implementation (e.g., specification of key sizes and cryptographic algorithms to use). Agile software development methods like Scrum are known for reducing the initial planning phases (e.g., sprint 0 in Scrum) and for focusing more on producing running code. Scrum is also known for allowing fast adaptation of the emerging software to changes of customer wishes. For security, this means that it is likely that there are no detailed security architecture or security implementation instructions from the start of the project. It also means that a lot of design decisions will be made during the runtime of the project. Hence, to address security in Scrum, it is necessary to consider security issues throughout the whole software development process. Secure Scrum is a variation of the Scrum framework with special focus on the development of secure software throughout the whole software development process. It puts emphasis on implementation of security-related issues without the need of changing the underlying Scrum process or influencing team dynamics. Secure Scrum allows even non-security experts to spot security issues, to implement security features, and to verify implementations. A field test of Secure Scrum shows that the security level of software developed using Secure Scrum is higher than the security level of software developed using standard Scrum.
There are several methods for achieving software security, e.g., Cleanroom @cite_5 , Correctness by Construction @cite_11 , CMMI-DEV @cite_14 @cite_12 , etc. However, these methods cannot be used in Scrum, as they clash with the characteristics of agile software development in general and of Scrum in particular. Correctness by Construction @cite_11 , for example, advocates formal development in planning, verification, and testing, which is completely different from flexible approaches like agile methodologies. Other models like CMMI-DEV @cite_14 @cite_12 can deal with agile methods, but they are process models. The main difference is that CMMI focuses on processes whereas agile development focuses on the developers @cite_12 : Scrum and other agile methodologies are developer-centric, while CMMI is process-oriented. Concepts like the Microsoft SDL @cite_0 are designed to integrate agile methodologies but are also self-contained: the SDL cannot simply be plugged into Scrum or any other agile methodology. Scrum relies on rich communication, self-organisation, and collaboration between the involved project members, which conflicts with formalistic and rigid concepts.
{ "cite_N": [ "@cite_14", "@cite_0", "@cite_5", "@cite_12", "@cite_11" ], "mid": [ "", "2126762719", "1561923393", "2103612079", "2116352900" ], "abstract": [ "", "This introduction to the Security Development Lifecycle (SDL) provides a history of the methodology and guides you through each stage of a proven process-from design to release-that helps minimize security defects.", "Cleanroom software engineering is a theory-based, team-oriented engineering process for developing and certifying very high quality software under statistical quality control. (The name “Cleanroom” was chosen in analogy to the precision engineering of hardware cleanrooms.) Cleanroom software engineering methods include box structure specification and design, function-theoretic correctness verification, incremental development, and usage-based statistical testing for certification of software fitness for use. Cleanroom teams are organized into specification, development, and certification (testing) roles. The Cleanroom process originated in IBM in the mid-1980s to bring engineering rigor to software development. Cleanroom software engineering has been applied with excellent results in a variety of system developments, and continues to evolve as an engineering technology. 
Keywords: statistical quality control; software; cleanroom management process; cleanroom software development; verification; cleanroom; certification; results", "iv 1 Problem Definition 1 2 Origins from Two Extremes 3 2.1 The Origins of Agile Methods 3 2.2 The Origins of CMMI 5 3 Factors that Affect Perception 7 3.1 Misuse 7 3.2 Lack of Accurate Information 8 3.3 Terminology Difficulties 9 4 The Truth About CMMI 11 4.1 CMMI Is a Model, Not a Process Standard 11 4.2 Process Areas, Not Processes 13 4.3 SCAMPI Appraisals 14 5 The Truth About Agile 16 6 There Is Value in Both Paradigms 20 6.1 Challenges When Using Agile 20 6.2 Challenges When Using CMMI 22 6.3 Current Published Research 23 6.4 The Hope in Recent Trends 24 7 Problems Not Solved by CMMI nor Agile 27", "Praxis Critical Systems recently developed a secure certification authority for smart cards that had to satisfy performance and usability requirements while meeting stringent security constraints. The authors used a systematic process from requirements elicitation through formal specification, user interface prototyping, rigorous design, and coding to ensure these objectives' achievement. They show how a process that achieves normal commercial productivity can deliver a highly reliable system that meets all its throughput and usability goals." ] }
1507.02992
810313514
Nowadays, the use of agile software development methods like Scrum is common in industry and academia. Considering the current attack landscape, it is clear that developing secure software should be a main concern in all software development projects. In traditional software projects, security issues require detailed planning in an initial planning phase, typically resulting in a detailed security analysis (e.g., threat and risk analysis), a security architecture, and instructions for security implementation (e.g., specification of key sizes and cryptographic algorithms to use). Agile software development methods like Scrum are known for reducing the initial planning phases (e.g., sprint 0 in Scrum) and for focusing more on producing running code. Scrum is also known for allowing fast adaptation of the emerging software to changes of customer wishes. For security, this means that it is likely that there are no detailed security architecture or security implementation instructions from the start of the project. It also means that a lot of design decisions will be made during the runtime of the project. Hence, to address security in Scrum, it is necessary to consider security issues throughout the whole software development process. Secure Scrum is a variation of the Scrum framework with special focus on the development of secure software throughout the whole software development process. It puts emphasis on implementation of security-related issues without the need of changing the underlying Scrum process or influencing team dynamics. Secure Scrum allows even non-security experts to spot security issues, to implement security features, and to verify implementations. A field test of Secure Scrum shows that the security level of software developed using Secure Scrum is higher than the security level of software developed using standard Scrum.
S-Scrum @cite_6 is a security-enhanced version of Scrum. It modifies the Scrum process by inserting so-called spikes; a spike contains analysis, design, and verification activities related to security concerns. Further, requirements engineering (RE) during story gathering feeds into this process, for which the authors propose tools like Misuse Stories @cite_1 . This approach is very formalistic and requires many changes to standard Scrum, which hinders its deployment in environments already using Scrum. Secure Scrum, in contrast, does not change standard Scrum.
{ "cite_N": [ "@cite_1", "@cite_6" ], "mid": [ "2153177282", "1563103026" ], "abstract": [ "Use case diagrams (L. , 1992) have proven quite helpful in requirements engineering, both for eliciting requirements and getting a better overview of requirements already stated. However, not all kinds of requirements are equally well supported by use case diagrams. They are good for functional requirements, but poorer at e.g., security requirements, which often concentrate on what should not happen in the system. With the advent of e- and m-commerce applications, security requirements are growing in importance, also for quite simple applications where a short lead time is important. Thus, it would be interesting to look into the possibility for applying use cases on this arena. The paper suggests how this can be done, extending the diagrams with misuse cases. This new construct makes it possible to represent actions that the system should prevent, together with those actions which it should support.", "To care for security in early stages of software development has always been a major engineering trend. However, due to the existence of unpreventable and accidental security faults within the system, it is not always possible to entirely identify and mitigate the security threats. This may eventually lead to security failure of the target system. To avoid security failure, it is required to incorporate fault tolerance (i.e. intrusion tolerant) into the security requirements of the system. In this paper, we propose a new technique toward description of security requirements of Intrusion Tolerant Systems (ITS) using fuzzy logic. We care for intrusion tolerance in security requirements of the system through considering partial satisfaction of security goals. 
This partiality is accepted and formally described through establishment of a Goal-Based Fuzzy Grammar (GFG) and its respective Goal-Based Fuzzy Language (GFL) for describing Security Requirement Model (SRM) of the target ITS." ] }
1507.02992
810313514
Nowadays, the use of agile software development methods like Scrum is common in industry and academia. Considering the current attack landscape, it is clear that developing secure software should be a main concern in all software development projects. In traditional software projects, security issues require detailed planning in an initial planning phase, typically resulting in a detailed security analysis (e.g., threat and risk analysis), a security architecture, and instructions for security implementation (e.g., specification of key sizes and cryptographic algorithms to use). Agile software development methods like Scrum are known for reducing the initial planning phases (e.g., sprint 0 in Scrum) and for focusing more on producing running code. Scrum is also known for allowing fast adaptation of the emerging software to changes of customer wishes. For security, this means that it is likely that there are no detailed security architecture or security implementation instructions from the start of the project. It also means that a lot of design decisions will be made during the runtime of the project. Hence, to address security in Scrum, it is necessary to consider security issues throughout the whole software development process. Secure Scrum is a variation of the Scrum framework with special focus on the development of secure software throughout the whole software development process. It puts emphasis on implementation of security-related issues without the need of changing the underlying Scrum process or influencing team dynamics. Secure Scrum allows even non-security experts to spot security issues, to implement security features, and to verify implementations. A field test of Secure Scrum shows that the security level of software developed using Secure Scrum is higher than the security level of software developed using standard Scrum.
Another approach is described in @cite_4 . It introduces a Security Backlog beside the Product Backlog and the Sprint Backlog, together with a new role: the Security Master, who is responsible for this new backlog. This approach introduces an expert, describes the security-relevant parts in the backlog, and is adapted to the Scrum process. However, it lacks flexibility (as described in the introduction) and does not fit naturally into a grown Scrum team. Also, the introduction of a new role changes the management of the project. Moreover, with this approach it is not possible to interconnect standard Scrum user stories with the introduced security-related stories. Secure Scrum, in contrast, keeps the connection between security issues and the user stories of the Product Backlog and the tasks of the Sprint Backlog, respectively.
{ "cite_N": [ "@cite_4" ], "mid": [ "2087542257" ], "abstract": [ "The rapid development of software nowadays requires the high speed software product delivery by development teams. In order to deliver the product faster, the development teams make a transformation to their conventional software development lifecycle to agile development method which can enable them towards speedy delivery of software coping with the requirements-change phenomenon. In this scenario, one of the most popular techniques in Agile development is the Scrum methodology which has been criticised in term of its security aspect cycle that ignores the security risk management activity. However, the current practices suggest that security should be considered during all stages of the software development life cycle. In order to address the aforementioned issue, this paper proposes the integration of security principles in development phases using scrum and suggests the element of security backlog that can be used as security features analysis and implementation in scrum phases." ] }
1507.02992
810313514
Nowadays, the use of agile software development methods like Scrum is common in industry and academia. Considering the current attack landscape, it is clear that developing secure software should be a main concern in all software development projects. In traditional software projects, security issues require detailed planning in an initial planning phase, typically resulting in a detailed security analysis (e.g., threat and risk analysis), a security architecture, and instructions for security implementation (e.g., specification of key sizes and cryptographic algorithms to use). Agile software development methods like Scrum are known for reducing the initial planning phases (e.g., sprint 0 in Scrum) and for focusing more on producing running code. Scrum is also known for allowing fast adaptation of the emerging software to changes of customer wishes. For security, this means that it is likely that there are no detailed security architecture or security implementation instructions from the start of the project. It also means that a lot of design decisions will be made during the runtime of the project. Hence, to address security in Scrum, it is necessary to consider security issues throughout the whole software development process. Secure Scrum is a variation of the Scrum framework with special focus on the development of secure software throughout the whole software development process. It puts emphasis on implementation of security-related issues without the need of changing the underlying Scrum process or influencing team dynamics. Secure Scrum allows even non-security experts to spot security issues, to implement security features, and to verify implementations. A field test of Secure Scrum shows that the security level of software developed using Secure Scrum is higher than the security level of software developed using standard Scrum.
In @cite_16 an informal game (Protection Poker) is used to estimate security risks and to explain security requirements to the development team. The related case study shows that this is a viable way to integrate security awareness into Scrum. It solves the requirements engineering problem; however, it does not provide a solution for the implementation and verification phases of software development and is hence incomplete. Secure Scrum, in contrast, provides a solution for all phases of software development.
{ "cite_N": [ "@cite_16" ], "mid": [ "2028813279" ], "abstract": [ "Without infinite resources, software development teams must prioritize security fortification efforts to prevent the most damaging attacks. The Protection Poker \"game\" is a collaborative means for guiding this prioritization and has the potential to improve software security practices and team software security knowledge." ] }
1507.02746
2218978613
In this paper we consider the pairwise kidney exchange game. This game naturally appears in situations that some service providers benefit from pairwise allocations on a network, such as the kidney exchanges between hospitals. present a @math -approximation randomized truthful mechanism for this problem. This is the best known result in this setting with multiple players. However, we note that the variance of the utility of an agent in this mechanism may be as large as @math , which is not desirable in a real application. In this paper we resolve this issue by providing a @math -approximation randomized truthful mechanism in which the variance of the utility of each agent is at most @math . Interestingly, we could apply our technique to design a deterministic mechanism such that, if an agent deviates from the mechanism, she does not gain more than @math . We call such a mechanism an almost truthful mechanism. Indeed, in a practical scenario, an almost truthful mechanism is likely to imply a truthful mechanism. We believe that our approach can be used to design low risk or almost truthful mechanisms for other problems.
Achieving truthful mechanisms that maximize social welfare is thus not possible. However, approximately truthful mechanisms may be achievable. @cite_4 used the same example as in Figure to show that there is no deterministic truthful mechanism for the kidney-exchange game with an approximation ratio better than @math . Moreover, they show that there is no randomized truthful mechanism with an approximation ratio better than @math . They also introduce a deterministic @math -approximation truthful mechanism for the two-player kidney exchange game and a randomized @math -approximation truthful mechanism for the multi-agent kidney exchange game. Later, @cite_7 improved the approximation ratio for two agents to an expected @math -approximation truthful mechanism. It is conjectured that there is no deterministic constant-approximation truthful mechanism for the multi-agent kidney exchange game, even for three agents @cite_4 .
{ "cite_N": [ "@cite_4", "@cite_7" ], "mid": [ "1989795702", "2064135158" ], "abstract": [ "As kidney exchange programs are growing, manipulation by hospitals becomes more of an issue. Assuming that hospitals wish to maximize the number of their own patients who receive a kidney, they may have an incentive to withhold some of their incompatible donor–patient pairs and match them internally, thus harming social welfare. We study mechanisms for two-way exchanges that are strategyproof, i.e., make it a dominant strategy for hospitals to report all their incompatible pairs. We establish lower bounds on the welfare loss of strategyproof mechanisms, both deterministic and randomized, and propose a randomized mechanism that guarantees at least half of the maximum social welfare in the worst case. Simulations using realistic distributions for blood types and other parameters suggest that in practice our mechanism performs much closer to optimal.", "We study a mechanism design version of matching computation in graphs that models the game played by hospitals participating in pairwise kidney exchange programs. We present a new randomized matching mechanism for two agents which is truthful in expectation and has an approximation ratio of 3 2 to the maximum cardinality matching. This is an improvement over a recent upper bound of 2 (, 2010 2]) and, furthermore, our mechanism beats for the first time the lower bound on the approximation ratio of deterministic truthful mechanisms. We complement our positive result with new lower bounds. Among other statements, we prove that the weaker incentive compatibility property of truthfulness in expectation in our mechanism is necessary; universally truthful mechanisms that have an inclusion-maximality property have an approximation ratio of at least 2." ] }
1507.02746
2218978613
In this paper we consider the pairwise kidney exchange game. This game naturally appears in situations where some service providers benefit from pairwise allocations on a network, such as kidney exchanges between hospitals. present a @math -approximation randomized truthful mechanism for this problem. This is the best known result in this setting with multiple players. However, we note that the variance of the utility of an agent in this mechanism may be as large as @math , which is not desirable in a real application. In this paper we resolve this issue by providing a @math -approximation randomized truthful mechanism in which the variance of the utility of each agent is at most @math . Interestingly, we could apply our technique to design a deterministic mechanism such that, if an agent deviates from the mechanism, she does not gain more than @math . We call such a mechanism an almost truthful mechanism. Indeed, in a practical scenario, an almost truthful mechanism is likely to imply a truthful mechanism. We believe that our approach can be used to design low-risk or almost truthful mechanisms for other problems.
Almost truthful mechanisms have been widely studied (see @cite_3 , @cite_2 and @cite_6 ), with slightly different definitions. However, all use the concept that an agent should not gain more than a small amount by deviating from the truthful mechanism.
{ "cite_N": [ "@cite_6", "@cite_3", "@cite_2" ], "mid": [ "2258240916", "1622653397", "2117943181" ], "abstract": [ "This paper deals with the implementation of Social Choice Functions in fair multiagent decision problems. In such problems the determination of the best alternatives often relies on the maximization of a non-utilitarian Social Welfare Function so as to account for equity. However, in such decision processes, agents may have incentive to misreport their preferences to obtain more favorable choices. It is well known that, for Social Choice Functions based on the maximization of an affine aggregator of individual utilities, we can preclude any manipulation by introducing payments (VCG mechanisms). Unfortunately such truthful mechanisms do not exist for non-affine maximizers (Roberts' Theorem). For this reason, we introduce here a notion of \"almost-truthfulness\" and investigate the existence of payments enabling the elaboration of almost-truthful mechanisms for non-additive Social Welfare Functions such as Social Gini Evaluation Functions used in fair optimization.", "This manuscript presents an alternative implementation of the truthful-in-expectation mechanism of Dughmi, Roughgarden and Yan for combinatorial auctions with weighted-matroid-rank-sum valuations. The new implementation uses only value queries and is approximately truthful-in-expectation, in the sense that by reporting truthfully each agent maximizes his utility within a multiplicative 1-o(1) factor. It still provides an optimal (1-1 e-o(1))-approximation in social welfare. We achieve this by first presenting an approximately maximal-in-distributional-range allocation rule and then showing a black-box transformation to an approximately truthful-in-expectation mechanism.", "We present an approximately-efficient and approximately-strategyproof auction mechanism for a single-good multi-unit allocation problem. The bidding language in our auctions allows marginal-decreasing piecewise constant curves. First, we develop a fully polynomial-time approximation scheme for the multi-unit allocation problem, which computes a (1+e)-approximation in worst-case time T = O(n^3 / e), given n bids each with a constant number of pieces. Second, we embed this approximation scheme within a Vickrey-Clarke-Groves (VCG) mechanism and compute payments to n agents for an asymptotic cost of O(T log n). The maximal possible gain from manipulation to a bidder in the combined scheme is bounded by e/(1+e) V, where V is the total surplus in the efficient outcome." ] }
1507.03176
2245009391
Nonnegative Matrix Factorization (NMF) aims to factorize a matrix into two optimized nonnegative matrices appropriate for the intended applications. The method has been widely used for unsupervised learning tasks, including recommender systems (rating matrix of users by items) and document clustering (weighting matrix of papers by keywords). However, traditional NMF methods typically assume the number of latent factors (i.e., dimensionality of the loading matrices) to be fixed. This assumption makes them inflexible for many applications. In this paper, we propose a nonparametric NMF framework to mitigate this issue by using dependent Indian Buffet Processes (dIBP). In a nutshell, we apply a correlation function for the generation of two stick weights associated with each pair of columns of loading matrices, while still maintaining their respective marginal distribution specified by IBP. As a consequence, the generation of the two loading matrices will be column-wise (indirectly) correlated. Under this same framework, two classes of correlation function are proposed: (1) using a Bivariate beta distribution and (2) using a Copula function. Both methods allow us to adapt our work to various applications by flexibly choosing an appropriate parameter setting. Compared with the other state-of-the-art approaches in this area, such as the Gaussian Process (GP)-based dIBP, our work is seen to be much more flexible in terms of allowing the two corresponding binary matrix columns to have greater variations in their non-zero entries. Our experiments on real-world and synthetic datasets show that the three proposed models perform well on the document clustering task compared with standard NMF, without predefining the dimension of the factor matrices, and that the Bivariate beta distribution-based and Copula-based models have better flexibility than the GP-based model.
The nonparametric extension of NMF mainly relies on the machinery of stochastic processes. The Beta process is used as the prior of one factor matrix in @cite_19 . The Gamma process is used to generate the coefficients for the combination of corresponding elements in the factor matrices, rather than as the prior of the factor matrices @cite_2 . Both have been successfully applied to music analysis. IBP is used as the prior of one factor matrix while the other factor matrix is drawn from a Gaussian distribution; an efficient inference method (Power-EP) is proposed for this model @cite_15 . These processes can be considered extensions of the latent feature factor model @cite_22 . However, no existing work places priors on both factor matrices simultaneously.
{ "cite_N": [ "@cite_19", "@cite_15", "@cite_22", "@cite_2" ], "mid": [ "2407740810", "2605884778", "2104827998", "2113526703" ], "abstract": [ "Nonnegative matrix factorization (NMF) has been widely used for discovering physically meaningful latent components in audio signals to facilitate source separation. Most of the existing NMF algorithms require that the number of latent components is provided a priori, which is not always possible. In this paper, we leverage developments from the Bayesian nonparametrics and compressive sensing literature to propose a probabilistic Beta Process Sparse NMF (BP-NMF) model, which can automatically infer the proper number of latent components based on the data. Unlike previous models, BP-NMF explicitly assumes that these latent components are often completely silent. We derive a novel mean-field variational inference algorithm for this nonconjugate model and evaluate it on both synthetic data and real recordings on various tasks.", "", "The Indian buffet process is a stochastic process defining a probability distribution over equivalence classes of sparse binary matrices with a finite number of rows and an unbounded number of columns. This distribution is suitable for use as a prior in probabilistic models that represent objects using a potentially infinite array of features, or that involve bipartite graphs in which the size of at least one class of nodes is unknown. We give a detailed derivation of this distribution, and illustrate its use as a prior in an infinite latent feature model. We then review recent applications of the Indian buffet process in machine learning, discuss its extensions, and summarize its connections to other stochastic processes.", "Recent research in machine learning has focused on breaking audio spectrograms into separate sources of sound using latent variable decompositions. These methods require that the number of sources be specified in advance, which is not always possible. To address this problem, we develop Gamma Process Nonnegative Matrix Factorization (GaP-NMF), a Bayesian nonparametric approach to decomposing spectrograms. The assumptions behind GaP-NMF are based on research in signal processing regarding the expected distributions of spectrogram data, and GaP-NMF automatically discovers the number of latent sources. We derive a mean-field variational inference algorithm and evaluate GaP-NMF on both synthetic data and recorded music." ] }
1507.02801
2243297270
A mixture of factor analyzers is a semi-parametric density estimator that generalizes the well-known mixtures of Gaussians model by allowing each Gaussian in the mixture to be represented in a different lower-dimensional manifold. This paper presents a robust and parsimonious model selection algorithm for training a mixture of factor analyzers, carrying out simultaneous clustering and locally linear, globally nonlinear dimensionality reduction. Permitting a different number of factors per mixture component, the algorithm adapts the model complexity to the data complexity. We compare the proposed algorithm with related automatic model selection algorithms on a number of benchmarks. The results indicate the effectiveness of this fast
There are numerous studies on mixture model class selection. These include information-theoretic trade-offs between likelihood and model complexity @cite_27 @cite_18 @cite_8 @cite_36 @cite_39 , greedy approaches @cite_19 @cite_23 , and full Bayesian treatments of the problem @cite_21 @cite_25 @cite_33 @cite_5 . A brief review of related automatic model selection methods is given in Table ; a detailed treatment can be found in @cite_6 . Here we provide some detail on the automatic model selection methods that are most closely related to our work.
{ "cite_N": [ "@cite_18", "@cite_33", "@cite_8", "@cite_36", "@cite_21", "@cite_6", "@cite_39", "@cite_19", "@cite_27", "@cite_23", "@cite_5", "@cite_25" ], "mid": [ "2168175751", "2128012710", "2106596127", "2034230784", "", "2021137021", "170307911", "2140136927", "2142635246", "2009391063", "2083712543", "2120636621" ], "abstract": [ "", "Clustering is a fundamental task in many vision applications. To date, most clustering algorithms work in a batch setting and training examples must be gathered in a large group before learning can begin. Here we explore incremental clustering, in which data can arrive continuously. We present a novel incremental model-based clustering algorithm based on nonparametric Bayesian methods, which we call memory bounded variational Dirichlet process (MB-VDP). The number of clusters are determined flexibly by the data and the approach can be used to automatically discover object categories. The computational requirements required to produce model updates are bounded and do not grow with the amount of data processed. The technique is well suited to very large datasets, and we show that our approach outperforms existing online alternatives for learning nonparametric Bayesian mixture models.", "of the number of bits required to write down the observed data, has been reformulated to extend the classical maximum likelihood principle. The principle permits estimation of the number of the parameters in statistical models in addition to their values and even of the way the parameters appear in the models; i.e., of the model structures. The principle rests on a new way to interpret and construct a universal prior distribution for the integers, which makes sense even when the parameter is an individual object. Truncated realvalued parameters are converted to integers by dividing them by their precision, and their prior is determined from the universal prior for the integers by optimizing the precision. 1. Introduction. In this paper we study estimation based upon the principle of minimizing the total number of binary digits required to rewrite the observed data, when each observation is given with some precision. Instead of attempting at an absolutely shortest description, which would be futile, we look for the optimum relative to a class of parametrically given distributions. This Minimum Description Length (MDL) principle, which we introduced in a less comprehensive form in [25], turns out to degenerate to the more familiar Maximum Likelihood (ML) principle in case the number of parameters in the models is fixed, so that the description length of the parameters themselves can be ignored. In another extreme case, where the parameters determine the data, it similarly degenerates to Jaynes's principle of maximum entropy, [14]. But the main power of the new criterion is that it permits estimates of the entire model, its parameters, their number, and even the way the parameters appear in the model; i.e., the model structure. Hence, there will be no need to supplement the estimated parameters with a separate hypothesis test to decide whether a model is adequately parameterized or, perhaps, over parameterized.", "Summary form only. Inspired by Kolmogorov's structure function for finite sets as models of data in the algorithmic theory of information we adapt the construct to families of probability models to avoid the noncomputability problem. The picture of modeling looks then as follows: The models in the family have a double index, where the first specifies a structure, ranging over a finite or a countable set, and the second consists of parameter values, ranging over a continuum. An optimal structure index can be determined by the MDL (Minimum Description Length) principle in a two-part code, where the sum of the code lengths for the structure and the data is minimized. The latter is obtained from the universal NML (Normalized Maximum Likelihood) model for the subfamily of models having a specified structure. The determination of the optimal model in the optimized structure is more difficult. It requires a partition of the parameter space into equivalence classes, each associated with a model, in such a way that the Kullback-Leibler distance between any two adjacent models is equal and that the models are optimally distinguishable from the given amount of data. This notion of distinguishability is a modification of a related idea of Balasubramanian. The particular model, specified by the observed data, is the simplest one that incorporates all the properties in the data that can be extracted with the model class considered.", "", "Model-based clustering is a popular tool which is renowned for its probabilistic foundations and its flexibility. However, high-dimensional data are nowadays more and more frequent and, unfortunately, classical model-based clustering techniques show a disappointing behavior in high-dimensional spaces. This is mainly due to the fact that model-based clustering methods are dramatically over-parametrized in this case. However, high-dimensional spaces have specific characteristics which are useful for clustering and recent techniques exploit those characteristics. After having recalled the bases of model-based clustering, dimension reduction approaches, regularization-based techniques, parsimonious modeling, subspace clustering methods and clustering methods based on variable selection are reviewed. Existing softwares for model-based clustering of high-dimensional data will be also reviewed and their practical use will be illustrated on real-world data sets.", "SUMMARY The systematic variation within a set of data, as represented by a usual statistical model, may be used to encode the data in a more compact form than would be possible if they were considered to be purely random. The encoded form has two parts. The first states the inferred estimates of the unknown parameters in the model, the second states the data using an optimal code based on the data probability distribution implied by those parameter estimates. Choosing the model and the estimates that give the most compact coding leads to an interesting general inference procedure. In its strict form it has great generality and several nice properties but is computationally infeasible. An approximate form is developed and its relation to other methods is explored.", "This article concerns the greedy learning of gaussian mixtures. In the greedy approach, mixture components are inserted into the mixture one after the other. We propose a heuristic for searching for the optimal component to insert. In a randomized manner, a set of candidate new components is generated. For each of these candidates, we find the locally optimal new component and insert it into the existing mixture. The resulting algorithm resolves the sensitivity to initialization of state-of-the-art methods, like expectation maximization, and has running time linear in the number of data points and quadratic in the (final) number of mixture components. Due to its greedy nature, the algorithm can be particularly useful when the optimal number of mixture components is unknown. Experimental results comparing the proposed algorithm to other methods on density estimation and texture segmentation are provided.", "The history of the development of statistical hypothesis testing in time series analysis is reviewed briefly and it is pointed out that the hypothesis testing procedure is not adequately defined as the procedure for statistical model identification. The classical maximum likelihood estimation procedure is reviewed and a new estimate minimum information theoretical criterion (AIC) estimate (MAICE) which is designed for the purpose of statistical identification is introduced. When there are several competing models the MAICE is defined by the model and the maximum likelihood estimates of the parameters which give the minimum of AIC defined by AIC = (-2)log-(maximum likelihood) + 2(number of independently adjusted parameters within the model). MAICE provides a versatile procedure for statistical model identification which is free from the ambiguities inherent in the application of conventional hypothesis testing procedure. The practical utility of MAICE in time series analysis is demonstrated with some numerical examples.", "A mixture of factor analyzer is a semiparametric density estimator that performs clustering and dimensionality reduction in each cluster (component) simultaneously. It performs nonlinear dimensionality reduction by modeling the density as a mixture of local linear models. The approach can be used for classification by modeling each class-conditional density using a mixture model and the complete data is then a mixture of mixtures. We propose an incremental mixture of factor analysis algorithm where the number of components (local models) in the mixture and the number of factors in each component (local dimensionality) are determined adaptively. Our results on different pattern classification tasks prove the utility of our approach and indicate that our algorithms find a good trade-off between model complexity and accuracy.", "Three Bayesian related approaches, namely, variational Bayesian (VB), minimum message length (MML) and Bayesian Ying-Yang (BYY) harmony learning, have been applied to automatically determining an appropriate number of components during learning Gaussian mixture model (GMM). This paper aims to provide a comparative investigation on these approaches with not only a Jeffreys prior but also a conjugate Dirichlet-Normal-Wishart (DNW) prior on GMM. In addition to adopting the existing algorithms either directly or with some modifications, the algorithm for VB with Jeffreys prior and the algorithm for BYY with DNW prior are developed in this paper to fill the missing gap. The performances of automatic model selection are evaluated through extensive experiments, with several empirical findings: 1) Considering priors merely on the mixing weights, each of three approaches makes biased mistakes, while considering priors on all the parameters of GMM makes each approach reduce its bias and also improve its performance. 2) As Jeffreys prior is replaced by the DNW prior, all the three approaches improve their performances. Moreover, Jeffreys prior makes MML slightly better than VB, while the DNW prior makes VB better than MML. 3) As the hyperparameters of DNW prior are further optimized by each of its own learning principle, BYY improves its performances while VB and MML deteriorate their performances when there are too many free hyper-parameters. Actually, VB and MML lack a good guide for optimizing the hyper-parameters of DNW prior. 4) BYY considerably outperforms both VB and MML for any type of priors and whether hyper-parameters are optimized. Being different from VB and MML that rely on appropriate priors to perform model selection, BYY does not highly depend on the type of priors. It has model selection ability even without priors and performs already very well with Jeffreys prior, and incrementally improves as Jeffreys prior is replaced by the DNW prior. Finally, all algorithms are applied on the Berkeley segmentation database of real world images. Again, BYY considerably outperforms both VB and MML, especially in detecting the objects of interest from a confusing background.", "In a Bayesian mixture model it is not necessary a priori to limit the number of components to be finite. In this paper an infinite Gaussian mixture model is presented which neatly sidesteps the difficult problem of finding the \"right\" number of mixture components. Inference in the model is done using an efficient parameter-free Markov Chain that relies entirely on Gibbs sampling." ] }
1507.02801
2243297270
A mixture of factor analyzers is a semi-parametric density estimator that generalizes the well-known mixtures of Gaussians model by allowing each Gaussian in the mixture to be represented in a different lower-dimensional manifold. This paper presents a robust and parsimonious model selection algorithm for training a mixture of factor analyzers, carrying out simultaneous clustering and locally linear, globally nonlinear dimensionality reduction. Permitting a different number of factors per mixture component, the algorithm adapts the model complexity to the data complexity. We compare the proposed algorithm with related automatic model selection algorithms on a number of benchmarks. The results indicate the effectiveness of this fast
In one of the most popular model selection approaches for Gaussian mixture models (GMMs), Figueiredo and Jain proposed using an MML criterion for determining the number of components in the mixture, and showed that their approach is equivalent to assuming Dirichlet priors on the mixture proportions @cite_17 . In their method, a large number of components (typically 25-30) is fit to the training set, and these components are eliminated one by one. At each iteration, the EM algorithm is used to find a converged set of model parameters. The algorithm generates and stores all intermediate models, and selects the one that optimizes the MML criterion.
{ "cite_N": [ "@cite_17" ], "mid": [ "2015245929" ], "abstract": [ "This paper proposes an unsupervised algorithm for learning a finite mixture model from multivariate data. The adjective \"unsupervised\" is justified by two properties of the algorithm: 1) it is capable of selecting the number of components and 2) unlike the standard expectation-maximization (EM) algorithm, it does not require careful initialization. The proposed method also avoids another drawback of EM for mixture fitting: the possibility of convergence toward a singular estimate at the boundary of the parameter space. The novelty of our approach is that we do not use a model selection criterion to choose one among a set of preestimated candidate models; instead, we seamlessly integrate estimation and model selection in a single algorithm. Our technique can be applied to any type of parametric mixture model for which it is possible to write an EM algorithm; in this paper, we illustrate it with experiments involving Gaussian mixtures. These experiments testify for the good performance of our approach." ] }
1507.02801
2243297270
A mixture of factor analyzers is a semi-parametric density estimator that generalizes the well-known mixtures of Gaussians model by allowing each Gaussian in the mixture to be represented in a different lower-dimensional manifold. This paper presents a robust and parsimonious model selection algorithm for training a mixture of factor analyzers, carrying out simultaneous clustering and locally linear, globally nonlinear dimensionality reduction. Permitting a different number of factors per mixture component, the algorithm adapts the model complexity to the data complexity. We compare the proposed algorithm with related automatic model selection algorithms on a number of benchmarks. The results indicate the effectiveness of this fast
Using the parsimonious factor analysis representation described in , it is possible to explore many models that lie between full-covariance and diagonal Gaussian mixtures in their number of parameters. The resulting mixture of factor analysers (MoFA) can be considered a noise-robust version of the mixtures of probabilistic principal component analysers (PPCA) approach @cite_16 . Figure summarizes the relations between the mixture representations in this area.
{ "cite_N": [ "@cite_16" ], "mid": [ "2146610201" ], "abstract": [ "Principal component analysis (PCA) is one of the most popular techniques for processing, compressing, and visualizing data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a combination of local linear PCA projections. However, conventional PCA does not correspond to a probability density, and so there is no unique way to combine PCA models. Therefore, previous attempts to formulate mixture models for PCA have been ad hoc to some extent. In this article, PCA is formulated within a maximum likelihood framework, based on a specific form of gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analyzers, whose parameters can be determined using an expectationmaximization algorithm. We discuss the advantages of this model in the context of clustering, density modeling, and local dimensionality reduction, and we demonstrate its application to image compression and handwritten digit recognition." ] }
1507.02801
2243297270
A mixture of factor analyzers is a semi-parametric density estimator that generalizes the well-known mixtures of Gaussians model by allowing each Gaussian in the mixture to be represented in a different lower-dimensional manifold. This paper presents a robust and parsimonious model selection algorithm for training a mixture of factor analyzers, carrying out simultaneous clustering and locally linear, globally nonlinear dimensionality reduction. Permitting a different number of factors per mixture component, the algorithm adapts the model complexity to the data complexity. We compare the proposed algorithm with related automatic model selection algorithms on a number of benchmarks. The results indicate the effectiveness of this fast
Ghahramani and Beal @cite_11 have proposed a variational Bayes scheme (VBMoFA) for model selection in MoFA, which allows the local dimensionality of components and their total number to be automatically determined. In this study, we use VBMoFA as one of the benchmarks.
{ "cite_N": [ "@cite_11" ], "mid": [ "2151454335" ], "abstract": [ "We present an algorithm that infers the model structure of a mixture of factor analysers using an efficient and deterministic variational approximation to full Bayesian integration over model parameters. This procedure can automatically determine the optimal number of components and the local dimensionality of each component (i.e. the number of factors in each factor analyser). Alternatively it can be used to infer posterior distributions over number of components and dimensionalities. Since all parameters are integrated out the method is not prone to overfitting. Using a stochastic procedure for adding components it is possible to perform the variational optimisation incrementally and to avoid local maxima. Results show that the method works very well in practice and correctly infers the number and dimensionality of nontrivial synthetic examples. By importance sampling from the variational approximation we show how to obtain unbiased estimates of the true evidence, the exact predictive density, and the KL divergence between the variational posterior and the true posterior, not only in this model but for variational approximations in general." ] }
1507.02801
2243297270
A mixture of factor analyzers is a semi-parametric density estimator that generalizes the well-known mixtures of Gaussians model by allowing each Gaussian in the mixture to be represented in a different lower-dimensional manifold. This paper presents a robust and parsimonious model selection algorithm for training a mixture of factor analyzers, carrying out simultaneous clustering and locally linear, globally nonlinear dimensionality reduction. Permitting different number of factors per mixture component, the algorithm adapts the model complexity to the data complexity. We compare the proposed algorithm with related automatic model selection algorithms on a number of benchmarks. The results indicate the effectiveness of this fast
To alleviate the computational complexity of the variational approach, a greedy model selection algorithm was proposed by Salah and Alpaydın @cite_23 . This incremental approach (IMoFA) starts by fitting a single-component, single-factor model to the data and adds factors and components in each iteration using fast heuristic measures until a convergence criterion is met. The algorithm allows components to have as many factors as necessary, and uses a validation set to stop model adaptation, as well as to avoid over-fitting. This is the third algorithm we use to compare with the proposed approach, which we describe in detail next.
{ "cite_N": [ "@cite_23" ], "mid": [ "2009391063" ], "abstract": [ "A mixture of factor analyzer is a semiparametric density estimator that performs clustering and dimensionality reduction in each cluster (component) simultaneously. It performs nonlinear dimensionality reduction by modeling the density as a mixture of local linear models. The approach can be used for classification by modeling each class-conditional density using a mixture model and the complete data is then a mixture of mixtures. We propose an incremental mixture of factor analysis algorithm where the number of components (local models) in the mixture and the number of factors in each component (local dimensionality) are determined adaptively. Our results on different pattern classification tasks prove the utility of our approach and indicate that our algorithms find a good trade-off between model complexity and accuracy." ] }
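A rough numpy sketch of the mixture-of-factor-analyzers idea discussed above: hard-assign points to components, then fit one local linear factor per component. This is a crude stand-in for the joint EM/variational training in the cited papers, using synthetic data and made-up sizes:

```python
import numpy as np

rng = np.random.RandomState(0)
# Synthetic data: two clusters, each concentrated near its own
# 1-D subspace of R^3 (the "locally linear" structure MoFA models).
t = rng.randn(200, 1)
X = np.vstack([
    t[:100] * np.array([1.0, 2.0, 0.0]) + 0.05 * rng.randn(100, 3),
    t[100:] * np.array([0.0, 1.0, 3.0]) + 10.0 + 0.05 * rng.randn(100, 3),
])

# Hard-assign points to two components with a few k-means iterations.
centers = X[[0, -1]].copy()
for _ in range(10):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = d2.argmin(1)
    centers = np.array([X[labels == k].mean(0) for k in range(2)])

# Fit one local factor (leading principal direction) per component.
local_dirs = []
for k in range(2):
    Xk = X[labels == k] - centers[k]
    _, _, vt = np.linalg.svd(Xk, full_matrices=False)
    local_dirs.append(vt[0])

print([d.shape for d in local_dirs])  # [(3,), (3,)]
```

Each recovered direction approximates the generating subspace of its cluster; the real algorithms additionally choose the number of components and factors automatically.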
1507.02761
1850955129
Future machine-to-machine (M2M) communications need to support a massive number of devices communicating with each other with little or no human intervention. Random access techniques were originally proposed to enable M2M multiple access, but suffer from severe congestion and access delay in an M2M system with a large number of devices. In this paper, we propose a novel multiple access scheme for M2M communications based on the capacity-approaching analog fountain code to efficiently minimize the access delay and satisfy the delay requirement for each device. This is achieved by allowing M2M devices to transmit at the same time on the same channel in an optimal probabilistic manner based on their individual delay requirements. Simulation results show that the proposed scheme achieves a near optimal rate performance and at the same time guarantees the delay requirements of the devices. We further propose a simple random access strategy and characterize the required overhead. Simulation results show that the proposed approach significantly outperforms the existing random access schemes currently used in long term evolution advanced (LTE-A) standard in terms of the access delay.
Additionally, different MTC devices have diverse service requirements and traffic patterns in M2M communications. Generally, the traffic can be divided into four categories. The first is alarm traffic, which occurs randomly with very low probability but has a very strict delay requirement. The second can be modeled by a Poisson distribution whose parameters depend on the application @cite_22 . The third is regular traffic, such as smart metering applications, and the last is streaming, as in video surveillance applications. Current proposals for enabling M2M communications do not consider priorities among devices or their different quality of service (QoS) requirements. These approaches are mostly inefficient for M2M communications, as they are generally designed for a fixed payload size and thus cannot support M2M applications with different service requirements.
{ "cite_N": [ "@cite_22" ], "mid": [ "2962894043" ], "abstract": [ "For wireless systems in which randomly arriving devices attempt to transmit a fixed payload to a central receiver, we develop a framework to characterize the system throughput as a function of arrival rate and per-device data rate. The frame- work considers both coordinated transmission (where devices are scheduled) and uncoordinated transmission (where devices com- municate on a random access channel and a provision is made for retransmissions). Our main contribution is a novel character- ization of the optimal throughput for the case of uncoordinated transmission and a strategy for achieving this throughput that relies on overlapping transmissions and joint decoding. Simula- tions for a noise-limited cellular network show that the optimal strategy provides a factor of four improvement in throughput compared with slotted ALOHA. We apply our framework to eval- uate more general system-level designs that account for overhead signaling. We demonstrate that, for small payload sizes relevant for machine-to-machine (M2M) communications (200 bits or less), a one-stage strategy, where identity and data are transmitted opti- mally over the random access channel, can support at least twice the number of devices compared with a conventional strategy, where identity is established over an initial random-access stage and data transmission is scheduled." ] }
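The random-access congestion that motivates the work above can be seen in a toy slotted-ALOHA-style simulation (device count and transmit probability are illustrative; a slot succeeds only when exactly one device transmits):

```python
import numpy as np

rng = np.random.RandomState(0)
n_dev, p, n_slots = 50, 1.0 / 50, 200_000

# Each device transmits in each slot with probability p; a slot
# succeeds only if exactly one device transmits.  Collisions are the
# congestion that coded multiple-access schemes try to avoid.
tx = rng.rand(n_slots, n_dev) < p
throughput = (tx.sum(axis=1) == 1).mean()

print(round(throughput, 2))  # ~0.37, near the 1/e slotted-ALOHA limit
```

Raising p beyond 1/n_dev only increases collisions, which is why uncoordinated random access scales poorly with massive device counts.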
1507.02761
1850955129
Future machine-to-machine (M2M) communications need to support a massive number of devices communicating with each other with little or no human intervention. Random access techniques were originally proposed to enable M2M multiple access, but suffer from severe congestion and access delay in an M2M system with a large number of devices. In this paper, we propose a novel multiple access scheme for M2M communications based on the capacity-approaching analog fountain code to efficiently minimize the access delay and satisfy the delay requirement for each device. This is achieved by allowing M2M devices to transmit at the same time on the same channel in an optimal probabilistic manner based on their individual delay requirements. Simulation results show that the proposed scheme achieves a near optimal rate performance and at the same time guarantees the delay requirements of the devices. We further propose a simple random access strategy and characterize the required overhead. Simulation results show that the proposed approach significantly outperforms the existing random access schemes currently used in long term evolution advanced (LTE-A) standard in terms of the access delay.
Recently, a systematic framework has been developed in @cite_22 @cite_30 to understand the fundamental limits of M2M communications in terms of power efficiency and throughput. However, these works did not provide a systematic approach for developing an efficient communication protocol that approaches these limits. Here, we consider a realistic model for M2M communications that supports both regular and random traffic with different delay and service requirements. We develop a practical transmission scheme for M2M communications based on the recently proposed analog fountain codes (AFCs) @cite_16 to enable a massive number of devices to communicate with a common base station (BS) while satisfying the QoS requirements of all devices. We further show that the proposed scheme closely approaches the fundamental limits of M2M communications in terms of throughput while satisfying the delay requirements of all devices. The main contributions of this paper are summarized next.
{ "cite_N": [ "@cite_30", "@cite_16", "@cite_22" ], "mid": [ "1994707489", "1981247656", "2962894043" ], "abstract": [ "The growing popularity of Machine-to-Machine (M2M) communications in cellular networks is driving the need to optimize networks based on the characteristics of M2M, which are significantly different from the requirements that current networks are designed to meet. First, M2M requires large number of short sessions as opposed to small number of long lived sessions required by the human generated traffic. Second, M2M constitutes a number of battery operated devices that are static in locations such as basements and tunnels, and need to transmit at elevated powers compared to the traditional devices. Third, replacing or recharging batteries of such devices may not be feasible. All these differences highlight the importance of a systematic framework to study the power and energy optimal system design in the regime of interest for M2M, which is the main focus of this paper. For a variety of coordinated and uncoordinated transmission strategies, we derive results for the optimal transmit power, energy per bit, and the maximum load supported by the base station, leading to the following design guidelines: (i) frequency division multiple access (FDMA), including equal bandwidth allocation, is sum-power optimal in the asymptotically low spectral efficiency regime, (ii) while FDMA is the best practical strategy overall, uncoordinated code division multiple access (CDMA) is almost as good when the base station is lightly loaded, (iii) the value of optimization within FDMA is not significant in the regime of interest for M2M.", "In this paper, we propose a capacity-approaching analog fountain code (AFC) for wireless channels. In AFC, the number of generated coded symbols is potentially limitless. 
In contrast to the conventional binary rateless codes, each coded symbol in AFC is a real-valued symbol, generated as a weighted sum of d randomly selected information bits, where d and the weight coefficients are randomly selected from predefined probability mass functions. The coded symbols are then directly transmitted through wireless channels. We analyze the error probability of AFC and design the weight set to minimize the error probability. Simulation results show that AFC achieves the capacity of the Gaussian channel in a wide range of signal to noise ratio (SNR).", "For wireless systems in which randomly arriving devices attempt to transmit a fixed payload to a central receiver, we develop a framework to characterize the system throughput as a function of arrival rate and per-device data rate. The frame- work considers both coordinated transmission (where devices are scheduled) and uncoordinated transmission (where devices com- municate on a random access channel and a provision is made for retransmissions). Our main contribution is a novel character- ization of the optimal throughput for the case of uncoordinated transmission and a strategy for achieving this throughput that relies on overlapping transmissions and joint decoding. Simula- tions for a noise-limited cellular network show that the optimal strategy provides a factor of four improvement in throughput compared with slotted ALOHA. We apply our framework to eval- uate more general system-level designs that account for overhead signaling. We demonstrate that, for small payload sizes relevant for machine-to-machine (M2M) communications (200 bits or less), a one-stage strategy, where identity and data are transmitted opti- mally over the random access channel, can support at least twice the number of devices compared with a conventional strategy, where identity is established over an initial random-access stage and data transmission is scheduled." ] }
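The AFC encoding rule described in @cite_16 , where each real-valued coded symbol is a weighted sum of d randomly selected information bits, can be sketched as follows (the degree d and the weight set here are illustrative placeholders, not the optimized values from the paper):

```python
import numpy as np

def afc_encode(bits, n_coded, d, weights, rng):
    """Each real-valued coded symbol is a weighted sum of d randomly
    selected information bits (the rateless AFC encoding rule)."""
    symbols = np.empty(n_coded)
    for i in range(n_coded):
        idx = rng.choice(len(bits), size=d, replace=False)
        w = rng.choice(weights, size=d)
        symbols[i] = np.dot(w, bits[idx])
    return symbols

rng = np.random.RandomState(1)
bits = rng.randint(0, 2, size=64)             # information bits
weights = np.array([1.0, 0.5, 0.25])          # illustrative weight set
coded = afc_encode(bits, n_coded=100, d=8, weights=weights, rng=rng)

print(coded.shape)  # (100,): rateless, keep generating symbols as needed
```

The code is rateless in the sense that n_coded is open-ended; the cited paper additionally designs the weight set to minimize decoding error.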
1507.02379
830575572
Convolutional Neural Network (CNN) has been successful in image recognition tasks, and recent works shed light on how CNN separates different classes with the learned inter-class knowledge through visualization. In this work, we instead visualize the intra-class knowledge inside CNN to better understand how an object class is represented in the fully-connected layers. To invert the intra-class knowledge into more interpretable images, we propose a non-parametric patch prior upon previous CNN visualization models. With it, we show how different "styles" of templates for an object class are organized by CNN in terms of location and content, and represented in a hierarchical and ensemble way. Moreover, such intra-class knowledge can be used in many interesting applications, e.g. style-based image retrieval and style-based object completion.
Below are some recent findings about fully-connected layers. (1) Dropout techniques: @cite_1 considers the dropout technique as an approximation to learning ensemble models, and @cite_4 proves its equivalence to a regularization. (2) Binary codes: @cite_5 discovers that the binary mask of the features from fc @math layers is good enough for classification. (3) Pool5: @math features contain object-part information that is both spatial and semantic; we can combine them by selecting sub-matrices in @math . (4) Image retrieval from fc @math : fc @math is used as a semantic space.
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_4" ], "mid": [ "2160921898", "2618530766", "" ], "abstract": [ "In the last two years, convolutional neural networks (CNNs) have achieved an impressive suite of results on standard recognition datasets and tasks. CNN-based features seem poised to quickly replace engineered representations, such as SIFT and HOG. However, compared to SIFT and HOG, we understand much less about the nature of the features learned by large CNNs. In this paper, we experimentally probe several aspects of CNN feature learning in an attempt to help practitioners gain useful, evidence-backed intuitions about how to apply CNNs to computer vision problems.", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 , respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.", "" ] }
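The dropout-as-ensemble/regularization view mentioned in the related-work passage above rests on a simple masked forward pass; a minimal inverted-dropout sketch in numpy (rate and sizes are illustrative choices, not from any cited paper):

```python
import numpy as np

def dropout_forward(h, p_drop, rng, train=True):
    """Inverted dropout: zero each unit with prob p_drop at train time
    and rescale survivors by 1/(1 - p_drop), so the expected activation
    is unchanged and no rescaling is needed at test time."""
    if not train:
        return h
    mask = (rng.rand(*h.shape) >= p_drop) / (1.0 - p_drop)
    return h * mask

rng = np.random.RandomState(0)
h = np.ones((10000, 1))
out = dropout_forward(h, p_drop=0.5, rng=rng)

print(round(float(out.mean()), 1))  # ~1.0: the expectation is preserved
```

Each random mask corresponds to one member of the implicit ensemble; averaging over masks is what the regularization analyses cited above formalize.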
1507.02379
830575572
Convolutional Neural Network (CNN) has been successful in image recognition tasks, and recent works shed light on how CNN separates different classes with the learned inter-class knowledge through visualization. In this work, we instead visualize the intra-class knowledge inside CNN to better understand how an object class is represented in the fully-connected layers. To invert the intra-class knowledge into more interpretable images, we propose a non-parametric patch prior upon previous CNN visualization models. With it, we show how different "styles" of templates for an object class are organized by CNN in terms of location and content, and represented in a hierarchical and ensemble way. Moreover, such intra-class knowledge can be used in many interesting applications, e.g. style-based image retrieval and style-based object completion.
Unlike features in convolutional layers, from which we can recover most of the original image with parametric @cite_8 @cite_0 or non-parametric methods, features from fully-connected layers are hard to invert. As shown in @cite_0 , the location and style information of the object parts is lost. Another work @cite_3 inverts the class-specific feature from the fc @math layer, which is zero everywhere except at the target class. The output image from numerical optimization is a composite of various object templates. Both of these works follow the same model framework (compared in Sec. ), which can be solved efficiently with gradient descent.
{ "cite_N": [ "@cite_0", "@cite_3", "@cite_8" ], "mid": [ "1915485278", "2962851944", "1849277567" ], "abstract": [ "Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited. In this paper we conduct a direct analysis of the visual information contained in representations by asking the following question: given an encoding of an image, to which extent is it possible to reconstruct the image itself? To answer this question we contribute a general framework to invert representations. We show that this method can invert representations such as HOG more accurately than recent alternatives while being applicable to CNNs too. We then use this technique to study the inverse of recent state-of-the-art CNN image representations for the first time. Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance.", "This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13].", "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark [18]. 
However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets." ] }
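The class-visualization scheme of @cite_3 (gradient ascent on the input to maximize a class score, with an L2 regularizer) can be illustrated on a toy linear "network"; this numpy sketch shows only the optimization scheme, with made-up sizes, not the CNN setting of the paper:

```python
import numpy as np

rng = np.random.RandomState(0)
W = rng.randn(5, 16)  # toy linear "net": 16-dim input -> 5 class scores

# Gradient ascent on the input x to maximise class c's score W[c] @ x,
# with an L2 penalty keeping x bounded.
c, lam, lr = 2, 0.1, 0.5
x = np.zeros(16)
for _ in range(200):
    grad = W[c] - 2.0 * lam * x      # d/dx (W[c] @ x - lam * ||x||^2)
    x += lr * grad

# The regularised objective has the closed-form optimum W[c] / (2 * lam).
print(np.allclose(x, W[c] / (2.0 * lam), atol=1e-4))  # True
```

In the CNN case the objective is non-linear and has no closed form, so the output image is the "composite of object templates" the passage describes.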
1507.02379
830575572
Convolutional Neural Network (CNN) has been successful in image recognition tasks, and recent works shed light on how CNN separates different classes with the learned inter-class knowledge through visualization. In this work, we instead visualize the intra-class knowledge inside CNN to better understand how an object class is represented in the fully-connected layers. To invert the intra-class knowledge into more interpretable images, we propose a non-parametric patch prior upon previous CNN visualization models. With it, we show how different "styles" of templates for an object class are organized by CNN in terms of location and content, and represented in a hierarchical and ensemble way. Moreover, such intra-class knowledge can be used in many interesting applications, e.g. style-based image retrieval and style-based object completion.
Understanding image collections is a relatively unexplored task, although there is growing interest in this area. Several methods attempt to represent the continuous variation in an image class using sub-spaces or manifolds. Unlike this work, we investigate discrete, nameable transformations, like crinkling, rather than working in a hard-to-interpret parameter space. Photo collections have also been mined for storylines as well as spatial and temporal trends, and systems have been proposed for more general knowledge discovery from big visual data. @cite_9 focuses on physical state transformations, and in addition to discovering states it also studies state pairs that define a transformation.
{ "cite_N": [ "@cite_9" ], "mid": [ "1948251820" ], "abstract": [ "Objects in visual scenes come in a rich variety of transformed states. A few classes of transformation have been heavily studied in computer vision: mostly simple, parametric changes in color and geometry. However, transformations in the physical world occur in many more flavors, and they come with semantic meaning: e.g., bending, folding, aging, etc. The transformations an object can undergo tell us about its physical and functional properties. In this paper, we introduce a dataset of objects, scenes, and materials, each of which is found in a variety of transformed states. Given a novel collection of images, we show how to explain the collection in terms of the states and transformations it depicts. Our system works by generalizing across object classes: states and transformations learned on one set of objects are used to interpret the image collection for an entirely new object class." ] }
1507.02380
778170084
This paper presents a structured ordinal measure method for video-based face recognition that simultaneously learns ordinal filters and structured ordinal features. The problem is posed as a non-convex integer program problem that includes two parts. The first part learns stable ordinal filters to project video data into a large-margin ordinal space. The second seeks self-correcting and discrete codes by balancing the projected data and a rank-one ordinal matrix in a structured low-rank way. Unsupervised and supervised structures are considered for the ordinal matrix. In addition, as a complement to hierarchical structures, deep feature representations are integrated into our method to enhance coding stability. An alternating minimization method is employed to handle the discrete and low-rank constraints, yielding high-quality codes that capture prior structures well. Experimental results on three commonly used face video databases show that our method with a simple voting classifier can achieve state-of-the-art recognition rates using fewer features and samples.
In biometrics, binary feature representation methods often generate binary codes by directly filtering local image patches. Local binary patterns (LBP) and ordinal measures are two representative binary features, and many variations of both exist @cite_12 @cite_6 . The definition and properties of OM in the context of biometrics can be found in @cite_23 .
{ "cite_N": [ "@cite_23", "@cite_6", "@cite_12" ], "mid": [ "2149999708", "1975056068", "2036070282" ], "abstract": [ "Images of a human iris contain rich texture information useful for identity authentication. A key and still open issue in iris recognition is how best to represent such textural information using a compact set of features (iris features). In this paper, we propose using ordinal measures for iris feature representation with the objective of characterizing qualitative relationships between iris regions rather than precise measurements of iris image structures. Such a representation may lose some image-specific information, but it achieves a good trade-off between distinctiveness and robustness. We show that ordinal measures are intrinsic features of iris patterns and largely invariant to illumination changes. Moreover, compactness and low computational complexity of ordinal measures enable highly efficient iris recognition. Ordinal measures are a general concept useful for image analysis and many variants can be derived for ordinal feature extraction. In this paper, we develop multilobe differential filters to compute ordinal measures with flexible intralobe and interlobe parameters such as location, scale, orientation, and distance. Experimental results on three public iris image databases demonstrate the effectiveness of the proposed ordinal feature models.", "Local feature descriptor is an important module for face recognition and those like Gabor and local binary patterns (LBP) have proven effective face descriptors. Traditionally, the form of such local descriptors is predefined in a handcrafted way. In this paper, we propose a method to learn a discriminant face descriptor (DFD) in a data-driven way. The idea is to learn the most discriminant local features that minimize the difference of the features between images of the same person and maximize that between images from different people. 
In particular, we propose to enhance the discriminative ability of face representation in three aspects. First, the discriminant image filters are learned. Second, the optimal neighborhood sampling strategy is soft determined. Third, the dominant patterns are statistically constructed. Discriminative learning is incorporated to extract effective and robust features. We further apply the proposed method to the heterogeneous (cross-modality) face recognition problem and learn DFD in a coupled way (coupled DFD or C-DFD) to reduce the gap between features of heterogeneous face images to improve the performance of this challenging problem. Extensive experiments on FERET, CAS-PEAL-R1, LFW, and HFB face databases validate the effectiveness of the proposed DFD learning on both homogeneous and heterogeneous face recognition problems. The DFD improves POEM and LQP by about 4.5 percent on LFW database and the C-DFD enhances the heterogeneous face recognition performance of LBP by over 25 percent.", "Great progress has been achieved in face recognition in the last three decades. However, it is still challenging to characterize the identity related features in face images. This paper proposes a novel facial feature extraction method named Gabor ordinal measures (GOM), which integrates the distinctiveness of Gabor features and the robustness of ordinal measures as a promising solution to jointly handle inter-person similarity and intra-person variations in face images. In the proposal, different kinds of ordinal measures are derived from magnitude, phase, real, and imaginary components of Gabor images, respectively, and then are jointly encoded as visual primitives in local regions. The statistical distributions of these visual primitives in face image blocks are concatenated into a feature vector and linear discriminant analysis is further used to obtain a compact and discriminative feature representation. 
Finally, a two-stage cascade learning method and a greedy block selection method are used to train a strong classifier for face recognition. Extensive experiments on publicly available face image databases, such as FERET, AR, and large scale FRGC v2.0, demonstrate state-of-the-art face recognition performance of GOM." ] }
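The basic LBP primitive referenced above can be sketched in a few lines of numpy: threshold each interior pixel's 8 neighbours against the centre and pack the comparison bits into a byte (this is the plain radius-1 variant, not any of the learned descriptors discussed):

```python
import numpy as np

def lbp_codes(img):
    """Plain 3x3 LBP: compare the 8 neighbours of each interior pixel
    to the centre and pack the comparison bits into a code in [0, 255]."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]   # fixed clockwise order
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

img = np.arange(25, dtype=float).reshape(5, 5)  # toy ramp "image"
codes = lbp_codes(img)
print(codes.shape)  # (3, 3): one code per interior pixel
```

On this ramp image every interior pixel gets the same code, since the brighter neighbours always lie in the same directions; histograms of such codes over local blocks form the usual LBP descriptor.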
1507.02380
778170084
This paper presents a structured ordinal measure method for video-based face recognition that simultaneously learns ordinal filters and structured ordinal features. The problem is posed as a non-convex integer program problem that includes two parts. The first part learns stable ordinal filters to project video data into a large-margin ordinal space. The second seeks self-correcting and discrete codes by balancing the projected data and a rank-one ordinal matrix in a structured low-rank way. Unsupervised and supervised structures are considered for the ordinal matrix. In addition, as a complement to hierarchical structures, deep feature representations are integrated into our method to enhance coding stability. An alternating minimization method is employed to handle the discrete and low-rank constraints, yielding high-quality codes that capture prior structures well. Experimental results on three commonly used face video databases show that our method with a simple voting classifier can achieve state-of-the-art recognition rates using fewer features and samples.
Although OM has been successfully applied to biometrics, two issues remain open. The first is the design of ordinal filters. Existing ordinal filters are often handcrafted, but handcrafted filters are too simple to represent complex human vision structures @cite_7 . In addition, to improve stability and accuracy, these filters often contain a large number of parameters based on distance, scale and location, resulting in a very large potential feature set of OM. This naturally leads to the second issue, i.e., how to select the optimal set of ordinal features. Although various feature selection methods @cite_23 @cite_15 @cite_34 have been employed to improve selection results, it is still difficult for a feature selection algorithm to select the optimal set from the over-complete set of OM.
{ "cite_N": [ "@cite_15", "@cite_34", "@cite_23", "@cite_7" ], "mid": [ "2016957240", "1970913757", "2149999708", "52553332" ], "abstract": [ "It is necessary to match heterogeneous iris images captured by different types of iris sensors with an increasing demand of interoperable identity management systems. The significant differences among multiple types of iris sensors such as optical lens and illumination wavelength determine the cross-sensor variations of iris texture patterns. Therefore it is a challenging problem to select the common feature set which is effective for all types of iris sensors. This paper proposes a novel optimization model of coupled feature selection for cross-sensor iris recognition. The objective function of our model includes two parts: the first part aims to minimize the misclassification errors; the second part is designed to achieve sparsity in coupled feature spaces based on l2,1-norm regularization. In the training stage, the proposed feature selection model can be formulated as a half-quadratic optimization problem, where an iterative algorithm is developed to obtain the solution. Experimental results on the Notre Dame Cross Sensor Iris Database and CASIA cross sensor iris database show that features selected by the proposed method perform better than those selected by conventional single-space feature selection methods such as Boosting and h regularization methods.", "Ordinal measures have been demonstrated as an effective feature representation model for iris and palmprint recognition. However, ordinal measures are a general concept of image analysis and numerous variants with different parameter settings, such as location, scale, orientation, and so on, can be derived to construct a huge feature space. This paper proposes a novel optimization formulation for ordinal feature selection with successful applications to both iris and palmprint recognition. 
The objective function of the proposed feature selection method has two parts, i.e., misclassification error of intra and interclass matching samples and weighted sparsity of ordinal feature descriptors. Therefore, the feature selection aims to achieve an accurate and sparse representation of ordinal measures. And, the optimization subjects to a number of linear inequality constraints, which require that all intra and interclass matching pairs are well separated with a large margin. Ordinal feature selection is formulated as a linear programming (LP) problem so that a solution can be efficiently obtained even on a large-scale feature pool and training database. Extensive experimental results demonstrate that the proposed LP formulation is advantageous over existing feature selection methods, such as mRMR, ReliefF, Boosting, and Lasso for biometric recognition, reporting state-of-the-art accuracy on CASIA and PolyU databases.", "Images of a human iris contain rich texture information useful for identity authentication. A key and still open issue in iris recognition is how best to represent such textural information using a compact set of features (iris features). In this paper, we propose using ordinal measures for iris feature representation with the objective of characterizing qualitative relationships between iris regions rather than precise measurements of iris image structures. Such a representation may lose some image-specific information, but it achieves a good trade-off between distinctiveness and robustness. We show that ordinal measures are intrinsic features of iris patterns and largely invariant to illumination changes. Moreover, compactness and low computational complexity of ordinal measures enable highly efficient iris recognition. Ordinal measures are a general concept useful for image analysis and many variants can be derived for ordinal feature extraction. 
In this paper, we develop multilobe differential filters to compute ordinal measures with flexible intralobe and interlobe parameters such as location, scale, orientation, and distance. Experimental results on three public iris image databases demonstrate the effectiveness of the proposed ordinal feature models.", "In this paper, we propose a novel appearance-based representation, called Structured Ordinal Feature (SOF). SOF is a binary string encoded by combining eight ordinal blocks in a circle symmetrically. SOF is invariant to linear transformations on images and is flexible enough to represent different local structures of different complexity. We further extend SOF to Multi-scale Structured Ordinal Feature (MSOF) by concatenating binary strings of multi-scale SOFs at a fix position. In this way, MSOF encodes not only microstructure but also macrostructure of image patterns, thus provides a more powerful image representation. We also present an efficient algorithm for computing MSOF using integral images. Based on MSOF, statistical analysis and learning are performed to select most effective features and construct classifiers. The proposed method is evaluated with face recognition experiments, in which we achieve a high rank-1 recognition rate of 98.24 on FERET database." ] }
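The ordinal-measure papers cited above all rest on the same primitive: a qualitative brightness comparison between two filtered image regions, which keeps the ordering while discarding exact intensities. As a minimal illustrative sketch (not the cited papers' implementation; the region layout here is invented for demonstration), a dilobe ordinal code can be computed from two patch means:

```python
import numpy as np

def ordinal_code(image, region_a, region_b):
    """Dilobe ordinal measure: 1 if region A is brighter than region B, else 0.

    region_a/region_b are (row_slice, col_slice) pairs. Only the qualitative
    brightness ordering is kept, which is what makes the code largely
    invariant to monotonic illumination changes.
    """
    mean_a = image[region_a].mean()
    mean_b = image[region_b].mean()
    return 1 if mean_a > mean_b else 0

# Toy 4x4 patch: the left half is brighter than the right half.
img = np.array([[9, 9, 1, 1],
                [9, 9, 1, 1],
                [9, 9, 1, 1],
                [9, 9, 1, 1]], dtype=float)

left = (slice(0, 4), slice(0, 2))
right = (slice(0, 4), slice(2, 4))
print(ordinal_code(img, left, right))       # -> 1 (left brighter)

# A global brightness shift leaves the ordinal code unchanged.
print(ordinal_code(img + 50, left, right))  # -> 1
```

In the cited work, many such comparisons at different locations, scales, and orientations are concatenated into a long binary feature vector, and feature selection picks a sparse, discriminative subset.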
1507.02380
778170084
This paper presents a structured ordinal measure method for video-based face recognition that simultaneously learns ordinal filters and structured ordinal features. The problem is posed as a non-convex integer programming problem with two parts. The first part learns stable ordinal filters to project video data into a large-margin ordinal space. The second part seeks self-correcting and discrete codes by balancing the projected data and a rank-one ordinal matrix in a structured low-rank way. Unsupervised and supervised structures are considered for the ordinal matrix. In addition, as a complement to hierarchical structures, deep feature representations are integrated into our method to enhance coding stability. An alternating minimization method is employed to handle the discrete and low-rank constraints, yielding high-quality codes that capture prior structures well. Experimental results on three commonly used face video databases show that our method with a simple voting classifier can achieve state-of-the-art recognition rates using fewer features and samples.

Recently, data-driven binary feature methods, which learn local image filters from data, have drawn much attention. @cite_42 utilized unsupervised methods (random-projection trees and PCA trees) to learn binary representations. @cite_6 proposed an LBP-like discriminant face descriptor (DFD) by combining image filtering, pattern sampling and encoding. @cite_33 combined cascaded PCA, binary code learning and block-wise histograms to learn a deep network. @cite_37 proposed a compact binary face descriptor (CBFD) to remove redundant information from face images. Although these methods indeed boost recognition performance on some challenging databases, their learned features are often high dimensional: for example, the histogram feature vectors of DFD and CBFD have 50,176 and 32,000 dimensions, respectively. Such high-dimensional, dense representations make these data-driven methods ill-suited to VFR problems.
{ "cite_N": [ "@cite_37", "@cite_42", "@cite_33", "@cite_6" ], "mid": [ "2047186200", "1982048725", "1616262590", "1975056068" ], "abstract": [ "Binary feature descriptors such as local binary patterns (LBP) and its variations have been widely used in many face recognition systems due to their excellent robustness and strong discriminative power. However, most existing binary face descriptors are hand-crafted, which require strong prior knowledge to engineer them by hand. In this paper, we propose a compact binary face descriptor (CBFD) feature learning method for face representation and recognition. Given each face image, we first extract pixel difference vectors (PDVs) in local patches by computing the difference between each pixel and its neighboring pixels. Then, we learn a feature mapping to project these pixel difference vectors into low-dimensional binary vectors in an unsupervised manner, where 1) the variance of all binary codes in the training set is maximized, 2) the loss between the original real-valued codes and the learned binary codes is minimized, and 3) binary codes evenly distribute at each learned bin, so that the redundancy information in PDVs is removed and compact binary codes are obtained. Lastly, we cluster and pool these binary codes into a histogram feature as the final representation for each face image. Moreover, we propose a coupled CBFD (C-CBFD) method by reducing the modality gap of heterogeneous faces at the feature level to make our method applicable to heterogeneous face recognition. Extensive experimental results on five widely used face datasets show that our methods outperform state-of-the-art face descriptors.", "We present a novel approach to address the representation issue and the matching issue in face recognition (verification). Firstly, our approach encodes the micro-structures of the face by a new learning-based encoding method. 
Unlike many previous manually designed encoding methods (e.g., LBP or SIFT), we use unsupervised learning techniques to learn an encoder from the training examples, which can automatically achieve very good tradeoff between discriminative power and invariance. Then we apply PCA to get a compact face descriptor. We find that a simple normalization mechanism after PCA can further improve the discriminative ability of the descriptor. The resulting face representation, learning-based (LE) descriptor, is compact, highly discriminative, and easy-to-extract. To handle the large pose variation in real-life scenarios, we propose a pose-adaptive matching method that uses pose-specific classifiers to deal with different pose combinations (e.g., frontal v.s. frontal, frontal v.s. left) of the matching face pair. Our approach is comparable with the state-of-the-art methods on the Labeled Face in Wild (LFW) benchmark (we achieved 84.45 recognition rate), while maintaining excellent compactness, simplicity, and generalization ability across different datasets.", "In this paper, we propose a very simple deep learning network for image classification that is based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, the PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus called the PCA network (PCANet) and can be extremely easily and efficiently designed and learned. For comparison and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned from linear discriminant analysis. 
We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for hand-written digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with the state-of-the-art features either prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)]. Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition.", "Local feature descriptor is an important module for face recognition and those like Gabor and local binary patterns (LBP) have proven effective face descriptors. Traditionally, the form of such local descriptors is predefined in a handcrafted way. In this paper, we propose a method to learn a discriminant face descriptor (DFD) in a data-driven way. The idea is to learn the most discriminant local features that minimize the difference of the features between images of the same person and maximize that between images from different people. In particular, we propose to enhance the discriminative ability of face representation in three aspects. First, the discriminant image filters are learned. Second, the optimal neighborhood sampling strategy is soft determined. Third, the dominant patterns are statistically constructed. Discriminative learning is incorporated to extract effective and robust features. 
We further apply the proposed method to the heterogeneous (cross-modality) face recognition problem and learn DFD in a coupled way (coupled DFD or C-DFD) to reduce the gap between features of heterogeneous face images to improve the performance of this challenging problem. Extensive experiments on FERET, CAS-PEAL-R1, LFW, and HFB face databases validate the effectiveness of the proposed DFD learning on both homogeneous and heterogeneous face recognition problems. The DFD improves POEM and LQP by about 4.5 percent on LFW database and the C-DFD enhances the heterogeneous face recognition performance of LBP by over 25 percent." ] }
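The descriptors discussed above (DFD, CBFD, PCANet) all finish with the same pooling step: per-pixel binary codes are binned into block-wise histograms and concatenated. A small sketch (generic, not any one cited pipeline) makes clear why the resulting vectors grow so large: the descriptor length is `n_blocks * 2**n_bits`.

```python
import numpy as np

def blockwise_histogram(codes, n_blocks, n_bits):
    """Pool per-pixel binary codes into a concatenated block-wise histogram.

    codes: 1-D integer array of per-pixel code values in [0, 2**n_bits).
    The final descriptor has n_blocks * 2**n_bits entries, which is why
    histogram features like those of DFD and CBFD become so high dimensional.
    """
    blocks = np.array_split(codes, n_blocks)
    hists = [np.bincount(b, minlength=2 ** n_bits) for b in blocks]
    return np.concatenate(hists)

# 16 pixels of 3-bit codes pooled over 2 blocks -> a 2 * 2**3 = 16-D descriptor.
codes = np.array([0, 1, 1, 2, 7, 7, 3, 0, 4, 4, 4, 5, 6, 6, 0, 1])
desc = blockwise_histogram(codes, n_blocks=2, n_bits=3)
print(desc.shape)  # -> (16,)
```

With realistic settings (hundreds of blocks, codes of 8 or more bits) the same formula yields the tens of thousands of dimensions quoted for DFD and CBFD.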
1507.02380
778170084
This paper presents a structured ordinal measure method for video-based face recognition that simultaneously learns ordinal filters and structured ordinal features. The problem is posed as a non-convex integer programming problem with two parts. The first part learns stable ordinal filters to project video data into a large-margin ordinal space. The second part seeks self-correcting and discrete codes by balancing the projected data and a rank-one ordinal matrix in a structured low-rank way. Unsupervised and supervised structures are considered for the ordinal matrix. In addition, as a complement to hierarchical structures, deep feature representations are integrated into our method to enhance coding stability. An alternating minimization method is employed to handle the discrete and low-rank constraints, yielding high-quality codes that capture prior structures well. Experimental results on three commonly used face video databases show that our method with a simple voting classifier can achieve state-of-the-art recognition rates using fewer features and samples.
Learning binary codes ('hashing') has been a key step in facilitating large-scale image retrieval. In image retrieval, 'hashing' refers to learning compact binary codes that can be compared via Hamming distance. Similarity-sensitive and locality-sensitive hashing algorithms @cite_17 @cite_22 , graph-based hashing @cite_14 , semi-supervised learning @cite_21 , support vector machines @cite_9 @cite_29 , Riemannian manifolds @cite_26 , decision trees @cite_1 and deep learning @cite_20 @cite_5 have all been studied as ways to map high-dimensional data into a low-dimensional Hamming space. The authors of @cite_14 @cite_29 argued that the degraded performance of hashing methods is due to the optimization procedures used to obtain discrete binary codes, and therefore enforced binary constraints to learn discrete codes directly. A brief review of hashing methods for image search can be found in @cite_20 @cite_38 .
{ "cite_N": [ "@cite_38", "@cite_14", "@cite_26", "@cite_22", "@cite_9", "@cite_21", "@cite_29", "@cite_1", "@cite_5", "@cite_20", "@cite_17" ], "mid": [ "1870428314", "2142881874", "1959016151", "2171790913", "1989902617", "2044195942", "1910300841", "", "2293824885", "1468978781", "" ], "abstract": [ "Similarity search (nearest neighbor search) is a problem of pursuing the data items whose distances to a query item are the smallest from a large database. Various methods have been developed to address this problem, and recently a lot of efforts have been devoted to approximate search. In this paper, we present a survey on one of the main solutions, hashing, which has been widely studied since the pioneering work locality sensitive hashing. We divide the hashing algorithms two main categories: locality sensitive hashing, which designs hash functions without exploring the data distribution and learning to hash, which learns hash functions according the data distribution, and review them from various aspects, including hash function design and distance measure and search scheme in the hash coding space.", "Hashing has emerged as a popular technique for fast nearest neighbor search in gigantic databases. In particular, learning based hashing has received considerable attention due to its appealing storage and search efficiency. However, the performance of most unsupervised learning based hashing methods deteriorates rapidly as the hash code length increases. We argue that the degraded performance is due to inferior optimization procedures used to achieve discrete binary codes. This paper presents a graph-based unsupervised hashing model to preserve the neighborhood structure of massive data in a discrete code space. We cast the graph hashing problem into a discrete optimization framework which directly learns the binary codes. 
A tractable alternating maximization algorithm is then proposed to explicitly deal with the discrete constraints, yielding high-quality codes to well capture the local neighborhoods. Extensive experiments performed on four large datasets with up to one million samples show that our discrete optimization based graph hashing method obtains superior search accuracy over state-of-the-art un-supervised hashing methods, especially for longer codes.", "Retrieving videos of a specific person given his her face image as query becomes more and more appealing for applications like smart movie fast-forwards and suspect searching. It also forms an interesting but challenging computer vision task, as the visual data to match, i.e., still image and video clip are usually represented quite differently. Typically, face image is represented as point (i.e., vector) in Euclidean space, while video clip is seemingly modeled as a point (e.g., covariance matrix) on some particular Riemannian manifold in the light of its recent promising success. It thus incurs a new hashing-based retrieval problem of matching two heterogeneous representations, respectively in Euclidean space and Riemannian manifold. This work makes the first attempt to embed the two heterogeneous spaces into a common discriminant Hamming space. Specifically, we propose Hashing across Euclidean space and Riemannian manifold (HER) by deriving a unified framework to firstly embed the two spaces into corresponding reproducing kernel Hilbert spaces, and then iteratively optimize the intra- and inter-space Hamming distances in a max-margin framework to learn the hash functions for the two spaces. Extensive experiments demonstrate the impressive superiority of our method over the state-of-the-art competitive hash learning methods.", "Fast retrieval methods are critical for large-scale and data-driven vision applications. 
Recent work has explored ways to embed high-dimensional features or complex distance functions into a low-dimensional Hamming space where items can be efficiently searched. However, existing methods do not apply for high-dimensional kernelized data when the underlying feature embedding for the kernel is unknown. We show how to generalize locality-sensitive hashing to accommodate arbitrary kernel functions, making it possible to preserve the algorithm's sub-linear time similarity search guarantees for a wide class of useful similarity functions. Since a number of successful image-based kernels have unknown or incomputable embeddings, this is especially valuable for image retrieval tasks. We validate our technique on several large-scale datasets, and show that it enables accurate and fast performance for example-based object classification, feature matching, and content-based retrieval.", "This paper presents a novel algorithm which uses compact hash bits to greatly improve the efficiency of non-linear kernel SVM in very large scale visual classification problems. Our key idea is to represent each sample with compact hash bits, over which an inner product is defined to serve as the surrogate of the original nonlinear kernels. Then the problem of solving the nonlinear SVM can be transformed into solving a linear SVM over the hash bits. The proposed Hash-SVM enjoys dramatic storage cost reduction owing to the compact binary representation, as well as a (sub-)linear training complexity via linear SVM. As a critical component of Hash-SVM, we propose a novel hashing scheme for arbitrary non-linear kernels via random subspace projection in reproducing kernel Hilbert space. Our comprehensive analysis reveals a well behaved theoretic bound of the deviation between the proposed hashing-based kernel approximation and the original kernel function. We also derive requirements on the hash bits for achieving a satisfactory accuracy level. 
Several experiments on large-scale visual classification benchmarks are conducted, including one with over 1 million images. The results show that Hash-SVM greatly reduces the computational complexity (more than ten times faster in many cases) while keeping comparable accuracies.", "Large scale image search has recently attracted considerable attention due to easy availability of huge amounts of data. Several hashing methods have been proposed to allow approximate but highly efficient search. Unsupervised hashing methods show good performance with metric distances but, in image search, semantic similarity is usually given in terms of labeled pairs of images. There exist supervised hashing methods that can handle such semantic similarity but they are prone to overfitting when labeled data is small or noisy. Moreover, these methods are usually very slow to train. In this work, we propose a semi-supervised hashing method that is formulated as minimizing empirical error on the labeled data while maximizing variance and independence of hash bits over the labeled and unlabeled data. The proposed method can handle both metric as well as semantic similarity. The experimental results on two large datasets (up to one million samples) demonstrate its superior performance over state-of-the-art supervised and unsupervised methods.", "Recently, learning based hashing techniques have attracted broad research interests because they can support efficient storage and retrieval for high-dimensional data such as images, videos, documents, etc. However, a major difficulty of learning to hash lies in handling the discrete constraints imposed on the pursued hash codes, which typically makes hash optimizations very challenging (NP-hard in general). In this work, we propose a new supervised hashing framework, where the learning objective is to generate the optimal binary hash codes for linear classification. 
By introducing an auxiliary variable, we reformulate the objective such that it can be solved substantially efficiently by employing a regularization algorithm. One of the key steps in this algorithm is to solve a regularization sub-problem associated with the NP-hard binary optimization. We show that the sub-problem admits an analytical solution via cyclic coordinate descent. As such, a high-quality discrete solution can eventually be obtained in an efficient computing manner, therefore enabling to tackle massive datasets. We evaluate the proposed approach, dubbed Supervised Discrete Hashing (SDH), on four large image datasets and demonstrate its superiority to the state-of-the-art hashing methods in large-scale image retrieval.", "", "Hashing is a popular approximate nearest neighbor search approach for large-scale image retrieval. Supervised hashing, which incorporates similarity dissimilarity information on entity pairs to improve the quality of hashing function learning, has recently received increasing attention. However, in the existing supervised hashing methods for images, an input image is usually encoded by a vector of handcrafted visual features. Such hand-crafted feature vectors do not necessarily preserve the accurate semantic similarities of images pairs, which may often degrade the performance of hashing function learning. In this paper, we propose a supervised hashing method for image retrieval, in which we automatically learn a good image representation tailored to hashing as well as a set of hash functions. The proposed method has two stages. In the first stage, given the pairwise similarity matrix S over training images, we propose a scalable coordinate descent method to decompose S into a product of HHT where H is a matrix with each of its rows being the approximate hash code associated to a training image. 
In the second stage, we propose to simultaneously learn a good feature representation for the input images as well as a set of hash functions, via a deep convolutional network tailored to the learned hash codes in H and optionally the discrete class labels of the images. Extensive empirical evaluations on three benchmark datasets with different kinds of images show that the proposed method has superior performance gains over several state-of-the-art supervised and unsupervised hashing methods.", "Algorithms to rapidly search massive image or video collections are critical for many vision applications, including visual search, content-based retrieval, and non-parametric models for object recognition. Recent work shows that learned binary projections are a powerful way to index large collections according to their content. The basic idea is to formulate the projections so as to approximately preserve a given similarity function of interest. Having done so, one can then search the data efficiently using hash tables, or by exploring the Hamming ball volume around a novel query. Both enable sub-linear time retrieval with respect to the database size. Further, depending on the design of the projections, in some cases it is possible to bound the number of database examples that must be searched in order to achieve a given level of accuracy.", "" ] }
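The locality-sensitive hashing family surveyed in the paragraph above can be illustrated with the classic random-hyperplane scheme: each bit is the sign of a random projection, so similar vectors receive codes that agree on most bits. This is a generic sketch of that idea, not any specific cited system:

```python
import numpy as np

def hyperplane_lsh(X, n_bits, seed=0):
    """Random-hyperplane LSH: one bit per random projection sign."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((X.shape[1], n_bits))
    return (X @ planes > 0).astype(np.uint8)  # (n_samples, n_bits) in {0, 1}

def hamming(a, b):
    """Hamming distance between two binary code vectors."""
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(1)
base = rng.standard_normal(64)
near = base + 0.01 * rng.standard_normal(64)  # near-duplicate of base
far = rng.standard_normal(64)                 # unrelated vector

codes = hyperplane_lsh(np.stack([base, near, far]), n_bits=32)
# Near-duplicates agree on far more bits than unrelated vectors do.
print(hamming(codes[0], codes[1]), hamming(codes[0], codes[2]))
```

The learning-based methods cited above replace the random hyperplanes with data-dependent projections (graphs, SVMs, deep networks), and the discrete-optimization line of work additionally enforces the binary constraint during training rather than rounding afterwards.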
1507.02380
778170084
This paper presents a structured ordinal measure method for video-based face recognition that simultaneously learns ordinal filters and structured ordinal features. The problem is posed as a non-convex integer programming problem with two parts. The first part learns stable ordinal filters to project video data into a large-margin ordinal space. The second part seeks self-correcting and discrete codes by balancing the projected data and a rank-one ordinal matrix in a structured low-rank way. Unsupervised and supervised structures are considered for the ordinal matrix. In addition, as a complement to hierarchical structures, deep feature representations are integrated into our method to enhance coding stability. An alternating minimization method is employed to handle the discrete and low-rank constraints, yielding high-quality codes that capture prior structures well. Experimental results on three commonly used face video databases show that our method with a simple voting classifier can achieve state-of-the-art recognition rates using fewer features and samples.
These hashing methods are often used for image search and retrieval, but they may not achieve the highest accuracy on VFR problems. For example, the constraints in @cite_14 maximize the information carried by each binary code over all samples in a training set; however, adjacent face samples in a video clip often have nearly identical appearance, so these samples can share similar binary codes. In addition, to the best of our knowledge, no existing hashing method addresses image-set problems @cite_40 .
{ "cite_N": [ "@cite_40", "@cite_14" ], "mid": [ "2157092301", "2142881874" ], "abstract": [ "Video-based Face Recognition (VFR) can be converted to the matching of two image sets containing face images captured from each video. For this purpose, we propose to bridge the two sets with a reference image set that is well-defined and pre-structured to a number of local models offline. In other words, given two image sets, as long as each of them is aligned to the reference set, they are mutually aligned and well structured. Therefore, the similarity between them can be computed by comparing only the corresponded local models rather than considering all the pairs. To align an image set with the reference set, we further formulate the problem as a quadratic programming. It integrates three constrains to guarantee robust alignment, including appearance matching cost term exploiting principal angles, geometric structure consistency using affine invariant reconstruction weights, smoothness constraint preserving local neighborhood relationship. Extensive experimental evaluations are performed on three databases: Honda, MoBo and YouTube. Compared with competing methods, our approach can consistently achieve better results.", "Hashing has emerged as a popular technique for fast nearest neighbor search in gigantic databases. In particular, learning based hashing has received considerable attention due to its appealing storage and search efficiency. However, the performance of most unsupervised learning based hashing methods deteriorates rapidly as the hash code length increases. We argue that the degraded performance is due to inferior optimization procedures used to achieve discrete binary codes. This paper presents a graph-based unsupervised hashing model to preserve the neighborhood structure of massive data in a discrete code space. We cast the graph hashing problem into a discrete optimization framework which directly learns the binary codes. 
A tractable alternating maximization algorithm is then proposed to explicitly deal with the discrete constraints, yielding high-quality codes to well capture the local neighborhoods. Extensive experiments performed on four large datasets with up to one million samples show that our discrete optimization based graph hashing method obtains superior search accuracy over state-of-the-art un-supervised hashing methods, especially for longer codes." ] }
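The abstract above mentions that the learned codes are fed to "a simple voting classifier" for video-based recognition. A minimal sketch of that final stage, assuming per-frame binary codes have already been computed (the codes and labels below are toy values, not from the paper): each frame votes for the label of its nearest gallery code under Hamming distance, and the video takes the majority vote.

```python
import numpy as np
from collections import Counter

def vote_identity(query_codes, gallery_codes, gallery_labels):
    """Classify a face video by majority vote over its frames.

    Each frame's binary code votes for the label of its nearest gallery
    code under Hamming distance; the video-level prediction is the most
    frequent vote. Codes are {0, 1} arrays of equal length.
    """
    votes = []
    for q in query_codes:
        dists = [np.count_nonzero(q != g) for g in gallery_codes]
        votes.append(gallery_labels[int(np.argmin(dists))])
    return Counter(votes).most_common(1)[0][0]

gallery_codes = np.array([[0, 0, 0, 0],   # subject A
                          [1, 1, 1, 1]])  # subject B
gallery_labels = ["A", "B"]

# Three frames of a query video: two near A's code, one noisy outlier.
query = np.array([[0, 0, 0, 0],
                  [0, 0, 0, 1],
                  [1, 1, 1, 0]])
print(vote_identity(query, gallery_codes, gallery_labels))  # -> A
```

The voting step is what makes the per-frame code redundancy noted above tolerable: a few badly coded frames are outvoted by the rest of the clip.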
1507.02703
826954055
While general object recognition is still far from being solved, this paper proposes a way for a robot to recognize every object at almost human-level accuracy. Our key observation is that many robots will stay in a relatively closed environment (e.g. a house or an office). By constraining a robot to a limited territory, we can ensure that the robot has seen most objects before and that new objects are introduced only slowly. Furthermore, we can build a 3D map of the environment to reliably subtract the background and make recognition easier. We propose extremely robust algorithms to obtain a 3D map and enable humans to collectively annotate objects. At test time, our algorithm recognizes all objects very reliably, and queries humans via a crowd-sourcing platform if confidence is low or new objects are identified. This paper explains the design decisions in building such a system, and constructs a benchmark for extensive evaluation. Experiments suggest that making robot vision appear to work from an end user's perspective is a reachable goal today, as long as the robot stays in a closed environment. By formulating this task, we hope to lay the foundation of a new direction in vision for robotics. Code and data will be available upon acceptance.
There is a vast literature on object recognition from 2D, 3D, RGB-D and video data in computer vision and robotics. For category-level recognition, the state-of-the-art object detectors are @cite_0 @cite_7 @cite_11 , and @cite_25 @cite_36 for RGB-D images. @cite_35 @cite_17 @cite_37 @cite_30 @cite_15 are popular semantic segmentation systems. However, category-level recognition is still far from human performance. For instance-level recognition, well-known approaches include @cite_6 @cite_24 . For RGB-D images, the state-of-the-art methods @cite_9 @cite_26 focus on recognizing table-top objects against a clean artificial background, with object models carefully pre-scanned from all view angles @cite_8 @cite_12 (Figure ). Our approach builds on these successes, extending them to realistic scenes.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_37", "@cite_26", "@cite_7", "@cite_8", "@cite_36", "@cite_9", "@cite_17", "@cite_6", "@cite_0", "@cite_24", "@cite_15", "@cite_25", "@cite_12", "@cite_11" ], "mid": [ "1573897183", "2066813062", "2067912884", "2054329862", "2168356304", "2156222070", "1565402342", "2084635560", "125693051", "2059412355", "2102605133", "2141362318", "2075274454", "116751493", "2005756025", "1989684337" ], "abstract": [ "Recently introduced RGB-D cameras are capable of providing high quality synchronized videos of both color and depth. With its advanced sensing capabilities, this technology represents an opportunity to dramatically increase the capabilities of object recognition. It also raises the problem of developing expressive features for the color and depth channels of these sensors. In this paper we introduce hierarchical matching pursuit (HMP) for RGB-D data. HMP uses sparse coding to learn hierarchical feature representations from raw RGB-D data in an unsupervised way. Extensive experiments on various datasets indicate that the features learned with our approach enable superior object recognition results using linear support vector machines.", "Scene labeling research has mostly focused on outdoor scenes, leaving the harder case of indoor scenes poorly understood. Microsoft Kinect dramatically changed the landscape, showing great potentials for RGB-D perception (color+depth). Our main objective is to empirically understand the promises and challenges of scene labeling with RGB-D. We use the NYU Depth Dataset as collected and analyzed by Silberman and Fergus [30]. For RGB-D features, we adapt the framework of kernel descriptors that converts local similarities (kernels) to patch descriptors. For contextual modeling, we combine two lines of approaches, one using a superpixel MRF, and the other using a segmentation tree. 
We find that (1) kernel descriptors are very effective in capturing appearance (RGB) and shape (D) similarities; (2) both superpixel MRF and segmentation tree are useful in modeling context; and (3) the key to labeling accuracy is the ability to efficiently train and test with large-scale data. We improve labeling accuracy on the NYU Dataset from 56.6 to 76.1 . We also apply our approach to image-only scene labeling and improve the accuracy on the Stanford Background Dataset from 79.4 to 82.9 .", "We address the problems of contour detection, bottom-up grouping and semantic segmentation using RGB-D data. We focus on the challenging setting of cluttered indoor scenes, and evaluate our approach on the recently introduced NYU-Depth V2 (NYUD2) dataset [27]. We propose algorithms for object boundary detection and hierarchical segmentation that generalize the gPb-ucm approach of [2] by making effective use of depth information. We show that our system can label each contour with its type (depth, normal or albedo). We also propose a generic method for long-range amodal completion of surfaces and show its effectiveness in grouping. We then turn to the problem of semantic segmentation and propose a simple approach that classifies super pixels into the 40 dominant object categories in NYUD2. We use both generic and class-specific features to encode the appearance and geometry of objects. We also show how our approach can be used for scene classification, and how this contextual information in turn improves object recognition. In all of these tasks, we report significant improvements over the state-of-the-art.", "A new system for object detection in cluttered RGB-D images is presented. Our main contribution is a new method called Bingham Procrustean Alignment (BPA) to align models with the scene. BPA uses point correspondences between oriented features to derive a probability distribution over possible model poses. 
The orientation component of this distribution, conditioned on the position, is shown to be a Bingham distribution. This result also applies to the classic problem of least-squares alignment of point sets, when point features are orientation-less, and gives a principled, probabilistic way to measure pose uncertainty in the rigid alignment problem. Our detection system leverages BPA to achieve more reliable object detections in clutter.", "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.", "Over the last decade, the availability of public image repositories and recognition benchmarks has enabled rapid progress in visual object category and instance detection. Today we are witnessing the birth of a new generation of sensing technologies capable of providing high quality synchronized videos of both color and depth, the RGB-D (Kinect-style) camera. With its advanced sensing capabilities and the potential for mass adoption, this technology represents an opportunity to dramatically increase robotic object recognition, manipulation, navigation, and interaction capabilities. 
In this paper, we introduce a large-scale, hierarchical multi-view object dataset collected using an RGB-D camera. The dataset contains 300 objects organized into 51 categories and has been made publicly available to the research community so as to enable rapid progress based on this promising technology. This paper describes the dataset collection procedure and introduces techniques for RGB-D based object recognition and detection, demonstrating that combining color and depth information substantially improves quality of results.", "In this paper we study the problem of object detection for RGB-D images using semantically rich image and depth features. We propose a new geocentric embedding for depth images that encodes height above ground and angle with gravity for each pixel in addition to the horizontal disparity. We demonstrate that this geocentric embedding works better than using raw depth images for learning feature representations with convolutional neural networks. Our final object detection system achieves an average precision of 37.3%, which is a 56% relative improvement over existing methods. We then focus on the task of instance segmentation where we label pixels belonging to object instances found by our detector. For this task, we propose a decision forest approach that classifies pixels in the detection window as foreground or background using a family of unary and binary tests that query shape and geocentric pose features. Finally, we use the output from our object detectors in an existing superpixel classification framework for semantic scene segmentation and achieve a 24% relative improvement over current state-of-the-art for the object categories that we study. 
We believe advances such as those represented in this paper will facilitate the use of perception in fields like robotics.", "We present an object recognition system which leverages the additional sensing and calibration information available in a robotics setting together with large amounts of training data to build high fidelity object models for a dataset of textured household objects. We then demonstrate how these models can be used for highly accurate detection and pose estimation in an end-to-end robotic perception system incorporating simultaneous segmentation, object classification, and pose fitting. The system can handle occlusions, illumination changes, multiple objects, and multiple instances of the same object. The system placed first in the ICRA 2011 Solutions in Perception instance recognition challenge. We believe the presented paradigm of building rich 3D models at training time and including depth information at test time is a promising direction for practical robotic perception systems.", "We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. We also contribute a novel integer programming formulation to infer physical support relations. We offer a new dataset of 1449 RGBD images, capturing 464 diverse indoor scenes, with detailed annotations. 
Our experiments demonstrate our ability to infer support relations in complex scenes and verify that our 3D scene cues and inferred support lead to better object segmentation.", "This article introduces a novel representation for three-dimensional (3D) objects in terms of local affine-invariant descriptors of their images and the spatial relationships between the corresponding surface patches. Geometric constraints associated with different views of the same patches under affine projection are combined with a normalized representation of their appearance to guide matching and reconstruction, allowing the acquisition of true 3D affine and Euclidean models from multiple unregistered images, as well as their recognition in photographs taken from arbitrary viewpoints. The proposed approach does not require a separate segmentation stage, and it is applicable to highly cluttered scenes.", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. 
Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.", "In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, \"web-scale\" image corpora.", "We propose a view-based approach for labeling objects in 3D scenes reconstructed from RGB-D (color+depth) videos. We utilize sliding window detectors trained from object views to assign class probabilities to pixels in every RGB-D frame. These probabilities are projected into the reconstructed 3D scene and integrated using a voxel representation. We perform efficient inference on a Markov Random Field over the voxels, combining cues from view-based detection and 3D shape, to label the scene. 
Our detection-based approach produces accurate scene labeling on the RGB-D Scenes Dataset and improves the robustness of object detection.", "The depth information of RGB-D sensors has greatly simplified some common challenges in computer vision and enabled breakthroughs for several tasks. In this paper, we propose to use depth maps for object detection and design a 3D detector to overcome the major difficulties for recognition, namely the variations of texture, illumination, shape, viewpoint, clutter, occlusion, self-occlusion and sensor noises. We take a collection of 3D CAD models and render each CAD model from hundreds of viewpoints to obtain synthetic depth maps. For each depth rendering, we extract features from the 3D point cloud and train an Exemplar-SVM classifier. During testing and hard-negative mining, we slide a 3D detection window in 3D space. Experiment results show that our 3D detector significantly outperforms the state-of-the-art algorithms for both RGB and RGB-D images, and achieves about ×1.7 improvement on average precision compared to DPM and R-CNN. All source code and data are available online.", "The state of the art in computer vision has rapidly advanced over the past decade largely aided by shared image datasets. However, most of these datasets tend to consist of assorted collections of images from the web that do not include 3D information or pose information. Furthermore, they target the problem of object category recognition—whereas solving the problem of object instance recognition might be sufficient for many robotic tasks. To address these issues, we present a highquality, large-scale dataset of 3D object instances, with accurate calibration information for every image. We anticipate that “solving” this dataset will effectively remove many perceptionrelated problems for mobile, sensing-based robots. 
The contributions of this work consist of: (1) BigBIRD, a dataset of 100 objects (and growing), composed of, for each object, 600 3D point clouds and 600 high-resolution (12 MP) images spanning all views, (2) a method for jointly calibrating a multi-camera system, (3) details of our data collection system, which collects all required data for a single object in under 6 minutes with minimal human effort, and (4) multiple software components (made available in open source), used to automate multi-sensor calibration and the data collection process. All code and data are available at http://rll.eecs.berkeley.edu/bigbird.", "This paper proposes a conceptually simple but surprisingly powerful method which combines the effectiveness of a discriminative object detector with the explicit correspondence offered by a nearest-neighbor approach. The method is based on training a separate linear SVM classifier for every exemplar in the training set. Each of these Exemplar-SVMs is thus defined by a single positive instance and millions of negatives. While each detector is quite specific to its exemplar, we empirically observe that an ensemble of such Exemplar-SVMs offers surprisingly good generalization. Our performance on the PASCAL VOC detection task is on par with the much more complex latent part-based model of , at only a modest computational cost increase. But the central benefit of our approach is that it creates an explicit association between each detection and a single training exemplar. Because most detections show good alignment to their associated exemplar, it is possible to transfer any available exemplar meta-data (segmentation, geometric structure, 3D model, etc.) directly onto the detections, which can then be used as part of overall scene understanding." ] }
1507.02703
826954055
While general object recognition is still far from being solved, this paper proposes a way for a robot to recognize every object at almost human-level accuracy. Our key observation is that many robots will stay in a relatively closed environment (e.g. a house or an office). By constraining a robot to stay in a limited territory, we can ensure that the robot has seen most objects before and the speed of introducing a new object is slow. Furthermore, we can build a 3D map of the environment to reliably subtract the background to make recognition easier. We propose extremely robust algorithms to obtain a 3D map and enable humans to collectively annotate objects. During testing time, our algorithm can recognize all objects very reliably, and query humans from a crowdsourcing platform if confidence is low or new objects are identified. This paper explains design decisions in building such a system, and constructs a benchmark for extensive evaluation. Experiments suggest that making robot vision appear to be working from an end user's perspective is a reachable goal today, as long as the robot stays in a closed environment. By formulating this task, we hope to lay the foundation of a new direction in vision for robotics. Code and data will be available upon acceptance.
Our 3D mapping is related to RGB-D reconstruction @cite_33 @cite_10 @cite_18 @cite_13 @cite_14 @cite_16 and localization @cite_4 @cite_1 . Our algorithm is closest to the RGB-D Structure from Motion (SfM) from @cite_16 . We extend their algorithm to utilize four RGB-D sensors and encode the camera height as a hard constraint. We design a special trajectory to control the robot to move in a way with significant redundancy to favor loop closing to ensure good 3D reconstructions. Toward semantics, there are several seminal works on combining 3D mapping and object recognition on RGB-D scans @cite_27 @cite_39 @cite_5 @cite_21 @cite_38 . There are also several seminal works in image domain as well @cite_34 @cite_31 @cite_28 @cite_19 .
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_14", "@cite_4", "@cite_33", "@cite_28", "@cite_10", "@cite_21", "@cite_1", "@cite_34", "@cite_39", "@cite_19", "@cite_27", "@cite_5", "@cite_31", "@cite_16", "@cite_13" ], "mid": [ "2060775765", "1716229439", "1957167950", "1989476314", "1987648924", "2092399515", "1187244281", "2738695980", "2081605477", "", "2097696373", "2145567954", "2082761562", "1020063629", "2060772243", "1985238052", "2071906076" ], "abstract": [ "We build on recent fast and accurate 3-D reconstruction techniques to segment objects during scene reconstruction. We take object outline information from change detection to build 3-D models of rigid objects and represent the scene as static and dynamic components. Object models are updated online during mapping, and can integrate segmentation information from sources other than change detection.", "In this paper we present an extension to the KinectFusion algorithm that permits dense mesh-based mapping of extended scale environments in real-time. This is achieved through (i) altering the original algorithm such that the region of space being mapped by the KinectFusion algorithm can vary dynamically, (ii) extracting a dense point cloud from the regions that leave the KinectFusion volume due to this variation, and, (iii) incrementally adding the resulting points to a triangular mesh representation of the environment. The system is implemented as a set of hierarchical multi-threaded components which are capable of operating in real-time. The architecture facilitates the creation and integration of new modules with minimal impact on the performance on the dense volume tracking and surface reconstruction modules. We provide experimental results demonstrating the system’s ability to map areas considerably beyond the scale of the original KinectFusion algorithm including a two story apartment and an extended sequence taken from a car at night. 
In order to overcome failure of the iterative closest point (ICP) based odometry in areas of low geometric features we have evaluated the Fast Odometry from Vision (FOVIS) system as an alternative. We provide a comparison between the two approaches where we show a trade off between the reduced drift of the visual odometry approach and the higher local mesh quality of the ICP-based approach. Finally we present ongoing work on incorporating full simultaneous localisation and mapping (SLAM) pose-graph optimisation.", "We present an approach to indoor scene reconstruction from RGB-D video. The key idea is to combine geometric registration of scene fragments with robust global optimization based on line processes. Geometric registration is error-prone due to sensor noise, which leads to aliasing of geometric detail and inability to disambiguate different surfaces in the scene. The presented optimization approach disables erroneous geometric alignments even when they significantly outnumber correct ones. Experimental results demonstrate that the presented approach substantially increases the accuracy of reconstructed scene models.", "We address the problem of inferring the pose of an RGB-D camera relative to a known 3D scene, given only a single acquired image. Our approach employs a regression forest that is capable of inferring an estimate of each pixel's correspondence to 3D points in the scene's world coordinate frame. The forest uses only simple depth and RGB pixel comparison features, and does not require the computation of feature descriptors. The forest is trained to be capable of predicting correspondences at any pixel, so no interest point detectors are required. The camera pose is inferred using a robust optimization scheme. This starts with an initial set of hypothesized camera poses, constructed by applying the forest at a small fraction of image pixels. 
Preemptive RANSAC then iterates sampling more pixels at which to evaluate the forest, counting inliers, and refining the hypothesized poses. We evaluate on several varied scenes captured with an RGB-D camera and observe that the proposed technique achieves highly accurate relocalization and substantially out-performs two state of the art baselines.", "We present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware. We fuse all of the depth data streamed from a Kinect sensor into a single global implicit surface model of the observed scene in real-time. The current sensor pose is simultaneously obtained by tracking the live depth frame relative to the global model using a coarse-to-fine iterative closest point (ICP) algorithm, which uses all of the observed depth data available. We demonstrate the advantages of tracking against the growing full surface model compared with frame-to-frame tracking, obtaining tracking and mapping results in constant time within room sized scenes with limited drift and high accuracy. We also show both qualitative and quantitative results relating to various aspects of our tracking and mapping system. Modelling of natural scenes, in real-time with only commodity sensor and GPU hardware, promises an exciting step forward in augmented reality (AR), in particular, it allows dense surfaces to be reconstructed in real-time, with a level of detail and robustness beyond any solution yet presented using passive computer vision.", "Visual scene understanding is a difficult problem interleaving object detection, geometric reasoning and scene classification. We present a hierarchical scene model for learning and reasoning about complex indoor scenes which is computationally tractable, can be learned from a reasonable amount of training data, and avoids oversimplification. 
At the core of this approach is the 3D Geometric Phrase Model which captures the semantic and geometric relationships between objects which frequently co-occur in the same 3D spatial configuration. Experiments show that this model effectively explains scene semantics, geometry and object groupings from a single image, while also improving individual object detections.", "RGB-D cameras are novel sensing systems that capture RGB images along with per-pixel depth information. In this paper we investigate how such cameras can be used in the context of robotics, specifically for building dense 3D maps of indoor environments. Such maps have applications in robot navigation, manipulation, semantic mapping, and telepresence. We present RGB-D Mapping, a full 3D mapping system that utilizes a novel joint optimization algorithm combining visual features and shape-based alignment. Visual and depth information are also combined for view-based loop closure detection, followed by pose optimization to achieve globally consistent maps.We evaluate RGB-D Mapping on two large indoor environments, and show that it effectively combines the visual and shape information available from RGB-D cameras.", "In this paper we present work in progress on the development of a low-cost autonomous robotic platform that integrates multiple state-of-the-art techniques in RGB-D perception to form a system capable of completing a real-world task in an entirely autonomous fashion. The task we set out to complete is determining the location of a preselected object within the physical world. This experiment requires a robotic framework with a number of capabilities including autonomous exploration, dense real-time localisation and mapping, object detection, path planning and motion control.", "We address the problem of estimating the pose of a cam- era relative to a known 3D scene from a single RGB-D frame. 
We formulate this problem as inversion of the generative rendering procedure, i.e., we want to find the camera pose corresponding to a rendering of the 3D scene model that is most similar with the observed input. This is a non-convex optimization problem with many local optima. We propose a hybrid discriminative-generative learning architecture that consists of: (i) a set of M predictors which generate M camera pose hypotheses, and (ii) a 'selector' or 'aggregator' that infers the best pose from the multiple pose hypotheses based on a similarity function. We are interested in predictors that not only produce good hypotheses but also hypotheses that are different from each other. Thus, we propose and study methods for learning 'marginally relevant' predictors, and compare their performance when used with different selection procedures. We evaluate our method on a recently released 3D reconstruction dataset with challenging camera poses, and scene variability. Experiments show that our method learns to make multiple predictions that are marginally relevant and can effectively select an accurate prediction. Furthermore, our method outperforms the state-of-the-art discriminative approach for camera relocalization.", "", "We present the major advantages of a new 'object oriented' 3D SLAM paradigm, which takes full advantage in the loop of prior knowledge that many scenes consist of repeated, domain-specific objects and structures. As a hand-held depth camera browses a cluttered scene, real-time 3D object recognition and tracking provides 6DoF camera-object constraints which feed into an explicit graph of objects, continually refined by efficient pose-graph optimisation. This offers the descriptive and predictive power of SLAM systems which perform dense surface reconstruction, but with a huge representation compression. 
The object graph enables predictions for accurate ICP-based camera to model tracking at each live frame, and efficient active search for new objects in currently undescribed image regions. We demonstrate real-time incremental SLAM in large, cluttered environments, including loop closure, relocalisation and the detection of moved objects, and of course the generation of an object level scene description with the potential to enable interaction.", "There has been a recent push in extraction of 3D spatial layout of scenes. However, none of these approaches model the 3D interaction between objects and the spatial layout. In this paper, we argue for a parametric representation of objects in 3D, which allows us to incorporate volumetric constraints of the physical world. We show that augmenting current structured prediction techniques with volumetric reasoning significantly improves the performance of the state-of-the-art.", "In this paper, we present a system for automatically learning segmentations of objects given changes in dense RGB-D maps over the lifetime of a robot. Using recent advances in RGB-D mapping to construct multiple dense maps, we detect changes between mapped regions from multiple traverses by performing a 3-D difference of the scenes. Our method takes advantage of the free space seen in each map to account for variability in how the maps were created. The resulting changes from the 3-D difference are our discovered objects, which are then used to train multiple segmentation algorithms in the original map. The final objects can then be matched in other maps given their corresponding features and learned segmentation method. If the same object is discovered multiple times in different contexts, the features and segmentation method are refined, incorporating all instances to better learn objects over time. 
We verify our approach with multiple objects in numerous and varying maps.", "In this paper we propose an extension to the KinectFusion approach which enables both SLAM-graph optimization, usually required on large looping routes, as well as discovery of semantic information in the form of object detection and localization. Global optimization is achieved by incorporating the notion of keyframe into a KinectFusion-style approach, thus providing the system with the ability to explore large environments and maintain a globally consistent map. Moreover, we integrate into the system our recent object detection approach based on a new Semantic Bundle Adjustment paradigm, thereby achieving joint detection, tracking and mapping. Although our current implementation is not optimized for real-time operation, the principles and ideas set forth in this paper can be considered a relevant contribution towards a Semantic KinectFusion system.", "Conventional rigid structure from motion (SFM) addresses the problem of recovering the camera parameters (motion) and the 3D locations (structure) of scene points, given observed 2D image feature points. In this paper, we propose a new formulation called Semantic Structure From Motion (SSFM). In addition to the geometrical constraints provided by SFM, SSFM takes advantage of both semantic and geometrical properties associated with objects in the scene (Fig. 1). These properties allow us to recover not only the structure and motion but also the 3D locations, poses, and categories of objects in the scene. We cast this problem as a max-likelihood problem where geometry (cameras, points, objects) and semantic information (object classes) are simultaneously estimated. The key intuition is that, in addition to image features, the measurements of objects across views provide additional geometrical constraints that relate cameras and scene parameters. 
These constraints make the geometry estimation process more robust and, in turn, make object detection more accurate. Our framework has the unique ability to: i) estimate camera poses only from object detections, ii) enhance camera pose estimation, compared to feature-point-based SFM algorithms, iii) improve object detections given multiple un-calibrated images, compared to independently detecting objects in single images. Extensive quantitative results on three datasets–LiDAR cars, street-view pedestrians, and Kinect office desktop–verify our theoretical claims.", "Existing scene understanding datasets contain only a limited set of views of a place, and they lack representations of complete 3D spaces. In this paper, we introduce SUN3D, a large-scale RGB-D video database with camera pose and object labels, capturing the full 3D extent of many places. The tasks that go into constructing such a dataset are difficult in isolation -- hand-labeling videos is painstaking, and structure from motion (SfM) is unreliable for large spaces. But if we combine them together, we make the dataset construction task much easier. First, we introduce an intuitive labeling tool that uses a partial reconstruction to propagate labels from one frame to another. Then we use the object labels to fix errors in the reconstruction. For this, we introduce a generalization of bundle adjustment that incorporates object-to-object correspondences. This algorithm works by constraining points for the same object from different frames to lie inside a fixed-size bounding box, parameterized by its rotation and translation. The SUN3D database, the source code for the generalized bundle adjustment, and the web-based 3D annotation tool are all available at http://sun3d.cs.princeton.edu.", "Online 3D reconstruction is gaining newfound interest due to the availability of real-time consumer depth cameras. 
The basic problem takes live overlapping depth maps as input and incrementally fuses these into a single 3D model. This is challenging particularly when real-time performance is desired without trading quality or scale. We contribute an online system for large and fine scale volumetric reconstruction based on a memory and speed efficient data structure. Our system uses a simple spatial hashing scheme that compresses space, and allows for real-time access and updates of implicit surface data, without the need for a regular or hierarchical grid data structure. Surface data is only stored densely where measurements are observed. Additionally, data can be streamed efficiently in or out of the hash table, allowing for further scalability during sensor motion. We show interactive reconstructions of a variety of scenes, reconstructing both fine-grained details and large scale environments. We illustrate how all parts of our pipeline from depth map pre-processing, camera pose estimation, depth map fusion, and surface rendering are performed at real-time rates on commodity graphics hardware. We conclude with a comparison to current state-of-the-art online systems, illustrating improved performance and reconstruction quality." ] }
1507.02444
1751436528
This paper investigates the information-theoretic limits of energy-harvesting (EH) channels in the finite blocklength regime. The EH process is characterized by a sequence of i.i.d. random variables with finite variances. We use the save-and-transmit strategy proposed by Ozel and Ulukus (2012) together with Shannon’s non-asymptotic achievability bound to obtain lower bounds on the achievable rates for both additive white Gaussian noise channels and discrete memoryless channels under EH constraints. The first-order terms of the lower bounds of the achievable rates are equal to @math and the second-order (backoff from capacity) terms are proportional to @math , where @math denotes the blocklength and @math denotes the capacity of the EH channel, which is the same as the capacity without the EH constraints. The constant of proportionality of the backoff term is found and qualitative interpretations are provided.
Mao and Hassibi @cite_22 investigated the capacity of an energy-harvesting transmitter with a finite battery over a discrete memoryless channel (DMC). It was shown that the capacity can be described using the Verdú-Han general framework @cite_10 . If the transmitted symbol depends only on the currently available energy, the system reduces to a finite-state channel. However, it is analytically intractable to explicitly characterize the capacity, and even lower bounds on the capacity can only be evaluated numerically. A special scenario of the same problem, namely the capacity of a noiseless binary channel with binary energy arrivals and a unit-capacity battery, was discussed in @cite_5 . The channel was shown to be equivalent to an additive geometric-noise timing channel with causal information of the noise available at the transmitter. Achievable strategies were proposed along with upper bounds, which were then improved in @cite_4 . Ozel @cite_24 considered a noiseless binary energy harvesting channel with on-off fading.
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_24", "@cite_5", "@cite_10" ], "mid": [ "2029417770", "2020968657", "1655921090", "1987865487", "2020347709" ], "abstract": [ "We consider a binary energy harvesting channel (BEHC) where the encoder has unit energy storage capacity. We first show that an encoding scheme based on block indexing is asymptotically optimal for small energy harvesting rates. We then present a novel upper bounding technique, which upper bounds the rate by lower-bounding the rate of information leakage to the receiver regarding the energy harvesting process. Finally, we propose a timing based hybrid encoding scheme that achieves rates within 0:03 bits channel use of the upper bound; hence determining the capacity to within 0:03 bits channel use. I. INTRODUCTION", "We consider the problem of determining the capacity of an energy-harvesting transmitter with finite battery communicating over a discrete memoryless channel. When the battery is unlimited, or zero, the capacity has been determined, but it remains unknown for a finite non-zero battery. In this paper we assume that the harvested energy at each time, the total battery storage, and the transmitter signal energy at each time can be quantized to the same unit (i.e., the same energy interval). Under this assumption, we show that the capacity can be described using the Verdu-Han general framework. If we further assume that the transmitted symbol at each time depends only on the energy currently available, and not on the entire past history of energy harvests and symbols transmitted, then we show that the system reduces to a finite state channel (FSC) with the required ergodic and Markov properties so that lower bounds on the capacity can be readily numerically computed. We conjecture that our numerical bounds are tight. 
Our numerical results indicate that even the minimal possible battery storage can reap a significant fraction of the infinite battery capacity.", "A noiseless binary energy harvesting channel with on-off fading is considered. When causal fading state information is available at the transmitter only, an equivalent timing channel with additive geometric noise and noise information known at the transmitter is obtained. In this channel, the transmitter's strategy is a stopping rule with respect to the channel fade levels given the message and the additive noise. Next, capacity when energy arrival information is available at the receiver and capacity when both energy arrival and fading information are available at the receiver are obtained. Additionally, several achievable schemes are proposed and evaluated.", "We consider the capacity of an energy harvesting communication channel with a finite-sized battery. As an abstraction of this problem, we consider a system where energy arrives at the encoder in multiples of a fixed quantity, and the physical layer is modeled accordingly as a finite discrete alphabet channel based on this fixed quantity. Further, for tractability, we consider the case of binary energy arrivals into a unit-capacity battery over a noiseless binary channel. Viewing the available energy as state, this is a state-dependent channel with causal state information available only at the transmitter. Further, the state is correlated over time and the channel inputs modify the future states. We show that this channel is equivalent to an additive geometric-noise timing channel with causal information of the noise available at the transmitter. We provide a single-letter capacity expression involving an auxiliary random variable, and evaluate this expression with certain auxiliary random variable selection, which resembles noise concentration and lattice-type coding in the timing channel. 
We evaluate the achievable rates by the proposed auxiliary selection and extend our results to noiseless ternary channels.", "A formula for the capacity of arbitrary single-user channels without feedback (not necessarily information stable, stationary, etc.) is proved. Capacity is shown to equal the supremum, over all input processes, of the input-output inf-information rate defined as the liminf in probability of the normalized information density. The key to this result is a new converse approach based on a simple new lower bound on the error probability of m-ary hypothesis tests among equiprobable hypotheses. A necessary and sufficient condition for the validity of the strong converse is given, as well as general expressions for ε-capacity." ] }
1507.02444
1751436528
This paper investigates the information-theoretic limits of energy-harvesting (EH) channels in the finite blocklength regime. The EH process is characterized by a sequence of i.i.d. random variables with finite variances. We use the save-and-transmit strategy proposed by Ozel and Ulukus (2012) together with Shannon’s non-asymptotic achievability bound to obtain lower bounds on the achievable rates for both additive white Gaussian noise channels and discrete memoryless channels under EH constraints. The first-order terms of the lower bounds of the achievable rates are equal to @math and the second-order (backoff from capacity) terms are proportional to @math , where @math denotes the blocklength and @math denotes the capacity of the EH channel, which is the same as the capacity without the EH constraints. The constant of proportionality of the backoff term is found and qualitative interpretations are provided.
As mentioned above, finite blocklength analysis for EH channels was previously considered only by Yang @cite_6 . However, the channel considered therein is noiseless and has binary inputs and binary outputs. Our framework is considerably more general and we consider noisy discrete as well as Gaussian channels from a finite blocklength perspective. The study of finite blocklength fundamental limits in Shannon-theoretic problems was undertaken by Polyanskiy, Poor and Verdú @cite_18 . Such a study is useful as it provides guidelines regarding the required backoff from the asymptotic fundamental limit (capacity) when one operates at finite blocklengths. For a survey, please see @cite_2 .
{ "cite_N": [ "@cite_18", "@cite_6", "@cite_2" ], "mid": [ "2106864314", "1997773358", "2051707898" ], "abstract": [ "This paper investigates the maximal channel coding rate achievable at a given blocklength and error probability. For general classes of channels new achievability and converse bounds are given, which are tighter than existing bounds for wide ranges of parameters of interest, and lead to tight approximations of the maximal achievable rate for blocklengths n as short as 100. It is also shown analytically that the maximal rate achievable with error probability ? isclosely approximated by C - ?(V n) Q-1(?) where C is the capacity, V is a characteristic of the channel referred to as channel dispersion , and Q is the complementary Gaussian cumulative distribution function.", "", "This monograph presents a unified treatment of single- and multi-user problems in Shannon's information theory where we depart from the requirement that the error probability decays asymptotically in the blocklength. Instead, the error probabilities for various problems are bounded above by a non-vanishing constant and the spotlight is shone on achievable coding rates as functions of the growing blocklengths. This represents the study of asymptotic estimates with non-vanishing error probabilities.In Part I, after reviewing the fundamentals of information theory, we discuss Strassen's seminal result for binary hypothesis testing where the type-I error probability is non-vanishing and the rate of decay of the type-II error probability with growing number of independent observations is characterized. In Part II, we use this basic hypothesis testing result to develop second- and sometimes, even third-order asymptotic expansions for point-to-point communication. Finally in Part III, we consider network information theory problems for which the second order asymptotics are known. 
These problems include some classes of channels with random state, the multiple-encoder distributed lossless source coding (Slepian-Wolf) problem and special cases of the Gaussian interference and multiple-access channels. Finally, we discuss avenues for further research." ] }
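The normal approximation C − √(V/n)·Q⁻¹(ε) cited in the record above can be evaluated numerically. Below is a minimal sketch for a binary symmetric channel, using the textbook BSC dispersion V = p(1−p)·log₂²((1−p)/p); the formula and all parameter values are standard illustrative assumptions, not taken from this record:

```python
from math import log2, sqrt
from statistics import NormalDist

def bsc_normal_approx(p, n, eps):
    """Normal approximation R ~ C - sqrt(V/n) * Q^{-1}(eps) for a BSC(p)."""
    h = -p * log2(p) - (1 - p) * log2(1 - p)            # binary entropy
    capacity = 1 - h                                     # bits per channel use
    dispersion = p * (1 - p) * log2((1 - p) / p) ** 2    # channel dispersion V
    q_inv = NormalDist().inv_cdf(1 - eps)                # Q^{-1}(eps)
    return capacity - sqrt(dispersion / n) * q_inv

# the backoff from capacity shrinks as the blocklength grows
print(bsc_normal_approx(0.11, 500, 1e-3))
print(bsc_normal_approx(0.11, 2000, 1e-3))
```

The approximate rate stays below capacity and approaches it as n grows, matching the qualitative behaviour described in the abstract.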
1507.02293
1724705265
Information diffusion in online social networks is affected by the underlying network topology, but it also has the power to change it. Online users are constantly creating new links when exposed to new information sources, and in turn these links are altering the way information spreads. However, these two highly intertwined stochastic processes, information diffusion and network evolution, have been predominantly studied separately, ignoring their co-evolutionary dynamics. We propose a temporal point process model, COEVOLVE, for such joint dynamics, allowing the intensity of one process to be modulated by that of the other. This model allows us to efficiently simulate interleaved diffusion and network events, and generate traces obeying common diffusion and network patterns observed in real-world networks. Furthermore, we also develop a convex optimization framework to learn the parameters of the model from historical diffusion and network evolution traces. We experimented with both synthetic data and data gathered from Twitter, and show that our model provides a good fit to the data as well as more accurate predictions than alternatives.
However, there are fundamental differences between the above-mentioned studies and our work. First, they only characterize the effect that information diffusion has on the network dynamics, but not the bidirectional influence. In contrast, our probabilistic generative model takes into account the bidirectional influence between information diffusion and network dynamics. Second, previous studies are mostly empirical and only make binary predictions on link creation events. For example, @cite_80 @cite_50 predict whether a new link will be created based on the number of retweets, and @cite_23 predicts whether a burst of new links will occur based on the number of retweets and users' similarity. However, our model is able to learn parameters from real world data, and predict the precise timing of both diffusion and new link events.
{ "cite_N": [ "@cite_80", "@cite_23", "@cite_50" ], "mid": [ "2949362199", "2952108851", "" ], "abstract": [ "Every day millions of users are connected through online social networks, generating a rich trove of data that allows us to study the mechanisms behind human interactions. Triadic closure has been treated as the major mechanism for creating social links: if Alice follows Bob and Bob follows Charlie, Alice will follow Charlie. Here we present an analysis of longitudinal micro-blogging data, revealing a more nuanced view of the strategies employed by users when expanding their social circles. While the network structure affects the spread of information among users, the network is in turn shaped by this communication activity. This suggests a link creation mechanism whereby Alice is more likely to follow Charlie after seeing many messages by Charlie. We characterize users with a set of parameters associated with different link creation strategies, estimated by a Maximum-Likelihood approach. Triadic closure does have a strong effect on link formation, but shortcuts based on traffic are another key factor in interpreting network evolution. However, individual strategies for following other users are highly heterogeneous. Link creation behaviors can be summarized by classifying users in different categories with distinct structural and behavioral characteristics. Users who are popular, active, and influential tend to create traffic-based shortcuts, making the information diffusion process more efficient in the network.", "In online social media systems users are not only posting, consuming, and resharing content, but also creating new and destroying existing connections in the underlying social network. While each of these two types of dynamics has individually been studied in the past, much less is known about the connection between the two. 
How does user information posting and seeking behavior interact with the evolution of the underlying social network structure? Here, we study ways in which network structure reacts to users posting and sharing content. We examine the complete dynamics of the Twitter information network, where users post and reshare information while they also create and destroy connections. We find that the dynamics of network structure can be characterized by steady rates of change, interrupted by sudden bursts. Information diffusion in the form of cascades of post re-sharing often creates such sudden bursts of new connections, which significantly change users' local network structure. These bursts transform users' networks of followers to become structurally more cohesive as well as more homogenous in terms of follower interests. We also explore the effect of the information content on the dynamics of the network and find evidence that the appearance of new topics and real-world events can lead to significant changes in edge creations and deletions. Lastly, we develop a model that quantifies the dynamics of the network and the occurrence of these bursts as a function of the information spreading through the network. The model can successfully predict which information diffusion events will lead to bursts in network dynamics.", "" ] }
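The COEVOLVE model in the record above is built from temporal point processes with mutually modulated intensities. As a hedged illustration of the underlying machinery, the following sketch simulates a single self-exciting (Hawkes) process by Ogata-style thinning; the exponential kernel and all parameter values are illustrative assumptions, not the paper's actual model:

```python
import random
from math import exp

def simulate_hawkes(mu, alpha, beta, t_end, seed=0):
    """Ogata thinning for a self-exciting point process with intensity
    lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i))."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while True:
        # the intensity decays between events, so lambda(t) bounds lambda(s) for s > t
        lam_bar = mu + alpha * sum(exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)          # candidate next event time
        if t >= t_end:
            return events
        lam_t = mu + alpha * sum(exp(-beta * (t - ti)) for ti in events)
        if rng.random() * lam_bar <= lam_t:    # accept with probability lam_t / lam_bar
            events.append(t)

ev = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.5, t_end=50.0)
print(len(ev), "events on [0, 50)")
```

The branching ratio alpha/beta is kept below 1 so the simulated process is stable; in the paper's setting, one such intensity would additionally be modulated by events of the other process.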
1507.02043
835021485
Today’s cellular telecommunications markets require continuous monitoring and intervention by regulators in order to balance the interests of various stakeholders. In order to reduce the extent of regulatory involvements in the day-to-day business of cellular operators, the present paper proposes a “self-regulating” spectrum market regime named “society spectrum”. This regime provides a market-inherent and automatic self-balancing of stakeholder powers, which at the same time provides a series of coordination and fairness assurance functions that clearly distinguish it from “spectrum as a commons” solutions. The present paper will introduce the fundamental regulatory design and will elaborate on mechanisms to assure fairness among stakeholders and individuals. This work further puts the society spectrum into the context of contemporary radio access technologies and cognitive radio approaches.
Thus we conclude that spectrum represents a unique value for its exclusive holders and that even more restricted secondary usage scenarios are of interest. Moreover, self-imposed industry guidelines will be insufficient to address the complexity of the telecommunications market, especially when assuming limited industry morality (a common and useful assumption). The present work hence aims at establishing a system-inherent self-regulation based on a redesigned spectrum market, which is enacted by regulatory bodies. Following the position in @cite_9 , the present work intermingles commons-based and exclusively operated spectrum market alternatives in order to react dynamically to market conditions, and hence to reduce the need for regulatory interventions.
{ "cite_N": [ "@cite_9" ], "mid": [ "95383595" ], "abstract": [ "A collapsible plastic container comprises a pair of side walls and a pair of end walls each of plastic material formed with a plurality of air-circulation openings therethrough; and hinges mounting each end wall between a pair of side walls to form a container in which the walls are hinged for movement either to an open condition or to a collapsed condition. Each of the walls includes internally-extending bottom ledges adapted to receive a bottom wall of plastic material formed with a plurality of air-circulation openings therethrough. The side and end walls are formed with cooperable tongue and slot retainer elements to retain them in their open condition until the bottom wall is applied to the bottom ledges." ] }
1507.01892
2952561707
Many approaches to transform classification problems from non-linear to linear by feature transformation have been recently presented in the literature. These notably include sparse coding methods and deep neural networks. However, many of these approaches require the repeated application of a learning process upon the presentation of unseen data input vectors, or else involve the use of large numbers of parameters and hyper-parameters, which must be chosen through cross-validation, thus increasing running time dramatically. In this paper, we propose and experimentally investigate a new approach for the purpose of overcoming limitations of both kinds. The proposed approach makes use of a linear auto-associative network (called SCNN) with just one hidden layer. The combination of this architecture with a specific error function to be minimized enables one to learn a linear encoder computing a sparse code which turns out to be as similar as possible to the sparse coding that one obtains by re-training the neural network. Importantly, the linearity of SCNN and the choice of the error function allow one to achieve reduced running time in the learning phase. The proposed architecture is evaluated on the basis of two standard machine learning tasks. Its performances are compared with those of recently proposed non-linear auto-associative neural networks. The overall results suggest that linear encoders can be profitably used to obtain sparse data representations in the context of machine learning problems, provided that an appropriate error function is used during the learning phase.
Both networks, ASCNN and SAANN, are trained by means of the mini-batch stochastic gradient descent learning algorithm, since the error function, including the penalization term, is differentiable. The possibility of identifying valid alternatives to non-linear approaches to sparse coding, in the context of non-linear autoassociative networks with a single hidden layer, is suggested by some classification tasks which were successfully addressed on the basis of linear network approaches. Notably, linear approaches were successfully used to model the early stage responses of the visual system. In the early stage, visual information is represented by a small number of simultaneously active neurons among the much larger number of available neurons. The first attempt to model this behaviour of the visual system is due to Olshausen and Field ( @cite_17 ). The authors built a simple feed-forward neural network with a single weight layer (Sparsenet) where the observed data @math are a linear combination of top-down basis vectors @math and top-layer sparse responses @math . The sparsity of the solution is obtained by minimizing an error function composed of the standard sum-of-squares error and a regularization term, which can be expressed as follows:
{ "cite_N": [ "@cite_17" ], "mid": [ "2145889472" ], "abstract": [ "THE receptive fields of simple cells in mammalian primary visual cortex can be characterized as being spatially localized, oriented1–4 and bandpass (selective to structure at different spatial scales), comparable to the basis functions of wavelet transforms5,6. One approach to understanding such response properties of visual neurons has been to consider their relationship to the statistical structure of natural images in terms of efficient coding7–12. Along these lines, a number of studies have attempted to train unsupervised learning algorithms on natural images in the hope of developing receptive fields with similar properties13–18, but none has succeeded in producing a full set that spans the image space and contains all three of the above properties. Here we investigate the proposal8,12 that a coding strategy that maximizes sparseness is sufficient to account for these properties. We show that a learning algorithm that attempts to find sparse linear codes for natural scenes will develop a complete family of localized, oriented, bandpass receptive fields, similar to those found in the primary visual cortex. The resulting sparse image code provides a more efficient representation for later stages of processing because it possesses a higher degree of statistical independence among its outputs." ] }
1507.01981
773165544
In this paper we look at the problem of scheduling tasks on a single-processor system, where each task requires unit time and must be scheduled within a certain time window, and each task can be added to or removed from the system at any time. On each operation, the system is allowed to reschedule any tasks, but the goal is to minimize the number of rescheduled tasks. Our main result is an allocator that maintains a valid schedule for all tasks in the system if their time windows have constant size and reschedules O(1/ε · log(1/ε)) tasks on each insertion as ε → 0, where ε is a certain measure of the schedule flexibility of the system. We also show that it is optimal for any allocator that works on arbitrary instances. We also briefly mention a few variants of the problem, such as if the tasks have time windows of different sizes, for which we have an allocator that we conjecture reschedules only 1 task on each insertion if the schedule flexibility remains above a certain threshold.
Of the wide variety of scheduling problems, completely offline problems from real-world situations tend to be NP-complete even after simplifications @cite_12 . But practical scheduling problems usually involve continuous processes or continual requests, and hence many other scheduling problems are online in some sense. In addition, scheduling problems with intrinsic constraints invariably have to include some means of handling conflicting requests.
{ "cite_N": [ "@cite_12" ], "mid": [ "2137527671" ], "abstract": [ "Uncertainty is a very important concern in production scheduling since it can cause infeasibilities and production disturbances. Thus scheduling under uncertainty has received a lot of attention in the open literature in recent years from chemical engineering and operations research communities. The purpose of this paper is to review the main methodologies that have been developed to address the problem of uncertainty in production scheduling as well as to identify the main challenges in this area. The uncertainties in process scheduling are first analyzed, and the different mathematical approaches that exist to describe process uncertainties are classified. Based on the different descriptions for the uncertainties, alternative scheduling approaches and relevant optimization models are reviewed and discussed. Further research challenges in the field of process scheduling under uncertainty are identified and some new ideas are discussed." ] }
1507.02206
2950657666
Studies on friendships in online social networks involving geographic distance have so far relied on the city location provided in users' profiles. Consequently, most of the research on friendships have provided accuracy at the city level, at best, to designate a user's location. This study analyzes a Twitter dataset because it provides the exact geographic distance between corresponding users. We start by introducing a strong definition of "friend" on Twitter (i.e., a definition of bidirectional friendship), requiring bidirectional communication. Next, we utilize geo-tagged mentions delivered by users to determine their locations, where "@username" is contained anywhere in the body of tweets. To provide analysis results, we first introduce a friend counting algorithm. From the fact that Twitter users are likely to post consecutive tweets in the static mode, we also introduce a two-stage distance estimation algorithm. As the first of our main contributions, we verify that the number of friends of a particular Twitter user follows a well-known power-law distribution (i.e., a Zipf's distribution or a Pareto distribution). Our study also provides the following newly-discovered friendship degree related to the issue of space: The number of friends according to distance follows a double power-law (i.e., a double Pareto law) distribution, indicating that the probability of befriending a particular Twitter user is significantly reduced beyond a certain geographic distance between users, termed the separation point. Our analysis provides concrete evidence that Twitter can be a useful platform for assigning a more accurate scalar value to the degree of friendship between two users.
To understand the nature of friendships online with respect to geographic distance, some efforts have focused on users' online profiles that include their city of residence @cite_8 @cite_32 . In @cite_8 , experimental results based on the LiveJournal social network (https://www.livejournal.com) demonstrated a close relationship between geographic distance and the probability distribution of friendship, where the probability of befriending a particular user on LiveJournal is inversely proportional to a positive power of the number of closer users. Contrary to @cite_8 , based on the data collected from Tuenti (https://www.tuenti.com), a Spanish social networking service, it was found in @cite_32 that social interactions online are only weakly affected by spatial proximity, with other factors dominating.
{ "cite_N": [ "@cite_32", "@cite_8" ], "mid": [ "2086567932", "2162450625" ], "abstract": [ "Online friendship connections are often not representative of social relationships or shared interest between users, but merely provide a public display of personal identity. A better picture of online social behaviour can be achieved by taking into account the intensity of communication levels between users, yielding useful insights for service providers supporting this communication. Among the several factors impacting user interactions, geographic distance might be affecting how users communicate with their friends. While spatial proximity appears influencing how people connect to each other even on the Web, the relationship between social interaction and spatial distance remains unexplored. In this work we analyse the relationship between online user interactions and geographic proximity with a detailed study of a large Spanish online social service. Our results show that while geographic distance strongly affects how social links are created, spatial proximity plays a negligible role on user interactions. These findings offer new insights on the interplay between social and spatial factors influencing online user behaviour and open new directions for future research and applications.", "We live in a “small world,” where two arbitrary people are likely connected by a short chain of intermediate friends. With scant information about a target individual, people can successively forward a message along such a chain. Experimental studies have verified this property in real social networks, and theoretical models have been advanced to explain it. However, existing theoretical models have not been shown to capture behavior in real-world social networks. Here, we introduce a richer model relating geography and social-network friendship, in which the probability of befriending a particular person is inversely proportional to the number of closer people. 
In a large social network, we show that one-third of the friendships are independent of geography and the remainder exhibit the proposed relationship. Further, we prove analytically that short chains can be discovered in every network exhibiting the relationship." ] }
1507.02206
2950657666
Studies on friendships in online social networks involving geographic distance have so far relied on the city location provided in users' profiles. Consequently, most of the research on friendships have provided accuracy at the city level, at best, to designate a user's location. This study analyzes a Twitter dataset because it provides the exact geographic distance between corresponding users. We start by introducing a strong definition of "friend" on Twitter (i.e., a definition of bidirectional friendship), requiring bidirectional communication. Next, we utilize geo-tagged mentions delivered by users to determine their locations, where "@username" is contained anywhere in the body of tweets. To provide analysis results, we first introduce a friend counting algorithm. From the fact that Twitter users are likely to post consecutive tweets in the static mode, we also introduce a two-stage distance estimation algorithm. As the first of our main contributions, we verify that the number of friends of a particular Twitter user follows a well-known power-law distribution (i.e., a Zipf's distribution or a Pareto distribution). Our study also provides the following newly-discovered friendship degree related to the issue of space: The number of friends according to distance follows a double power-law (i.e., a double Pareto law) distribution, indicating that the probability of befriending a particular Twitter user is significantly reduced beyond a certain geographic distance between users, termed the separation point. Our analysis provides concrete evidence that Twitter can be a useful platform for assigning a more accurate scalar value to the degree of friendship between two users.
However, the effect of distance on online social interactions has not yet been fully understood. In the previous studies, the geographic location points only to the location of users at a city scale. For this reason, the friendship degree distribution contains a background probability that is independent of geography due to the city-scale resolution @cite_8 @cite_32 . On the other hand, geo-located Twitter can provide high-precision location information down to 10 meters through the Global Positioning System (GPS) interface @cite_17 of users' smartphones while offering comprehensive metadata with a gigantic sample of the whole population.
{ "cite_N": [ "@cite_17", "@cite_32", "@cite_8" ], "mid": [ "", "2086567932", "2162450625" ], "abstract": [ "", "Online friendship connections are often not representative of social relationships or shared interest between users, but merely provide a public display of personal identity. A better picture of online social behaviour can be achieved by taking into account the intensity of communication levels between users, yielding useful insights for service providers supporting this communication. Among the several factors impacting user interactions, geographic distance might be affecting how users communicate with their friends. While spatial proximity appears influencing how people connect to each other even on the Web, the relationship between social interaction and spatial distance remains unexplored. In this work we analyse the relationship between online user interactions and geographic proximity with a detailed study of a large Spanish online social service. Our results show that while geographic distance strongly affects how social links are created, spatial proximity plays a negligible role on user interactions. These findings offer new insights on the interplay between social and spatial factors influencing online user behaviour and open new directions for future research and applications.", "We live in a “small world,” where two arbitrary people are likely connected by a short chain of intermediate friends. With scant information about a target individual, people can successively forward a message along such a chain. Experimental studies have verified this property in real social networks, and theoretical models have been advanced to explain it. However, existing theoretical models have not been shown to capture behavior in real-world social networks. Here, we introduce a richer model relating geography and social-network friendship, in which the probability of befriending a particular person is inversely proportional to the number of closer people. In a large social network, we show that one-third of the friendships are independent of geography and the remainder exhibit the proposed relationship. Further, we prove analytically that short chains can be discovered in every network exhibiting the relationship." ] }
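The rank-based friendship model in the last abstract above (the probability of befriending someone is inversely proportional to the number of closer people) is easy to sketch in code. The point set and the normalization below are illustrative assumptions, not data or code from the cited work.

```python
import numpy as np

def rank_based_link_probs(points, u):
    """Rank-based friendship sketch: P(u befriends v) ~ 1 / rank_u(v),
    where rank_u(v) counts the people closer to u than v is."""
    d = np.linalg.norm(points - points[u], axis=1)
    order = np.argsort(d)                   # u itself comes first (distance 0)
    rank = np.empty(len(points))
    rank[order] = np.arange(len(points))    # rank 0 is u itself
    p = np.where(rank > 0, 1.0 / np.maximum(rank, 1.0), 0.0)
    return p / p.sum()                      # normalize into a probability vector

# three people on a line: the nearest neighbour is the likeliest friend
pts = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 0.0]])
probs = rank_based_link_probs(pts, 0)
print(probs)  # nearest point gets the largest probability
```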
1507.01708
2952971359
Regular path query languages for data graphs are essentially untyped. The lack of type information greatly limits the optimization opportunities for query engines and makes application development more complex. In this paper we discuss a simple, yet expressive, schema language for edge-labelled data graphs. This schema language is then used to define a query type inference approach with good precision properties.
TSL is the schema language of Trinity @cite_8 , a main-memory graph processing system based on the Microsoft ecosystem. By means of a TSL script, which is compiled into .NET object code, it is possible to specify the structure of nodes, which can have richly defined values (e.g., those required by BFS and DFS algorithms), as well as the types of outgoing edges; however, there appears to be no way to describe constraints on incoming edges, which can therefore have any cardinality.
{ "cite_N": [ "@cite_8" ], "mid": [ "2160459668" ], "abstract": [ "Computations performed by graph algorithms are data driven, and require a high degree of random data access. Despite the great progresses made in disk technology, it still cannot provide the level of efficient random access required by graph computation. On the other hand, memory-based approaches usually do not scale due to the capacity limit of single machines. In this paper, we introduce Trinity, a general purpose graph engine over a distributed memory cloud. Through optimized memory management and network communication, Trinity supports fast graph exploration as well as efficient parallel computing. In particular, Trinity leverages graph access patterns in both online and offline computation to optimize memory and communication for best performance. These enable Trinity to support efficient online query processing and offline analytics on large graphs with just a few commodity machines. Furthermore, Trinity provides a high level specification language called TSL for users to declare data schema and communication protocols, which brings great ease-of-use for general purpose graph management and computing. Our experiments show Trinity's performance in both low latency graph queries as well as high throughput graph analytics on web-scale, billion-node graphs." ] }
1507.01708
2952971359
Regular path query languages for data graphs are essentially untyped. The lack of type information greatly limits the optimization opportunities for query engines and makes application development more complex. In this paper we discuss a simple, yet expressive, schema language for edge-labelled data graphs. This schema language is then used to define a query type inference approach with good precision properties.
ShEx @cite_11 is a schema language for RDF data. As in TSL, in ShEx it is possible to describe complex node structures and, unlike in TSL, outgoing edges can be defined by using regular expressions. However, just as in TSL, there is no way to specify constraints on incoming edges. This means that, for instance, in a schema describing cars and car owners, one can impose the constraint that a single person can own at most @math cars, but not the constraint that a car can have a single owner at a time. This makes it impossible to define empty ShEx schemas, but it also limits the expressivity of the language.
{ "cite_N": [ "@cite_11" ], "mid": [ "2167961018" ], "abstract": [ "We study the expressiveness and complexity of Shape Expression Schema (ShEx), a novel schema formalism for RDF currently under development by W3C. ShEx assigns types to the nodes of an RDF graph and allows to constrain the admissible neighborhoods of nodes of a given type with regular bag expressions (RBEs). We formalize and investigate two alternative semantics, multi-and single-type, depending on whether or not a node may have more than one type. We study the expressive power of ShEx and study the complexity of the validation problem. We show that the single-type semantics is strictly more expressive than the multi-type semantics, single-type validation is generally intractable and multi-type validation is feasible for a small (yet practical) subclass of RBEs. To curb the high computational complexity of validation, we propose a natural notion of determinism and show that multi-type validation for the class of deterministic schemas using single-occurrence regular bag expressions (SORBEs) is tractable." ] }
1507.01191
2952926873
The central result of classical game theory states that every finite normal form game has a Nash equilibrium, provided that players are allowed to use randomized (mixed) strategies. However, in practice, humans are known to be bad at generating random-like sequences, and true random bits may be unavailable. Even if the players have access to enough random bits for a single instance of the game their randomness might be insufficient if the game is played many times. In this work, we ask whether randomness is necessary for equilibria to exist in finitely repeated games. We show that for a large class of games containing arbitrary two-player zero-sum games, approximate Nash equilibria of the @math -stage repeated version of the game exist if and only if both players have @math random bits. In contrast, we show that there exists a class of games for which no equilibrium exists in pure strategies, yet the @math -stage repeated version of the game has an exact Nash equilibrium in which each player uses only a constant number of random bits. When the players are assumed to be computationally bounded, if cryptographic pseudorandom generators (or, equivalently, one-way functions) exist, then the players can base their strategies on "random-like" sequences derived from only a small number of truly random bits. We show that, in contrast, in repeated two-player zero-sum games, if pseudorandom generators exist, then @math random bits remain necessary for equilibria to exist.
In one of the first works to consider the relation between the randomness available to players and the existence of equilibria, Halpern and Pass @cite_16 introduced a computational framework of machine games that explicitly incorporates the cost of computation into the utility functions of the players, and specifically the possibility that randomness is expensive. They demonstrated this approach on the game of Rock-Paper-Scissors, and showed that in machine games where randomization is costly, Nash equilibria do not necessarily exist, whereas in machine games where randomization is free, Nash equilibria always exist.
{ "cite_N": [ "@cite_16" ], "mid": [ "2153807360" ], "abstract": [ "We develop a general game-theoretic framework for reasoning about strategic agents performing possibly costly computation. In this framework, many traditional game-theoretic results (such as the existence of a Nash equilibrium) no longer hold. Nevertheless, we can use the framework to provide psychologically appealing explanations of observed behavior in well-studied games (such as finitely repeated prisoner's dilemma and rock–paper–scissors). Furthermore, we provide natural conditions on games sufficient to guarantee that equilibria exist." ] }
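Rock-Paper-Scissors makes the role of randomization concrete. The brute-force check below (a minimal sketch, independent of the machine-game formalism of @cite_16 ) confirms that the game has no pure-strategy Nash equilibrium, while the uniform mixed strategy equalizes all payoffs and is therefore an equilibrium.

```python
import itertools
import numpy as np

# Row player's payoff in zero-sum Rock-Paper-Scissors
P = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])

def is_pure_nash(i, j):
    # neither player can gain by deviating unilaterally
    row_best = P[:, j].max()        # best row response to column j
    col_best = (-P[i, :]).max()     # best column response to row i
    return P[i, j] >= row_best and -P[i, j] >= col_best

pure_eq = [s for s in itertools.product(range(3), repeat=2) if is_pure_nash(*s)]
print(pure_eq)  # -> [] : no equilibrium without randomization

# Against the uniform mix every action has expected payoff 0,
# so (uniform, uniform) is a mixed Nash equilibrium.
uniform = np.full(3, 1 / 3)
print(P @ uniform)  # -> [0. 0. 0.]
```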
1507.01191
2952926873
The central result of classical game theory states that every finite normal form game has a Nash equilibrium, provided that players are allowed to use randomized (mixed) strategies. However, in practice, humans are known to be bad at generating random-like sequences, and true random bits may be unavailable. Even if the players have access to enough random bits for a single instance of the game their randomness might be insufficient if the game is played many times. In this work, we ask whether randomness is necessary for equilibria to exist in finitely repeated games. We show that for a large class of games containing arbitrary two-player zero-sum games, approximate Nash equilibria of the @math -stage repeated version of the game exist if and only if both players have @math random bits. In contrast, we show that there exists a class of games for which no equilibrium exists in pure strategies, yet the @math -stage repeated version of the game has an exact Nash equilibrium in which each player uses only a constant number of random bits. When the players are assumed to be computationally bounded, if cryptographic pseudorandom generators (or, equivalently, one-way functions) exist, then the players can base their strategies on "random-like" sequences derived from only a small number of truly random bits. We show that, in contrast, in repeated two-player zero-sum games, if pseudorandom generators exist, then @math random bits remain necessary for equilibria to exist.
Based on derandomization techniques, Kalyanaraman and Umans @cite_3 proposed randomness-efficient algorithms both for finding equilibria and for playing strategic games. In the context of finitely repeated two-player zero-sum games where one of the players (referred to as the learner) is uninformed of the payoff matrix, they gave an adaptive on-line algorithm for the learner that can reuse randomness over the stages of the repeated game.
{ "cite_N": [ "@cite_3" ], "mid": [ "1564454755" ], "abstract": [ "We study multiplayer games in which the participants have access to only limited randomness. This constrains both the algorithms used to compute equilibria (they should use little or no randomness) as well as the mixed strategies that the participants are capable of playing (these should be sparse). We frame algorithmic questions that naturally arise in this setting, and resolve several of them." ] }
1507.01441
749411540
It is known that in various random matrix models, large perturbations create outlier eigenvalues which lie, asymptotically, in the complement of the support of the limiting spectral density. This thesis studies fluctuations of these outlier eigenvalues of iid matrices @math under bounded rank and bounded operator norm perturbations @math , namely the fluctuations @math . The perturbations @math that we consider belong to a large class, where we allow for arbitrary Jordan types and almost minimal assumptions on the left and right eigenvectors. We obtain the joint convergence of the normalized asymptotic fluctuations of the outlier eigenvalues in this setting with a unified approach.
In @cite_17 , the outlier eigenvalues of perturbations of the single ring model are studied, and their locations and limiting fluctuations are obtained (Theorem 2.9 of @cite_17 ) for finite-rank and finite-operator-norm perturbations of arbitrary Jordan type. Note that the special case of the Ginibre ensemble, which is an iid matrix, is contained in this model as well. Our approach to dealing with perturbations of various Jordan types is similar and relies on a deterministic perturbation result known as the Lidskii-Vishik-Lyusternik perturbation theorem (see @cite_15 , @cite_14 , @cite_1 and references therein), which we have reproduced in the Appendix.
{ "cite_N": [ "@cite_15", "@cite_14", "@cite_1", "@cite_17" ], "mid": [ "", "2060581589", "2015321063", "2951358417" ], "abstract": [ "", "In this paper we study the distribution of eigenvalues for two sets of random Hermitian matrices and one set of random unitary matrices. The statement of the problem as well as its method of investigation go back originally to the work of Dyson [i] and I. M. Lifsic [2], [3] on the energy spectra of disordered systems, although in their probability character our sets are more similar to sets studied by Wigner [4]. Since the approaches to the sets we consider are the same, we present in detail only the most typical case. The corresponding results for the other two cases are presented without proof in the last section of the paper. §1. Statement of the problem and survey of results We shall consider as acting in N-dimensional unitary space v, a selfadjoint operator BN (re) of the form", "Let A be a complex matrix with arbitrary Jordan structure and @math an eigenvalue of A whose largest Jordan block has size n. We review previous results due to Lidskii [U.S.S. R. Comput. Math. and Math. Phys., 1 (1965), pp. 73--85], showing that the splitting of @math under a small perturbation of A of order @math is, generically, of order @math . Explicit formulas for the leading coefficients are obtained, involving the perturbation matrix and the eigenvectors of A. We also present an alternative proof of Lidskii's main theorem, based on the use of the Newton diagram. This approach clarifies certain difficulties which arise in the nongeneric case and leads, in some situations, to the extension of Lidskii's results. These results suggest a new notion of Holder condition number for multiple eigenvalues, depending only on the associated left and right eigenvectors, appropriately normalized, not on the Jordan vectors.", "This text is about spiked models of non Hermitian random matrices. More specifically, we consider matrices of the type @math , where the rank of @math stays bounded as the dimension goes to infinity and where the matrix @math is a non Hermitian random matrix, satisfying an isotropy hypothesis: its distribution is invariant under the left and right actions of the unitary group. The macroscopic eigenvalue distribution of such matrices is governed by the so called Single Ring Theorem, due to Guionnet, Krishnapur and Zeitouni. We first prove that if @math has some eigenvalues out of the maximal circle of the single ring, then @math has some eigenvalues (called outliers) in the neighborhood of those of @math , which is not the case for the eigenvalues of @math in the inner cycle of the single ring. Then, we study the fluctuations of the outliers of @math around the eigenvalues of @math and prove that they are distributed as the eigenvalues of some finite dimensional random matrices. Such facts had already been noticed for Hermitian models. More surprising facts are that outliers can here have very various rates of convergence to their limits (depending on the Jordan Canonical Form of @math ) and that some correlations can appear between outliers at a macroscopic distance from each other (a fact already noticed by Knowles and Yin in the Hermitian case, but only in the case of non Gaussian models, whereas spiked Gaussian matrices belong to our model and can have such correlated outliers). Our first result generalizes a previous result by Tao for matrices with i.i.d. entries, whereas the second one (about the fluctuations) is new." ] }
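The outlier phenomenon discussed above can be observed numerically: perturbing a normalized iid matrix by a rank-one matrix with an eigenvalue outside the unit disk produces a single eigenvalue near that value, while the bulk spectrum stays close to the unit disk predicted by the circular law. The matrix size, seed, and thresholds below are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
Y = rng.standard_normal((n, n)) / np.sqrt(n)   # iid matrix; bulk ~ unit disk

theta = 3.0
A = np.zeros((n, n))
A[0, 0] = theta                                # rank-one perturbation, eigenvalue 3

eig = np.linalg.eigvals(Y + A)
outliers = eig[np.abs(eig) > 2.0]              # eigenvalues well outside the bulk
print(len(outliers), outliers)                 # a single eigenvalue close to theta
```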
1507.01441
749411540
It is known that in various random matrix models, large perturbations create outlier eigenvalues which lie, asymptotically, in the complement of the support of the limiting spectral density. This thesis studies fluctuations of these outlier eigenvalues of iid matrices @math under bounded rank and bounded operator norm perturbations @math , namely the fluctuations @math . The perturbations @math that we consider belong to a large class, where we allow for arbitrary Jordan types and almost minimal assumptions on the left and right eigenvectors. We obtain the joint convergence of the normalized asymptotic fluctuations of the outlier eigenvalues in this setting with a unified approach.
In @cite_4 , Bordenave and Capitaine study asymptotic outlier locations and fluctuations for perturbed iid matrices. The perturbations considered there are of the form @math where @math is of bounded rank and @math (with possibly unbounded rank) satisfies a well-conditioning property. In the case of local perturbations, where @math has a finite nonzero block @math at the top-left, Theorems 1.7 and 1.8 of @cite_4 obtain the limiting normalized outlier fluctuations when @math and when @math , under the hypothesis of bounded fourth moments.
{ "cite_N": [ "@cite_4" ], "mid": [ "1540506616" ], "abstract": [ "We consider a square random matrix of size N of the form A + Y where A is deterministic and Y has iid entries with variance 1 N. Under mild assumptions, as N grows, the empirical distribution of the eigenvalues of A+Y converges weakly to a limit probability measure on the complex plane. This work is devoted to the study of the outlier eigenvalues, i.e. eigenvalues in the complement of the support of . Even in the simplest cases, a variety of interesting phenomena can occur. As in earlier works, we give a sufficient condition to guarantee that outliers are stable and provide examples where their fluctuations vary with the particular distribution of the entries of Y or the Jordan decomposition of A. We also exhibit concrete examples where the outlier eigenvalues converge in distribution to the zeros of a Gaussian analytic function." ] }
1507.01443
768237947
The problem of merging databases arises in many government and commercial applications. Schema matching, a common first step, identifies equivalent fields between databases. We introduce a schema matching framework that builds nonparametric Bayesian models for each field and compares them by computing the probability that a single model could have generated both fields. Our experiments show that our method is more accurate and faster than the existing instance-based matching algorithms in part because of the use of nonparametric Bayesian models.
A majority of previous work has been devoted to using metadata for matching fields @cite_4 . These methods include exact and inexact @cite_21 matching of field names, synonym-based matching @cite_9 , and other language-based analyses @cite_12 . These methods assume coherence between field names across data sets and are likely to perform poorly if the same data is called, say, Customer Name in one data set and Guest ID in another.
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_4", "@cite_12" ], "mid": [ "1555156738", "2122604280", "2406114359", "2139135093" ], "abstract": [ "Automating semantic matching of attributes for the purpose of information integration is challenging, and the dynamics of the Web further exacerbate this problem. Believing that many facets of metadata can contribute to a resolution, we present a framework for multifaceted exploitation of metadata in which we gather information about potential matches from various facets of metadata and combine this information to generate and place confidence values on potential attribute matches. To make the framework apply in the highly dynamic Web environment, we base our process largely on machine learning. Experiments we have conducted are encouraging, showing that when the combination of facets converges as expected, the results are highly reliable.", "Most data integration applications require a matching between the schemas of the respective data sets. We show how the existence of duplicates within these data sets can be exploited to automatically identify matching attributes. We describe an algorithm that first discovers duplicates among data sets with unaligned schemas and then uses these duplicates to perform schema matching between schemas with opaque column names. Discovering duplicates among data sets with unaligned schemas is more difficult than in the usual setting, because it is not clear which fields in one object should be compared with which fields in the other. We have developed a new algorithm that efficiently finds the most likely duplicates in such a setting. Now, our schema matching algorithm is able to identify corresponding attributes by comparing data values within those duplicate records. An experimental study on real-world data shows the effectiveness of this approach.", "In a paper published in the 2001 VLDB Conference, we proposed treating generic schema matching as an independent problem. We developed a taxonomy of existing techniques, a new schema matching algorithm, and an approach to comparative evaluation. Since then, the field has grown into a major research topic. We briefly summarize the new techniques that have been developed and applications of the techniques in the commercial world. We conclude by discussing future trends and recommendations for further work.", "Schema matching is a critical step in many applications, such as XML message mapping, data warehouse loading, and schema integration. In this paper, we investigate algorithms for generic schema matching, outside of any particular data model or application. We first present a taxonomy for past solutions, showing that a rich range of techniques is available. We then propose a new algorithm, Cupid, that discovers mappings between schema elements based on their names, data types, constraints, and schema structure, using a broader set of techniques than past approaches. Some of our innovations are the integrated use of linguistic and structural matching, context-dependent matching of shared types, and a bias toward leaf structure where much of the schema content resides. After describing our algorithm, we present experimental results that compare Cupid to two other schema matching systems." ] }
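A toy version of the name-based techniques summarized above, combining exact matching, inexact string-similarity matching, and a synonym table. The synonym entries are made-up examples; real systems rely on curated thesauri and deeper linguistic analysis.

```python
from difflib import SequenceMatcher

def name_match_score(a, b, synonyms=None):
    """Field-name matching sketch: exact hit, synonym hit, or edit similarity."""
    a, b = a.lower().strip(), b.lower().strip()
    if a == b or (synonyms and b in synonyms.get(a, set())):
        return 1.0
    return SequenceMatcher(None, a, b).ratio()

SYN = {"customer name": {"guest id"}}   # hand-curated synonym table (made up)

print(name_match_score("Customer Name", "Cust_Name"))      # high: inexact match
print(name_match_score("Customer Name", "Guest ID"))       # low without synonyms
print(name_match_score("Customer Name", "Guest ID", SYN))  # 1.0 via synonym table
```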
1507.01443
768237947
The problem of merging databases arises in many government and commercial applications. Schema matching, a common first step, identifies equivalent fields between databases. We introduce a schema matching framework that builds nonparametric Bayesian models for each field and compares them by computing the probability that a single model could have generated both fields. Our experiments show that our method is more accurate and faster than the existing instance-based matching algorithms in part because of the use of nonparametric Bayesian models.
More recently, especially as researchers have shifted their focus from databases to ontologies, additional emphasis has been placed on exploiting the relationships among fields (also called concepts in the ontology context), such as is-a and has-a relationships. Because these methods are applied to expert-developed ontologies (e.g., different anatomy ontologies), there are generally only a few available instances for each field. Methods exist to leverage known matched instances for schema matching @cite_20 . Such matched pairs provide a significant advantage in finding schema matches. In many applications, including typical cross-organizational data integration efforts, the existence of common referents cannot be assumed. Furthermore, even if such common referents exist, finding them is itself a highly challenging research problem. Our method does not depend on having coreferents.
{ "cite_N": [ "@cite_20" ], "mid": [ "2144607742" ], "abstract": [ "A common task in many database applications is the migration of legacy data from multiple sources into a new one. This requires identifying semantically related elements of the source and target systems and the creation of mapping expressions to transform instances of those elements from the source format to the target format. Currently, data migration is typically done manually, a tedious and timeconsuming process, which is difficult to scale to a high number of data sources. In this paper, we describe QuickMig, a new semi-automatic approach to determining semantic correspondences between schema elements for data migration applications. QuickMig advances the state of the art with a set of new techniques exploiting sample instances, domain ontologies, and reuse of existing mappings to detect not only element correspondences but also their mapping expressions. QuickMig further includes new mechanisms to effectively incorporate domain knowledge of users into the matching process. The results from a comprehensive evaluation using real-world schemas and data indicate the high quality and practicability of the overall approach." ] }
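The model-comparison idea in the abstract above (fit a model per field, then ask whether one model could have generated both fields) can be sketched with a Dirichlet-multinomial marginal likelihood standing in for the paper's nonparametric field models; the example fields and the smoothing parameter are illustrative assumptions.

```python
import math
from collections import Counter

def log_marginal(values, vocab, alpha=1.0):
    """Log marginal likelihood of a categorical field under a symmetric
    Dirichlet(alpha) prior (a parametric stand-in for a nonparametric model)."""
    counts = Counter(values)
    lp = math.lgamma(len(vocab) * alpha) - math.lgamma(len(vocab) * alpha + len(values))
    for w in vocab:
        lp += math.lgamma(alpha + counts[w]) - math.lgamma(alpha)
    return lp

def match_score(f1, f2, alpha=1.0):
    # log Bayes factor: one shared model for both fields vs two independent models
    vocab = sorted(set(f1) | set(f2))
    joint = log_marginal(f1 + f2, vocab, alpha)
    return joint - log_marginal(f1, vocab, alpha) - log_marginal(f2, vocab, alpha)

colors_a = ["red", "blue", "red", "green"] * 5
colors_b = ["blue", "red", "green", "red"] * 5   # same distribution: likely a match
pets     = ["cat", "dog"] * 10                   # disjoint values: not a match

print(match_score(colors_a, colors_b))  # positive: the shared model wins
print(match_score(colors_a, pets))      # strongly negative
```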
1507.01490
1946850312
Closeness is an important centrality measure widely used in the analysis of real-world complex networks. In particular, the problem of selecting the k most central nodes with respect to this measure has been deeply analyzed in the last decade. However, even for not very large networks, this problem is computationally intractable in practice: indeed, it has recently been shown that its complexity is strictly related to the complexity of the All-Pairs Shortest Path (in short, APSP) problem, for which no subcubic "combinatorial" algorithm is known. In this paper, we propose a new algorithm for selecting the k most closeness-central nodes in a graph. In practice, this algorithm significantly improves over the APSP approach, even though its worst-case time complexity is the same. For example, the algorithm is able to compute the top k nodes in a few dozen seconds even when applied to real-world networks with millions of nodes and edges. We will also experimentally show that our algorithm drastically outperforms the most recently designed competing algorithm. Finally, we apply the new algorithm to the computation of the most central actors in the IMDB collaboration network, where two actors are linked if they played together in a movie.
Other approaches have tried to develop incremental algorithms, which may be better suited to real-world network analyses. For instance, in @cite_0 , the authors develop heuristics to determine the @math most central vertices in a varying environment. A different work addressed the problem of updating centralities after edge insertions or deletions @cite_12 : for instance, it is shown that it is possible to update the closeness centrality of @math million authors in the DBLP-coauthorship network @math times faster than recomputing it from scratch.
{ "cite_N": [ "@cite_0", "@cite_12" ], "mid": [ "2117260657", "2048468623" ], "abstract": [ "A well known way to find the most central nodes in a network consists of coupling random walk sampling (or one of its variants) with a method to identify the most central nodes in the subgraph induced by the samples. Although it is commonly assumed that degree information is collected during the sampling step, in previous works this information has not been used at the identification step [10], [18]. In this paper, we showed that using degree information at the identification step in a very naive way, namely setting the degree as an alias to other centrality metrics, yields promising results.", "Centrality metrics have shown to be highly correlated with the importance and loads of the nodes within the network traffic. In this work, we provide fast incremental algorithms for closeness centrality computation. Our algorithms efficiently compute the closeness centrality values upon changes in network topology, i.e., edge insertions and deletions. We show that the proposed techniques are efficient on many real-life networks, especially on small-world networks, which have a small diameter and spike-shaped shortest distance distribution. We experimentally validate the efficiency of our algorithms on large-scale networks and show that they can update the closeness centrality values of 1.2 million authors in the temporal DBLP-coauthorship network 460 times faster than it would take to recompute them from scratch." ] }
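For reference, the APSP-style baseline that the works above improve on, one BFS per node followed by ranking, takes only a few lines for unweighted graphs (a sketch of the baseline, not of the paper's algorithm):

```python
from collections import deque

def closeness(adj, s):
    # BFS from s; closeness = (nodes reached - 1) / sum of shortest-path distances
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

def top_k_closeness(adj, k):
    scores = {v: closeness(adj, v) for v in adj}   # n BFS runs: the APSP-style baseline
    return sorted(scores, key=scores.get, reverse=True)[:k]

# star graph: the hub is the unique most central node
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(top_k_closeness(star, 1))  # -> [0]
```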
1507.01196
2950454061
Deterministic constructions of expander graphs have been an important topic of research in computer science and mathematics, with many well-studied constructions of infinite families of expanders. In some applications, though, an infinite family is not enough: we need expanders which are "close" to each other. We study the following question: Construct an infinite sequence of expanders @math , such that for every two consecutive graphs @math and @math , @math can be obtained from @math by adding a single vertex and inserting or removing a small number of edges, which we call the expansion cost of transitioning from @math to @math . This question is very natural, e.g., in the context of datacenter networks, where the vertices represent racks of servers, and the expansion cost captures the amount of rewiring needed when adding another rack to the network. We present an explicit construction of @math -regular expanders with expansion cost at most @math , for any @math . Our construction leverages the notion of a "2-lift" of a graph. This operation was first analyzed by Bilu and Linial, who repeatedly applied 2-lifts to construct an infinite family of expanders which double in size from one expander to the next. Our construction can be viewed as a way to "interpolate" between Bilu-Linial expanders with low expansion cost while preserving good edge expansion throughout. While our main motivation is centralized (datacenter networks), we also get the best-known distributed expander construction in the "self-healing" model.
The deterministic explicit construction of expanders is a prominent research area in both mathematics and computer science; see the survey of Hoory, Linial, and Wigderson @cite_2 . Our approach relies on the seminal paper of Bilu and Linial @cite_0 , which proposed and studied the notion of @math -lifting a graph. They proved that when starting with any "good" expander, a random @math -lift results in another good expander and, moreover, that this can be derandomized. Thus @cite_0 provides a means to deterministically construct an infinite sequence of expanders: start with a good expander and repeatedly 2-lift. All expanders in this sequence are proven to be quasi-Ramanujan graphs, and are conjectured to be Ramanujan graphs (i.e., to have optimal spectral expansion). Marcus, Spielman, and Srivastava @cite_4 recently showed that this is indeed essentially true in the bipartite case.
{ "cite_N": [ "@cite_0", "@cite_4", "@cite_2" ], "mid": [ "2962992640", "2950953939", "" ], "abstract": [ "Let G be a graph on n vertices. A 2-lift of G is a graph H on 2n vertices, with a covering map � : H → G. It is not hard to see that all eigenvalues of G are also eigenvalues of H. In addition, H has n “new” eigenvalues. We conjecture that every d-regular graph has a 2lift such that all new eigenvalues are in the range [−2 √ d − 1,2 √ d − 1] (If true, this is tight , e.g. by the Alon-Boppana bound). Here we show that every graph of maximal degree d has a 2-lift such that all", "We prove that there exist infinite families of regular bipartite Ramanujan graphs of every degree bigger than 2. We do this by proving a variant of a conjecture of Bilu and Linial about the existence of good 2-lifts of every graph. We also establish the existence of infinite families of irregular Ramanujan' graphs, whose eigenvalues are bounded by the spectral radius of their universal cover. Such families were conjectured to exist by Linial and others. In particular, we prove the existence of infinite families of (c,d)-biregular bipartite graphs with all non-trivial eigenvalues bounded by sqrt c-1 +sqrt d-1 , for all c, d 3. Our proof exploits a new technique for demonstrating the existence of useful combinatorial objects that we call the \"method of interlacing polynomials'\".", "" ] }
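The 2-lift operation underlying the Bilu-Linial construction is simple to state: duplicate every vertex and, for each edge, either keep the two parallel copies or cross them. The sketch below picks the signs at random; the content of @cite_0 is that a good signing exists and can be found deterministically.

```python
import random

def two_lift(edges, seed=0):
    """2-lift of a graph: vertex u becomes (u,0),(u,1); each edge gets a sign.
    Sign +1 keeps the two parallel copies of the edge, sign -1 crosses them."""
    rng = random.Random(seed)
    lifted = []
    for u, v in edges:
        if rng.choice([1, -1]) == 1:
            lifted += [((u, 0), (v, 0)), ((u, 1), (v, 1))]
        else:
            lifted += [((u, 0), (v, 1)), ((u, 1), (v, 0))]
    return lifted

# 3-regular K4 lifts to a 3-regular graph on 8 vertices (double the size)
K4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
H = two_lift(K4)
deg = {}
for a, b in H:
    deg[a] = deg.get(a, 0) + 1
    deg[b] = deg.get(b, 0) + 1
print(len(deg), set(deg.values()))  # -> 8 {3}
```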
1507.01625
2618912166
Cryptographic protocols, such as protocols for secure function evaluation (SFE), have played a crucial role in the development of modern cryptography. The extensive theory of these protocols, however, deals almost exclusively with classical attackers. If we accept that quantum information processing is the most realistic model of physically feasible computation, then we must ask: What classical protocols remain secure against quantum attackers? Our main contribution is showing the existence of classical two-party protocols for the secure evaluation of any polynomial-time function under reasonable computational assumptions (for example, it suffices that the learning with errors problem be hard for quantum polynomial time). Our result shows that the basic two-party feasibility picture from classical cryptography remains unchanged in a quantum world.
Composition Frameworks for Quantum Protocols. Systematic investigations of the composition properties of quantum protocols are relatively recent. Canetti's UC framework and Pfitzmann and Waidner's closely related framework were extended to the world of quantum protocols and adversaries by Ben-Or and Mayers @cite_12 and Unruh @cite_56 @cite_43 . These frameworks (which share similar semantics) provide extremely strong guarantees---security in arbitrary network environments. They were used to analyze a number of unconditionally secure quantum protocols (key exchange @cite_62 and multi-party computation with honest majorities @cite_21 ). However, many protocols are not universally composable, and Canetti @cite_29 showed that classical protocols cannot UC-securely realize even basic tasks such as commitment and zero-knowledge proofs without some additional setup assumptions such as a CRS or public-key infrastructure.
{ "cite_N": [ "@cite_62", "@cite_29", "@cite_21", "@cite_56", "@cite_43", "@cite_12" ], "mid": [ "1787342803", "1499934958", "2076168063", "", "1867273832", "1588798039" ], "abstract": [ "The existing unconditional security definitions of quantum key distribution (QKD) do not apply to joint attacks over QKD and the subsequent use of the resulting key. In this paper, we close this potential security gap by using a universal composability theorem for the quantum setting. We first derive a composable security definition for QKD. We then prove that the usual security definition of QKD still implies the composable security definition. Thus, a key produced in any QKD protocol that is unconditionally secure in the usual definition can indeed be safely used, a property of QKD that is hitherto unproven. We propose two other useful sufficient conditions for composability. As a simple application of our result, we show that keys generated by repeated runs of QKD degrade slowly.", "We propose a novel paradigm for defining security of cryptographic protocols, called universally composable security. The salient property of universally composable definitions of security is that they guarantee security even when a secure protocol is composed of an arbitrary set of protocols, or more generally when the protocol is used as a component of an arbitrary system. This is an essential property for maintaining security of cryptographic protocols in complex and unpredictable environments such as the Internet. In particular, universally composable definitions guarantee security even when an unbounded number of protocol instances are executed concurrently in an adversarially controlled manner, they guarantee non-malleability with respect to arbitrary protocols, and more. We show how to formulate universally composable definitions of security for practically any cryptographic task. 
Furthermore, we demonstrate that practically any such definition can be realized using known techniques, as long as only a minority of the participants are corrupted. We then proceed to formulate universally composable definitions of a wide array of cryptographic tasks, including authenticated and secure communication, key-exchange, public-key encryption, signature, commitment, oblivious transfer, zero knowledge and more. We also make initial steps towards studying the realizability of the proposed definitions in various settings.", "Secret sharing and multiparty computation (also called \"secure function evaluation\") are fundamental primitives in modern cryptography, allowing a group of mutually distrustful players to perform correct, distributed computations under the sole assumption that some number of them will follow the protocol honestly. This paper investigates how much trust is necessary -- that is, how many players must remain honest -- in order for distributed quantum computations to be possible. We present a verifiable quantum secret sharing (VQSS) protocol, and a general secure multiparty quantum computation (MPQC) protocol, which can tolerate any [ n - 1 2 ] cheaters among n players. Previous protocols for these tasks tolerated [ n - 1 4 ] and [ n - 1 6 ] cheaters, respectively. The threshold we achieve is tight -- even in the classical case, \"fair\" multiparty computation is not possible if any set of n 2 players can cheat. Our protocols rely on approximate quantum errorcorrecting codes, which can tolerate a larger fraction of errors than traditional, exact codes. We introduce new families of authentication schemes and approximate codes tailored to the needs of our protocols, as well as new state purification techniques along the lines of those used in faulttolerant quantum circuits.", "", "We propose a new security measure for commitment protocols, called Universally Composable (UC) Commitment. 
The measure guarantees that commitment protocols behave like an \"ideal commitment service,\" even when concurrently composed with an arbitrary set of protocols. This is a strong guarantee: it implies that security is maintained even when an unbounded number of copies of the scheme are running concurrently, it implies non-malleability (not only with respect to other copies of the same protocol but even with respect to other protocols), it provides resilience to selective decommitment, and more. Unfortunately, two-party uc commitment protocols do not exist in the plain model. However, we construct two-party uc commitment protocols, based on general complexity assumptions, in the common reference string model where all parties have access to a common string taken from a predetermined distribution. The protocols are non-interactive, in the sense that both the commitment and the opening phases consist of a single message from the committer to the receiver.", "We generalize the universally composable definition of Canetti to the Quantum World. The basic idea is the same as in the classical world. The main contribution is that we unfold the result in a new model which is well adapted to quantum protocols. We also simplify some aspects of the classical case. In particular, the case of protocols with an arbitrary number of layers of sub-protocols is naturally covered in the proposed model." ] }
1507.01625
2618912166
Cryptographic protocols, such as protocols for secure function evaluation (SFE), have played a crucial role in the development of modern cryptography. The extensive theory of these protocols, however, deals almost exclusively with classical attackers. If we accept that quantum information processing is the most realistic model of physically feasible computation, then we must ask: What classical protocols remain secure against quantum attackers? Our main contribution is showing the existence of classical two-party protocols for the secure evaluation of any polynomial-time function under reasonable computational assumptions (for example, it suffices that the learning with errors problem be hard for quantum polynomial time). Our result shows that the basic two-party feasibility picture from classical cryptography remains unchanged in a quantum world.
Straight-Line Simulators and Code-Based Games As mentioned above, we introduce "simple hybrid arguments" to capture a class of straightforward security analyses that go through against quantum adversaries. Several formalisms have been introduced in the past to capture classes of "simple" security arguments. To our knowledge, none of them is automatically compatible with quantum adversaries. For example, the straight-line simulators of @cite_8 do not rewind the adversary nor use an explicit description of its random coins; however, it may be the case that rewinding is necessary to prove that the straight-line simulator is actually correct. In a different vein, the code-based games of Bellare and Rogaway @cite_66 capture a class of hybrid arguments that can be encoded in a clean formal language; again, however, the arguments concerning each step of the hybrid may still require rewinding.
{ "cite_N": [ "@cite_66", "@cite_8" ], "mid": [ "2167606175", "1977802313" ], "abstract": [ "We show that, in the ideal-cipher model, triple encryption (the cascade of three independently-keyed blockciphers) is more secure than single or double encryption, thereby resolving a long-standing open problem. Our result demonstrates that for DES parameters (56-bit keys and 64-bit plaintexts) an adversary's maximal advantage against triple encryption is small until it asks about 278 queries. Our proof uses code-based game-playing in an integral way, and is facilitated by a framework for such proofs that we provide.", "We investigate the question of whether the security of protocols in the information-theoretic setting (where the adversary is computationally unbounded) implies the security of these protocols under concurrent composition. This question is motivated by the folklore that all known protocols that are secure in the information-theoretic setting are indeed secure under concurrent composition. We provide answers to this question for a number of different settings (i.e., considering perfect versus statistical security, and concurrent composition with adaptive versus fixed inputs). Our results enhance the understanding of what is necessary for obtaining security under composition, as well as providing tools (i.e., composition theorems) that can be used for proving the security of protocols under composition while considering only the standard stand-alone definitions of security." ] }
1507.01082
1736664825
In the advent of the Internet, web-mediated social networking has become of great influence to Filipinos. Networking sites such as Friendster, YouTube, FaceBook and MySpace are among the most well known sites on the Internet. These sites provide a wide range of services to users from different parts of the world, such as connecting and finding people, as well as, sharing and organizing contents. The popularity and accessibility of these sites enable information to be available. These allow people to analyze and study the characteristics of the population of online social networks. In this study, we developed a computer program to analyze the structural dynamics of a locally popular social networking site: The Friendster Network. Understanding the structural dynamics of a virtual community has many implications, such as finding an improvement on the current networking system, among others. Based on our analysis, we found out that users of the site exhibit preferential attachment to users with high number of friends.
During the time when networks of movie actors were being studied, researchers had already shown great interest in structural properties of networks such as degree distributions and scale-free and small-world characteristics. This was followed by studies of other kinds of networks, such as scientific collaboration networks and networks of human sexual contacts. However, those studies were based on small-scale analyses, and it has been argued that the relationships in such networks differ from ordinary friendship relations. Only recently has the number of online social networks increased significantly, making it possible to study huge social networks directly. However, analyses of these huge networks have tended to focus only on cultural and business viewpoints @cite_1 .
{ "cite_N": [ "@cite_1" ], "mid": [ "2121761994" ], "abstract": [ "Social networking services are a fast-growing business in the Internet. However, it is unknown if online relationships and their growth patterns are the same as in real-life social networks. In this paper, we compare the structures of three online social networking services: Cyworld, MySpace, and orkut, each with more than 10 million users, respectively. We have access to complete data of Cyworld's ilchon (friend) relationships and analyze its degree distribution, clustering property, degree correlation, and evolution over time. We also use Cyworld data to evaluate the validity of snowball sampling method, which we use to crawl and obtain partial network topologies of MySpace and orkut. Cyworld, the oldest of the three, demonstrates a changing scaling behavior over time in degree distribution. The latest Cyworld data's degree distribution exhibits a multi-scaling behavior, while those of MySpace and orkut have simple scaling behaviors with different exponents. Very interestingly, each of the two e ponents corresponds to the different segments in Cyworld's degree distribution. Certain online social networking services encourage online activities that cannot be easily copied in real life; we show that they deviate from close-knit online social networks which show a similar degree correlation pattern to real-life social networks." ] }
1507.01082
1736664825
In the advent of the Internet, web-mediated social networking has become of great influence to Filipinos. Networking sites such as Friendster, YouTube, FaceBook and MySpace are among the most well known sites on the Internet. These sites provide a wide range of services to users from different parts of the world, such as connecting and finding people, as well as, sharing and organizing contents. The popularity and accessibility of these sites enable information to be available. These allow people to analyze and study the characteristics of the population of online social networks. In this study, we developed a computer program to analyze the structural dynamics of a locally popular social networking site: The Friendster Network. Understanding the structural dynamics of a virtual community has many implications, such as finding an improvement on the current networking system, among others. Based on our analysis, we found out that users of the site exhibit preferential attachment to users with high number of friends.
There have been previous studies of online social networks. The first was a study of four sites: Flickr, YouTube, Orkut and LiveJournal. The data set consisted of about 1.8 million users from Flickr, 5.2 million from LiveJournal, 3 million from Orkut, and 1.1 million from YouTube. The study showed that the structure and characteristics of these social networks differ from those of the networks mentioned earlier. It found that online social networks have more links and are highly clustered. Nodes with a high number of incoming links also tend to have a high number of outgoing links. These online social networks are composed of highly connected clusters, yet the clusters themselves are composed of nodes with low numbers of links. As a result, the clustering coefficient of a node is inversely proportional to its number of links. Although path lengths are short, most paths pass through highly connected nodes @cite_7 .
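The structural measurements described above (per-degree clustering coefficients, short average path lengths) can be reproduced with networkx. The sketch below uses a Barabási–Albert graph as a hypothetical stand-in for a crawled social network, since the original data sets are not available here; all parameters are illustrative.

```python
import networkx as nx

# Hypothetical stand-in for a crawled social network: a preferential-
# attachment (Barabasi-Albert) graph, which is scale-free by construction.
G = nx.barabasi_albert_graph(n=1000, m=3, seed=42)

# Per-node clustering coefficients, grouped by degree. The study above
# observed that clustering falls as the number of links grows.
clustering = nx.clustering(G)
by_degree = {}
for node, c in clustering.items():
    by_degree.setdefault(G.degree(node), []).append(c)
avg_by_degree = {d: sum(cs) / len(cs) for d, cs in sorted(by_degree.items())}

# A short average path length is the small-world signature mentioned above.
print("avg shortest path:", nx.average_shortest_path_length(G))
```

On such a synthetic graph the average clustering coefficient tends to decrease with degree, mirroring the inverse relationship reported for the four sites.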
{ "cite_N": [ "@cite_7" ], "mid": [ "2115022330" ], "abstract": [ "Online social networking sites like Orkut, YouTube, and Flickr are among the most popular sites on the Internet. Users of these sites form a social network, which provides a powerful means of sharing, organizing, and finding content and contacts. The popularity of these sites provides an opportunity to study the characteristics of online social network graphs at large scale. Understanding these graphs is important, both to improve current systems and to design new applications of online social networks. This paper presents a large-scale measurement study and analysis of the structure of multiple online social networks. We examine data gathered from four popular online social networks: Flickr, YouTube, LiveJournal, and Orkut. We crawled the publicly accessible user links on each site, obtaining a large portion of each social network's graph. Our data set contains over 11.3 million users and 328 million links. We believe that this is the first study to examine multiple online social networks at scale. Our results confirm the power-law, small-world, and scale-free properties of online social networks. We observe that the indegree of user nodes tends to match the outdegree; that the networks contain a densely connected core of high-degree nodes; and that this core links small groups of strongly clustered, low-degree nodes at the fringes of the network. Finally, we discuss the implications of these structural properties for the design of social network based systems." ] }
1507.01082
1736664825
In the advent of the Internet, web-mediated social networking has become of great influence to Filipinos. Networking sites such as Friendster, YouTube, FaceBook and MySpace are among the most well known sites on the Internet. These sites provide a wide range of services to users from different parts of the world, such as connecting and finding people, as well as, sharing and organizing contents. The popularity and accessibility of these sites enable information to be available. These allow people to analyze and study the characteristics of the population of online social networks. In this study, we developed a computer program to analyze the structural dynamics of a locally popular social networking site: The Friendster Network. Understanding the structural dynamics of a virtual community has many implications, such as finding an improvement on the current networking system, among others. Based on our analysis, we found out that users of the site exhibit preferential attachment to users with high number of friends.
Another study investigated the topological characteristics of huge online social networking services, comparing the structures of three services: Cyworld, MySpace, and Orkut. The number of examined users was 100,000 for each site. Results showed that these networks follow a power-law degree distribution with a heavy tail. Based on the analysis of the degree distribution of Cyworld, the researchers found support for the claim that the diversity of user types greatly affects network characteristics such as the clustering coefficient, the evolution of the network size, the average path length and the network's diameter. The results of the analysis of MySpace and Orkut followed the patterns found in the different regions of the Cyworld network @cite_1 .
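The heavy-tailed, power-law degree distribution reported above can be checked on any graph with a rough log-log fit. The sketch below uses a synthetic preferential-attachment graph (the crawled data is not available here) and a crude least-squares estimate of the exponent; maximum-likelihood fitting is preferred in practice.

```python
import collections
import math

import networkx as nx

# Synthetic preferential-attachment graph standing in for a crawled network.
G = nx.barabasi_albert_graph(n=5000, m=2, seed=0)

# Empirical degree distribution: how many nodes have each degree.
counts = collections.Counter(d for _, d in G.degree())
xs = [math.log(d) for d in counts]
ys = [math.log(counts[d]) for d in counts]

# Least-squares slope of log(count) vs log(degree); for a power law
# p(k) ~ k^-gamma the slope approximates -gamma.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print(f"estimated exponent gamma ~ {-slope:.2f}")
```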
{ "cite_N": [ "@cite_1" ], "mid": [ "2121761994" ], "abstract": [ "Social networking services are a fast-growing business in the Internet. However, it is unknown if online relationships and their growth patterns are the same as in real-life social networks. In this paper, we compare the structures of three online social networking services: Cyworld, MySpace, and orkut, each with more than 10 million users, respectively. We have access to complete data of Cyworld's ilchon (friend) relationships and analyze its degree distribution, clustering property, degree correlation, and evolution over time. We also use Cyworld data to evaluate the validity of snowball sampling method, which we use to crawl and obtain partial network topologies of MySpace and orkut. Cyworld, the oldest of the three, demonstrates a changing scaling behavior over time in degree distribution. The latest Cyworld data's degree distribution exhibits a multi-scaling behavior, while those of MySpace and orkut have simple scaling behaviors with different exponents. Very interestingly, each of the two e ponents corresponds to the different segments in Cyworld's degree distribution. Certain online social networking services encourage online activities that cannot be easily copied in real life; we show that they deviate from close-knit online social networks which show a similar degree correlation pattern to real-life social networks." ] }
1507.01127
2125786288
We present , a system to learn embeddings for synsets and lexemes. It is flexible in that it can take any word embeddings as input and does not need an additional training corpus. The synset lexeme embeddings obtained live in the same vector space as the word embeddings. A sparse tensor formalization guarantees efficiency and parallelizability. We use WordNet as a lexical resource, but AutoExtend can be easily applied to other resources like Freebase. AutoExtend achieves state-of-the-art performance on word similarity and word sense disambiguation tasks.
We used the SCWS dataset for the word similarity task, as it provides a sentential context for each word pair. Other frequently used datasets are WordSim-353 @cite_5 and MEN @cite_3 .
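Evaluation on these datasets is conventionally done by correlating model similarities with the human ratings. The sketch below shows that procedure on a hypothetical three-pair mini dataset; the words, vectors, and gold scores are made up for illustration.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical toy embeddings and human ratings in the style of
# WordSim-353 / MEN: (word pair, gold similarity score).
emb = {
    "cat": np.array([0.9, 0.1, 0.0]),
    "dog": np.array([0.8, 0.2, 0.1]),
    "car": np.array([0.0, 0.9, 0.4]),
    "truck": np.array([0.1, 0.7, 0.6]),
}
pairs = [("cat", "dog", 9.0), ("car", "truck", 8.5), ("cat", "car", 1.5)]

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

model_scores = [cos(emb[w1], emb[w2]) for w1, w2, _ in pairs]
human_scores = [gold for _, _, gold in pairs]

# Spearman rank correlation is the metric reported on these benchmarks.
rho, _ = spearmanr(model_scores, human_scores)
print(rho)
```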
{ "cite_N": [ "@cite_5", "@cite_3" ], "mid": [ "2067438047", "2112184938" ], "abstract": [ "Keyword-based search engines are in widespread use today as a popular means for Web-based information retrieval. Although such systems seem deceptively simple, a considerable amount of skill is required in order to satisfy non-trivial information needs. This paper presents a new conceptual paradigm for performing search in context, that largely automates the search process, providing even non-professional users with highly relevant results. This paradigm is implemented in practice in the IntelliZap system, where search is initiated from a text query marked by the user in a document she views, and is guided by the text surrounding the marked query in that document (“the context”). The context-driven information retrieval process involves semantic keyword extraction and clustering to automatically generate new, augmented queries. The latter are submitted to a host of general and domain-specific search engines. Search results are then semantically reranked, using context. Experimental results testify that using context to guide search, effectively offers even inexperienced users an advanced search tool on the Web.", "Distributional semantic models derive computational representations of word meaning from the patterns of co-occurrence of words in text. Such models have been a success story of computational linguistics, being able to provide reliable estimates of semantic relatedness for the many semantic tasks requiring them. However, distributional models extract meaning information exclusively from text, which is an extremely impoverished basis compared to the rich perceptual sources that ground human semantic knowledge. 
We address the lack of perceptual grounding of distributional models by exploiting computer vision techniques that automatically identify discrete \"visual words\" in images, so that the distributional representation of a word can be extended to also encompass its co-occurrence with the visual words of images it is associated with. We propose a flexible architecture to integrate text- and image-based distributional information, and we show in a set of empirical tests that our integrated model is superior to the purely text-based approach, and it provides somewhat complementary semantic information with respect to the latter." ] }
1507.00825
2949793883
This paper discusses the effect of hubness in zero-shot learning, when ridge regression is used to find a mapping between the example space to the label space. Contrary to the existing approach, which attempts to find a mapping from the example space to the label space, we show that mapping labels into the example space is desirable to suppress the emergence of hubs in the subsequent nearest neighbor search step. Assuming a simple data model, we prove that the proposed approach indeed reduces hubness. This was verified empirically on the tasks of bilingual lexicon extraction and image labeling: hubness was reduced with both of these tasks and the accuracy was improved accordingly.
The first use of ridge regression in ZSL can be found in the work of Palatucci et al. @cite_16 . Ridge regression has since been one of the standard approaches to ZSL, especially for natural language processing tasks such as phrase generation @cite_21 and bilingual lexicon extraction @cite_21 @cite_3 @cite_22 . More recently, neural networks have been used to learn non-linear mappings @cite_25 @cite_24 . All of the regression-based methods listed above, including those based on neural networks, map source objects into the target space.
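A minimal sketch of this regression-based setup, with made-up toy data: closed-form ridge regression learns a linear map from the source space to the target space, and a zero-shot prediction is the nearest neighbor of the mapped vector. All dimensions and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired embeddings: X holds source-space vectors, Y the corresponding
# target-space vectors (e.g., two languages in bilingual lexicon extraction).
X = rng.normal(size=(200, 50))
W_true = rng.normal(size=(50, 40))
Y = X @ W_true + 0.01 * rng.normal(size=(200, 40))

# Closed-form ridge regression: W = (X^T X + lam I)^{-1} X^T Y.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

# Zero-shot prediction: map a source vector into the target space and
# take the nearest target item by cosine similarity.
query = X[0] @ W
sims = (Y @ query) / (np.linalg.norm(Y, axis=1) * np.linalg.norm(query))
print(int(np.argmax(sims)))  # expected: 0, the paired target row
```

It is this nearest-neighbor step in the target space where hubs emerge, which motivates the reverse mapping studied in this paper.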
{ "cite_N": [ "@cite_22", "@cite_21", "@cite_3", "@cite_24", "@cite_16", "@cite_25" ], "mid": [ "2126725946", "2251676919", "1542713999", "", "2150295085", "2123024445" ], "abstract": [ "Dictionaries and phrase tables are the basis of modern statistical machine translation systems. This paper develops a method that can automate the process of generating and extending dictionaries and phrase tables. Our method can translate missing word and phrase entries by learning language structures based on large monolingual data and mapping between languages from small bilingual data. It uses distributed representation of words and learns a linear mapping between vector spaces of languages. Despite its simplicity, our method is surprisingly effective: we can achieve almost 90 precision@5 for translation of words between English and Spanish. This method makes little assumption about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pairs.", "We introduce the problem of generation in distributional semantics: Given a distributional vector representing some meaning, how can we generate the phrase that best expresses that meaning? We motivate this novel challenge on theoretical and practical grounds and propose a simple data-driven approach to the estimation of generation functions. We test this in a monolingual scenario (paraphrase generation) as well as in a cross-lingual setting (translation by synthesizing adjectivenoun phrase vectors in English and generating the equivalent expressions in Italian).", "The zero-shot paradigm exploits vector-based word representations extracted from text corpora with unsupervised methods to learn general mapping functions from other feature spaces onto word space, where the words associated to the nearest neighbours of the mapped vectors are used as their linguistic labels. 
We show that the neighbourhoods of the mapped elements are strongly polluted by hubs, vectors that tend to be near a high proportion of items, pushing their correct labels down the neighbour list. After illustrating the problem empirically, we propose a simple method to correct it by taking the proximity distribution of potential neighbours across many mapped vectors into account. We show that this correction leads to consistent improvements in realistic zero-shot experiments in the cross-lingual, image labeling and image retrieval domains.", "", "We consider the problem of zero-shot learning, where the goal is to learn a classifier f : X → Y that must predict novel values of Y that were omitted from the training set. To achieve this, we define the notion of a semantic output code classifier (SOC) which utilizes a knowledge base of semantic properties of Y to extrapolate to novel classes. We provide a formalism for this type of classifier and study its theoretical properties in a PAC framework, showing conditions under which the classifier can accurately predict novel classes. As a case study, we build a SOC classifier for a neural decoding task and show that it can often predict words that people are thinking about from functional magnetic resonance images (fMRI) of their neural activity, even without training examples for those words.", "Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. 
We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18 across thousands of novel labels never seen by the visual model." ] }
1507.00825
2949793883
This paper discusses the effect of hubness in zero-shot learning, when ridge regression is used to find a mapping between the example space to the label space. Contrary to the existing approach, which attempts to find a mapping from the example space to the label space, we show that mapping labels into the example space is desirable to suppress the emergence of hubs in the subsequent nearest neighbor search step. Assuming a simple data model, we prove that the proposed approach indeed reduces hubness. This was verified empirically on the tasks of bilingual lexicon extraction and image labeling: hubness was reduced with both of these tasks and the accuracy was improved accordingly.
ZSL can also be formulated as a problem of canonical correlation analysis (CCA). Hardoon et al. @cite_6 used CCA and kernelized CCA for image labeling. Lazaridou et al. @cite_9 compared ridge regression, CCA, singular value decomposition, and neural networks on image labeling. In our experiments (Sect. ), we use CCA as one of the baseline methods for comparison.
{ "cite_N": [ "@cite_9", "@cite_6" ], "mid": [ "2252238675", "2100235303" ], "abstract": [ "Following up on recent work on establishing a mapping between vector-based semantic embeddings of words and the visual representations of the corresponding objects from natural images, we first present a simple approach to cross-modal vector-based semantics for the task of zero-shot learning, in which an image of a previously unseen object is mapped to a linguistic representation denoting its word. We then introduce fast mapping, a challenging and more cognitively plausible variant of the zero-shot task, in which the learner is exposed to new objects and the corresponding words in very limited linguistic contexts. By combining prior linguistic and visual knowledge acquired about words and their objects, as well as exploiting the limited new evidence available, the learner must learn to associate new objects with words. Our results on this task pave the way to realistic simulations of how children or robots could use existing knowledge to bootstrap grounded semantic knowledge about new concepts.", "We present a general method using kernel canonical correlation analysis to learn a semantic representation to web images and their associated text. The semantic space provides a common representation and enables a comparison between the text and images. In the experiments, we look at two approaches of retrieving images based on only their content from a text query. We compare orthogonalization approaches against a standard cross-representation retrieval technique known as the generalized vector space model." ] }
1507.00956
2124664518
Approximately ten percent of newborns require some help with their breathing at birth. About one percent require extensive assistance at birth which needs to be administered by trained personnel. Neonatal resuscitation is taught through a simulation based training program in North America. Such a training methodology is cost and resource intensive which reduces its availability thereby adversely impacting skill acquisition and retention. We implement and present RETAIN (REsuscitation TrAIning for Neonatal residents) -- a video game to complement the existing neonatal training. Being a video game, RETAIN runs on ubiquitous off-the-shelf hardware and can be easily accessed by trainees almost anywhere at their convenience. Thus we expect RETAIN to help trainees retain and retrain their resuscitation skills. We also report on how RETAIN was developed by an interdisciplinary team of six undergraduate students as a three-month term project for a second year university course.
Simulation, which originated in aviation and spaceflight training programs, was adopted by anesthesiologists in the 1960s, which eventually led to the development of simulation-based medical education @cite_13 @cite_20 . Medical education and, in particular, surgical training have been an active area for the development of both simulations and serious games @cite_11 @cite_1 . In the following sections, we review games relevant to the current work.
{ "cite_N": [ "@cite_1", "@cite_13", "@cite_20", "@cite_11" ], "mid": [ "179374799", "2123356291", "1990539893", "2023126559" ], "abstract": [ "The rising popularity of video games has seen a recent push towards the application of serious games to medical education and training. With their ability to engage players learners for a specific purpose, serious games provide an opportunity to acquire cognitive and technical surgical skills outside the operating room thereby optimizing operating room exposure with live patients. However, before the application of serious games for surgical education and training becomes more widespread, there are a number of open questions and issues that must be addressed including the relationship between fidelity, multi-modal cue interaction, immersion, and knowledge transfer and retention. In this chapter we begin with a brief overview of alternative medical surgical educational methods, followed by a discussion of serious games and their application to surgical education, fidelity, multi-modal cue interaction and their role within a virtual simulations serious games. The chapter ends with a description of the serious games surgical cognitive education and training framework (SCETF) and concluding remarks.", "The educational potential of a computer-controlled patient simulator was tested by the University of Southern California School of Medicine. The results of the experiment suggest unequivocally that there is a twofold advantage to the use of such a simulator in training anesthesiology residents in the skill of endotracheal intubation: (a) residents achieve proficiency levels in a smaller number of elapsed days of training, thus effecting a saving of time in the training of personnel, and (b) residents achieve a proficiency level in a smaller number of trials in the operating room, thus posing significantly less threat to patient safety. The small number of subjects in the study and the large within-group variability were responsible for a lack of statistical significance in 4 of 6 of the analyses performed; however, all differences were substantial and in the hypothesized direction. Thus, despite the narrowly circumscribed tasks to be learned by the experimental subjects, the findings suggest that the use of simulation devices should be considered in planning for future education and training not only in medicine but in other health care professions as well.", "Human error is believed to contribute to the majority of negative anesthesia outcomes. Because retrospective analysis of critical incidents has several shortcomings and prospective studies are limited by the low frequency of critical incidents, an anesthesia simulator was used to evaluate the manage", "Background: The application of digital games for training medical professionals is on the rise. So-called ‘serious’ games form training tools that provide a challenging simulated environment, ideal for future surgical training. Ultimately, serious games are directed at reducing medical error and subsequent healthcare costs. The aim was to review current serious games for training medical professionals and to evaluate the validity testing of such games. Methods: PubMed, Embase, the Cochrane Database of Systematic Reviews, PsychInfo and CINAHL were searched using predefined inclusion criteria for available studies up to April 2012. The primary endpoint was validation according to current criteria. Results: A total of 25 articles were identified, describing a total of 30 serious games. The games were divided into two categories: those developed for specific educational purposes (17) and commercial games also useful for developing skills relevant to medical personnel (13). Pooling of data was not performed owing to the heterogeneity of study designs and serious games. Six serious games were identified that had a process of validation. Of these six, three games were developed for team training in critical care and triage, and three were commercially available games applied to train laparoscopic psychomotor skills. None of the serious games had completed a full validation process for the purpose of use. Conclusion: Blended and interactive learning by means of serious games may be applied to train both technical and non-technical skills relevant to the surgical field. Games developed or used for this purpose need validation before integration into surgical teaching curricula." ] }
1507.00956
2124664518
Approximately ten percent of newborns require some help with their breathing at birth. About one percent require extensive assistance at birth which needs to be administered by trained personnel. Neonatal resuscitation is taught through a simulation based training program in North America. Such a training methodology is cost and resource intensive which reduces its availability thereby adversely impacting skill acquisition and retention. We implement and present RETAIN (REsuscitation TrAIning for Neonatal residents) -- a video game to complement the existing neonatal training. Being a video game, RETAIN runs on ubiquitous off-the-shelf hardware and can be easily accessed by trainees almost anywhere at their convenience. Thus we expect RETAIN to help trainees retain and retrain their resuscitation skills. We also report on how RETAIN was developed by an interdisciplinary team of six undergraduate students as a three-month term project for a second year university course.
e-Baby @cite_7 is a serious game in which players perform clinical assessment of oxygenation on preterm infants in a virtual isolette. The infants present a range of respiratory impairments from mild to serious. Players are provided with the patient history and must select appropriate tools for clinical assessment. The assessment is made by responding to a series of questions in a multiple-choice format. The questions drive the interaction and serve as an assessment of the trainee's knowledge. The game was evaluated by nursing students who had free access to the simulation, and it was rated highly for its ease of use and its overall efficacy for learning. The goal of e-Baby is the acquisition of procedural knowledge pertaining to clinical assessment. Our goal is to create a game that trains medical personnel on the application of pre-existing knowledge of clinical intervention (resuscitation) in stressful conditions.
{ "cite_N": [ "@cite_7" ], "mid": [ "1664203118" ], "abstract": [ "Objective: to evaluate students opinion regarding e-Baby educational technology. Methodology: exploratory descriptive study in which participated a sample composed of 14 nursing Portuguese students that used e-Baby digital educational technology in an extracurricular course. To achieve the aim of the study, the data collection was realized through an opinion instrument in Likert scale including the possibility of commentaries by students. Is was also collected data of participants’ characterization. Results: students made very satisfactory evaluations regarding the game e-Baby, varying since usability acceptation through suggestions of expansion of the game to other nursing themes. Conclusion: serious game e-Baby can be considered a didactic innovation and motivator tool of learning. Besides, it demonstrates have adequate interface in design and educative function aspects, evocating intense interaction between user and computational tool." ] }
1507.00956
2124664518
Approximately ten percent of newborns require some help with their breathing at birth. About one percent require extensive assistance at birth which needs to be administered by trained personnel. Neonatal resuscitation is taught through a simulation based training program in North America. Such a training methodology is cost and resource intensive which reduces its availability thereby adversely impacting skill acquisition and retention. We implement and present RETAIN (REsuscitation TrAIning for Neonatal residents) -- a video game to complement the existing neonatal training. Being a video game, RETAIN runs on ubiquitous off-the-shelf hardware and can be easily accessed by trainees almost anywhere at their convenience. Thus we expect RETAIN to help trainees retain and retrain their resuscitation skills. We also report on how RETAIN was developed by an interdisciplinary team of six undergraduate students as a three-month term project for a second year university course.
LISSA @cite_24 @cite_31 @cite_30 is a serious game that teaches cardiopulmonary resuscitation (CPR) and the use of an automated external defibrillator. Players must perform CPR procedures in the correct order within a specified time limit. The system supports play and authoring modes. Emergency scenarios are authored from a predefined set of elements and can be complemented with expositional material (e.g., a demonstration of how to apply CPR). Scenarios are modeled as finite state machines corresponding to a CPR flowchart. LISSA was evaluated with @math learners with no background in CPR and four CPR instructors. Although it was found to lead to lower learning outcomes than conventional instruction alone, LISSA was shown to have higher efficacy when used to complement mannequin-based instruction. Although relevant to our problem, LISSA differs in a number of key aspects: it targets adult cardiopulmonary resuscitation rather than neonatal resuscitation, and is intended for a general audience rather than clinical trainees. LISSA also aims to teach motor skills via the use of a Kinect @cite_31 , which is beyond the scope of our problem (decision-making skills).
{ "cite_N": [ "@cite_24", "@cite_31", "@cite_30" ], "mid": [ "2026645181", "120105342", "2056774546" ], "abstract": [ "Abstract Maintaining and restoring health is a basic aspect of well being. On the other hand, serious games is an emerging technology growing in importance for specialized training, taking advantage of 3D games and game engines in order to improve the realistic experience of users. Thus, according to the advancement of technology and the desire to achieve good health using an interesting and enjoyable way, different serious games for health have been proposed during the last few years. In this paper, we present the core process of serious games and explain their functionalities. Then, we survey more than one hundred serious games for health and propose new classifications in four different aspects. Finally, we use fifteen relevant characteristics to classify all the surveyed games and present them with plenty of graphs and charts with corresponding discussion.", "Cardiopulmonary Resuscitation (CPR) training is a crucial procedure to reduce the decease from cardiac arrest in pre–hospital situation. Due to the importance of CPR its knowledge is required not only by professions prescribing CPR certification such as fire fighter, life guard, police or daycare, but also by laypersons. To learn CPR skill, practice is highly recommended and 3D simulators with effective interaction tools are one of the best options to practice CPR anywhere and anytime. In this paper, we present a pilot study in developing a Kinect-based system focusing on two key parameters of the CPR procedure: the chest compression rate and correct arm pose, implemented in our existing CPR training system, LIfe Support Simulation Application (LISSA). Our system falls into the category of markerless tracking using commercial depth–cameras, making the proposed method flexible and economic. We also present a comparison with different CPR feedback systems with regard to the chest compression rate and correct arm pose.", "" ] }
1507.00956
2124664518
Approximately ten percent of newborns require some help with their breathing at birth. About one percent require extensive assistance at birth which needs to be administered by trained personnel. Neonatal resuscitation is taught through a simulation based training program in North America. Such a training methodology is cost and resource intensive which reduces its availability thereby adversely impacting skill acquisition and retention. We implement and present RETAIN (REsuscitation TrAIning for Neonatal residents) -- a video game to complement the existing neonatal training. Being a video game, RETAIN runs on ubiquitous off-the-shelf hardware and can be easily accessed by trainees almost anywhere at their convenience. Thus we expect RETAIN to help trainees retain and retrain their resuscitation skills. We also report on how RETAIN was developed by an interdisciplinary team of six undergraduate students as a three-month term project for a second year university course.
Triage Trainer @cite_27 is a serious game designed to teach major incident triage to clinical professionals. Developed to be played on a desktop or a laptop computer, the game allows its players to practice triage (prioritizing which patients to treat when) in a realistic immersive 3D environment. Players navigate and interact with casualties using the mouse and keyboard. Assessment is done by clicking on a series of icons representing various examinations (e.g., breathing check, pulse rate check) and manipulations (e.g., open airway, tag a casualty with triage rating). The focus of the game is on rapid execution of process-based knowledge. The authors found that participants who played the game had significantly greater accuracy on a triage task than did participants who took part in the control activity (card sort). Although it addresses clinical decision-making under pressure, Triage Trainer deals with the domain of mass casualty triage, not neonatal resuscitation.
{ "cite_N": [ "@cite_27" ], "mid": [ "2130159928" ], "abstract": [ "Abstract Objective By exploiting video games technology, serious games strive to deliver affordable, accessible and usable interactive virtual worlds, supporting applications in training, education, marketing and design. The aim of the present study was to evaluate the effectiveness of such a serious game in the teaching of major incident triage by comparing it with traditional training methods. Design Pragmatic controlled trial. Method During Major Incident Medical Management and Support Courses, 91 learners were randomly distributed into one of two training groups: 44 participants practiced triage sieve protocol using a card-sort exercise, whilst the remaining 47 participants used a serious game. Following the training sessions, each participant undertook an evaluation exercise, whereby they were required to triage eight casualties in a simulated live exercise. Performance was assessed in terms of tagging accuracy (assigning the correct triage tag to the casualty), step accuracy (following correct procedure) and time taken to triage all casualties. Additionally, the usability of both the card-sort exercise and video game were measured using a questionnaire. Results Tagging accuracy by participants who underwent the serious game training was significantly higher than those who undertook the card-sort exercise [Chi2=13.126, p =0.02]. Step accuracy was also higher in the serious game group but only for the numbers of participants that followed correct procedure when triaging all eight casualties [Chi2=5.45, p =0.0196]. There was no significant difference in time to triage all casualties (card-sort=435±74s vs video game=456±62s, p =0.155). Conclusion Serious game technologies offer the potential to enhance learning and improve subsequent performance when compared to traditional educational methods." ] }
1507.00662
2952768013
Given an n-vertex digraph D = (V, A) the Max-k-Ordering problem is to compute a labeling @math maximizing the number of forward edges, i.e. edges (u,v) such that @math (u) 0 @math @math n^ (1 k ) @math S_v Z ^+ @math 4 2 ( 2 +1) 2.344 @math 2 2 2.828 @math k [n]$.
For the vertex deletion version of , @cite_10 gave linear-time and quadratic-time algorithms for rooted trees and series-parallel graphs, respectively. The problem reduces to vertex cover on @math -uniform hypergraphs for any constant @math , thereby admitting a @math -approximation; a matching @math -inapproximability result assuming the UGC was obtained by Svensson @cite_3 .
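The approximation guarantee mentioned above follows from a folklore greedy argument for vertex cover on k-uniform hypergraphs; a minimal sketch (the function name and toy instance are illustrative, not from the cited works):

```python
def hypergraph_vertex_cover(edges):
    # Greedy k-approximation for unweighted vertex cover on a
    # k-uniform hypergraph: repeatedly take a hyperedge that is not
    # yet covered and add all of its k vertices to the cover.
    # The chosen edges are pairwise disjoint, and any cover must pick
    # at least one vertex from each of them, so the returned cover is
    # at most k times larger than an optimal one.
    cover = set()
    for edge in edges:
        if not cover.intersection(edge):
            cover.update(edge)
    return cover

# Toy 3-uniform instance: two disjoint edges get chosen, one is skipped.
edges = [(1, 2, 3), (3, 4, 5), (6, 7, 8)]
cover = hypergraph_vertex_cover(edges)
```

For k = 2 (ordinary graphs) this is exactly the classical maximal-matching 2-approximation for vertex cover.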
{ "cite_N": [ "@cite_10", "@cite_3" ], "mid": [ "2079297844", "2950822364" ], "abstract": [ "Examines the vertex deletion problem for weighted directed acyclic graphs (WDAGs). The objective is to delete the fewest number of vertices so that the resulting WDAG has no path of length > spl delta . Several simplified versions of this problem are shown to be NP-hard. However, the problem is solved in linear time when the WDAG is a rooted tree, and in quadratic time when the WDAG is a series-parallel graph. >", "Assuming the Unique Games Conjecture, we show strong inapproximability results for two natural vertex deletion problems on directed graphs: for any integer @math and arbitrary small @math , the Feedback Vertex Set problem and the DAG Vertex Deletion problem are inapproximable within a factor @math even on graphs where the vertices can be almost partitioned into @math solutions. This gives a more structured and therefore stronger UGC-based hardness result for the Feedback Vertex Set problem that is also simpler (albeit using the \"It Ain't Over Till It's Over\" theorem) than the previous hardness result. In comparison to the classical Feedback Vertex Set problem, the DAG Vertex Deletion problem has received little attention and, although we think it is a natural and interesting problem, the main motivation for our inapproximability result stems from its relationship with the classical Discrete Time-Cost Tradeoff Problem. More specifically, our results imply that the deadline version is NP-hard to approximate within any constant assuming the Unique Games Conjecture. This explains the difficulty in obtaining good approximation algorithms for that problem and further motivates previous alternative approaches such as bicriteria approximations." ] }
1507.01053
1923211482
Whereas deep neural networks were first mostly used for classification tasks, they are rapidly expanding in the realm of structured output problems, where the observed target is composed of multiple random variables that have a rich joint distribution, given the input. In this paper we focus on the case where the input also has a rich structure and the input and output structures are somehow related. We describe systems that learn to attend to different places in the input, for each element of the output, for a variety of tasks: machine translation, image caption generation, video clip description, and speech recognition . All these systems are based on a shared set of building blocks: gated recurrent neural networks and convolutional neural networks , along with trained attention mechanisms. We report on experimental results with these systems, showing impressively good performance and the advantage of the attention mechanism.
In @cite_59 , the location-based attention mechanism was successfully used to model and generate handwritten text. In @cite_9 @cite_71 , a neural network is designed to use the location-based attention mechanism to recognize objects in an image. Furthermore, a generative model of images was proposed in @cite_28 , which iteratively reads and writes portions of the whole image using the location-based attention mechanism. Earlier works utilizing the attention mechanism, both content-based and location-based, for object recognition and tracking can be found in @cite_73 @cite_37 @cite_4 .
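As a rough illustration of the content-based variant referred to above: a query is scored against each candidate location and a softmax turns the scores into weights. Dot-product scoring is an assumption here; the systems cited learn their own scoring functions.

```python
from math import exp

def content_attention(query, keys, values):
    # Content-based attention: score each location by the dot product
    # of its key with the query, normalize with a softmax, and return
    # the weighted sum of the value vectors together with the weights.
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)                      # subtract max for stability
    exps = [exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    context = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
    return context, weights

keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0], [20.0]]
context, weights = content_attention([2.0, 0.0], keys, values)
```

The query matches the first key, so the context vector is pulled toward the first value while remaining a soft mixture of both.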
{ "cite_N": [ "@cite_37", "@cite_4", "@cite_28", "@cite_9", "@cite_59", "@cite_71", "@cite_73" ], "mid": [ "2154071538", "1975998725", "1850742715", "1484210532", "1810943226", "2147527908", "2141399712" ], "abstract": [ "We discuss an attentional model for simultaneous object tracking and recognition that is driven by gaze data. Motivated by theories of perception, the model consists of two interacting pathways, identity and control, intended to mirror the what and where pathways in neuroscience models. The identity pathway models object appearance and performs classification using deep (factored)-restricted Boltzmann machines. At each point in time, the observations consist of foveated images, with decaying resolution toward the periphery of the gaze. The control pathway models the location, orientation, scale, and speed of the attended object. The posterior distribution of these states is estimated with particle filtering. Deeper in the control pathway, we encounter an attentional mechanism that learns to select gazes so as to minimize tracking uncertainty. Unlike in our previous work, we introduce gaze selection strategies that operate in the presence of partial information and on a continuous action space. We show that a straightforward extension of the existing approach to the partial information setting results in poor performance, and we propose an alternative method based on modeling the reward surface as a gaussian process. This approach gives good performance in the presence of partial information and allows us to expand the action space from a small, discrete set of fixation points to a continuous domain.", "Tasks that require the synchronization of perception and action are incredibly hard and pose a fundamental challenge to the fields of machine learning and computer vision. One important example of such a task is the problem of performing visual recognition through a sequence of controllable fixations; this requires jointly deciding what inference to perform from fixations and where to perform these fixations. While these two problems are challenging when addressed separately, they become even more formidable if solved jointly. Recently, a restricted Boltzmann machine (RBM) model was proposed that could learn meaningful fixation policies and achieve good recognition performance. In this paper, we propose an alternative approach based on a feed-forward, auto-regressive architecture, which permits exact calculation of training gradients (given the fixation sequence), unlike for the RBM model. On a problem of facial expression recognition, we demonstrate the improvement gained by this alternative approach. Additionally, we investigate several variations of the model in order to shed some light on successful strategies for fixation-based recognition.", "This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.", "We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.", "This paper shows how Long Short-term Memory recurrent neural networks can be used to generate complex sequences with long-range structure, simply by predicting one data point at a time. The approach is demonstrated for text (where the data are discrete) and online handwriting (where the data are real-valued). It is then extended to handwriting synthesis by allowing the network to condition its predictions on a text sequence. The resulting system is able to generate highly realistic cursive handwriting in a wide variety of styles.", "Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.", "We describe a model based on a Boltzmann machine with third-order connections that can learn how to accumulate information about a shape over several fixations. The model uses a retina that only has enough high resolution pixels to cover a small area of the image, so it must decide on a sequence of fixations and it must combine the \"glimpse\" at each fixation with the location of the fixation before integrating the information with information from other glimpses of the same object. We evaluate this model on a synthetic dataset and two image classification datasets, showing that it can perform at least as well as a model trained on whole images." ] }
1507.01053
1923211482
Whereas deep neural networks were first mostly used for classification tasks, they are rapidly expanding in the realm of structured output problems, where the observed target is composed of multiple random variables that have a rich joint distribution, given the input. In this paper we focus on the case where the input also has a rich structure and the input and output structures are somehow related. We describe systems that learn to attend to different places in the input, for each element of the output, for a variety of tasks: machine translation, image caption generation, video clip description, and speech recognition . All these systems are based on a shared set of building blocks: gated recurrent neural networks and convolutional neural networks , along with trained attention mechanisms. We report on experimental results with these systems, showing impressively good performance and the advantage of the attention mechanism.
The attention-based mechanism described in this paper, or its variants, may be applied to inputs other than multimedia. For instance, in @cite_40 , a neural Turing machine was proposed, which implements a memory controller using both the content-based and location-based attention mechanisms. Similarly, the authors of @cite_52 used the content-based attention mechanism with a hard decision (see, e.g., Eq. ) to find relevant memory contents, which was further extended to the weakly supervised memory network in @cite_56 in Sec. .
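The "hard decision" lookup referred to above can be sketched as follows: instead of a softmax-weighted average over memory slots, the single best-matching slot is selected. All names and the toy memory are illustrative, not the cited papers' implementations.

```python
def hard_memory_lookup(query, memory_keys, memory_values):
    # Hard content-based addressing: score every memory slot against
    # the query by dot product and return the value stored in the
    # single best-matching slot (an argmax rather than a soft sum).
    best, best_score = 0, float("-inf")
    for i, key in enumerate(memory_keys):
        score = sum(q * k for q, k in zip(query, key))
        if score > best_score:
            best, best_score = i, score
    return memory_values[best]

memory_keys = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
memory_values = ["slot-A", "slot-B", "slot-C"]
answer = hard_memory_lookup([0.0, 2.0], memory_keys, memory_values)
```

Because the argmax is not differentiable, systems built on this idea either supervise the selected slot directly or relax it to a softmax during training.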
{ "cite_N": [ "@cite_40", "@cite_52", "@cite_56" ], "mid": [ "2167839676", "2951008357", "2209647458" ], "abstract": [ "We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-toend, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples.", "We introduce a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network (, 2015) but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates comparable performance to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.", "" ] }
1507.00436
1460713219
This paper proposes an online transfer framework to capture the interaction among agents and shows that current transfer learning in reinforcement learning is a special case of online transfer. Furthermore, this paper re-characterizes existing agents-teaching-agents methods as online transfer and analyzes one such teaching method in three ways. First, the convergence of Q-learning and Sarsa with tabular representation with a finite budget is proven. Second, the convergence of Q-learning and Sarsa with linear function approximation is established. Third, we show the asymptotic performance cannot be hurt through teaching. Additionally, all theoretical results are empirically validated.
Transfer learning in the reinforcement learning domain has been studied recently @cite_22 @cite_19 . Lazaric introduces a transfer learning framework, which inspires us to develop the online transfer learning framework, and classifies transfer learning in reinforcement learning into three categories: instance transfer, representation transfer, and parameter transfer @cite_19 . The action advice model is a method of instance transfer, owing to its explicit action advice (i.e., sample transfer). Lazaric also proposed an instance-transfer method that selectively transfers samples on the basis of the similarity between source and target tasks @cite_20 .
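The action-advice teaching model discussed above can be sketched with tabular Q-learning on a toy chain MDP, where a teacher overrides the student's action while a finite advice budget lasts. The environment, hyperparameters, and the assumed perfect teacher are all illustrative, not the paper's experimental setup.

```python
import random

random.seed(0)
N = 5  # chain MDP: states 0..N-1, reward 1 on reaching state N-1

def step(s, a):
    # action 1 moves right, action 0 moves left
    s2 = min(s + 1, N - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == N - 1 else 0.0)

def teacher(s):
    return 1  # assumed perfect teacher: always move toward the goal

def q_learning_with_advice(budget, episodes=300, alpha=0.5,
                           gamma=0.9, eps=0.2):
    # Tabular Q-learning; for the first `budget` decisions the
    # teacher's advised action replaces the student's own choice,
    # after which the student follows epsilon-greedy on its Q-table.
    Q = [[0.0, 0.0] for _ in range(N)]
    remaining = budget
    for _ in range(episodes):
        s = 0
        while s != N - 1:
            if remaining > 0:
                a = teacher(s)
                remaining -= 1
            elif random.random() < eps:
                a = random.randrange(2)            # explore
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1  # exploit
            s2, r = step(s, a)
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning_with_advice(budget=50)
```

The learning rule itself is unchanged by the advice; the teacher only influences which transitions the student experiences, which is why convergence arguments for plain Q-learning can carry over to the finite-budget setting.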
{ "cite_N": [ "@cite_19", "@cite_22", "@cite_20" ], "mid": [ "158722652", "2097381042", "2004030284" ], "abstract": [ "Transfer in reinforcement learning is a novel research area that focuses on the development of methods to transfer knowledge from a set of source tasks to a target task. Whenever the tasks are similar, the transferred knowledge can be used by a learning algorithm to solve the target task and significantly improve its performance (e.g., by reducing the number of samples needed to achieve a nearly optimal performance). In this chapter we provide a formalization of the general transfer problem, we identify the main settings which have been investigated so far, and we review the most important approaches to transfer in reinforcement learning.", "The reinforcement learning paradigm is a popular way to address problems that have only limited environmental feedback, rather than correctly labeled examples, as is common in other machine learning contexts. While significant progress has been made to improve learning in a single task, the idea of transfer learning has only recently been applied to reinforcement learning tasks. The core idea of transfer is that experience gained in learning to perform one task can help improve learning performance in a related, but different, task. In this article we present a framework that classifies transfer learning methods in terms of their capabilities and goals, and then use it to survey the existing literature, as well as to suggest future directions for transfer learning work.", "The main objective of transfer in reinforcement learning is to reduce the complexity of learning the solution of a target task by effectively reusing the knowledge retained from solving a set of source tasks. In this paper, we introduce a novel algorithm that transfers samples (i.e., tuples 〈s, a, s', r〉) from source to target tasks. Under the assumption that tasks have similar transition models and reward functions, we propose a method to select samples from the source tasks that are mostly similar to the target task, and, then, to use them as input for batch reinforcement-learning algorithms. As a result, the number of samples an agent needs to collect from the target task to learn its solution is reduced. We empirically show that, following the proposed approach, the transfer of samples is effective in reducing the learning complexity, even when some source tasks are significantly different from the target task." ] }
1507.00436
1460713219
This paper proposes an online transfer framework to capture the interaction among agents and shows that current transfer learning in reinforcement learning is a special case of online transfer. Furthermore, this paper re-characterizes existing agents-teaching-agents methods as online transfer and analyzes one such teaching method in three ways. First, the convergence of Q-learning and Sarsa with tabular representation with a finite budget is proven. Second, the convergence of Q-learning and Sarsa with linear function approximation is established. Third, we show the asymptotic performance cannot be hurt through teaching. Additionally, all theoretical results are empirically validated.
Zhao and Hoi propose an online transfer learning framework in supervised learning @cite_13 , aiming to transfer useful knowledge from a source domain to an online learning task on a target domain. Their framework addresses two settings: in the first, the source tasks share the same domain as the target tasks; in the second, the source and target domains differ.
{ "cite_N": [ "@cite_13" ], "mid": [ "43161092" ], "abstract": [ "In this paper, we investigate a new machine learning framework called Online Transfer Learning (OTL) that aims to transfer knowledge from some source domain to an online learning task on a target domain. We do not assume the target data follows the same class or generative distribution as the source data, and our key motivation is to improve a supervised online learning task in a target domain by exploiting the knowledge that had been learned from large amount of training data in source domains. OTL is in general challenging since data in both domains not only can be different in their class distributions but can be also different in their feature representations. As a first attempt to this problem, we propose techniques to address two kinds of OTL tasks: one is to perform OTL in a homogeneous domain, and the other is to perform OTL across heterogeneous domains. We show the mistake bounds of the proposed OTL algorithms, and empirically examine their performance on several challenging OTL tasks. Encouraging results validate the efficacy of our techniques." ] }